The SNR Kes 17-ISM interaction: a fresh view from radio and $\gamma$ rays

This paper presents a comprehensive analysis of the Galactic SNR Kes 17 (G304.6+0.1), with focus on its radio synchrotron emission, its environs, and the factors contributing to the observed gamma rays. The integrated radio continuum spectrum, obtained here for the first time over the 88 to 8800 MHz range, yields an index alpha = -0.488 +/- 0.023 (S_nu $\propto$ nu^alpha), indicative of a linear particle acceleration process at the shock front. Accounting for the SNR radio shell size, the distribution of atomic hydrogen (n_H ~ 10 cm^-3), and assuming the SNR is in the Sedov-Taylor stage of its evolution, we estimate Kes 17 to be roughly 11 kyr old. From 12CO and 13CO (J=1-0) emission-line data, used as a proxy for molecular hydrogen, we provide the first evidence that the eastern shell of Kes 17 is engulfing a molecular enhancement with a mass of 4.2 x 10^4 M_sun and a density n ~ 300 cm^-3. Towards the western boundary of Kes 17 there are no CO signatures above 3 sigma, even though previously reported infrared observations revealed shocked molecular gas at that location. This suggests the existence of a CO-dark interacting molecular gas, a phenomenon also recorded in other Galactic SNRs (e.g. CTB 37A and RX J1713.7-3946). Additionally, by analysing ~14.5 yr of data from Fermi-LAT, we determined a power-law photon index in the 0.3-300 GeV range of Gamma = 2.39 +/- 0.04^+0.063_-0.114 (+/-stat +/-syst), in agreement with prior studies. The energy flux turns out to be (2.98 +/- 0.14) x 10^-11 erg cm^-2 s^-1, implying a luminosity of (2.22 +/- 0.45) x 10^35 erg s^-1 at ~8 kpc. Finally, we successfully modelled the multiwavelength SED by incorporating the improved radio synchrotron spectrum and the new gamma-ray measurements. Our analysis indicates that the observed GeV flux most likely originates from the interaction of Kes 17 with the western ''dark'' CO zone, with a proton density n_p ~ 400 cm^-3.

Introduction

Supernova remnants (SNRs) are captivating objects that have a long-lasting impact on the Galactic ecosystem, leaving distinct imprints that can be observed across the entire electromagnetic spectrum. This paper, focused on the source Kes 17 (G304.6+0.1), is part of a series of articles by the author team dedicated to investigating the association between radio and γ-ray emission in remnants of stellar explosions. Previous studies in this series were devoted to G338.3−0.0 (Supan et al. 2016), Kes 41 (Supan et al. 2018a,b), and G46.8−0.0 (Supan et al. 2022), all of them middle-aged γ-ray emitting SNRs interacting with their ambient medium. The initial observations of Kes 17 at radio wavelengths were conducted in the 1970s at 408 and 5000 MHz using the Molonglo and Parkes single-dish telescopes (Goss & Shaver 1970; Shaver & Goss 1970a; Milne & Dickel 1975). The first distance estimate for the remnant was approximately 6 kpc, derived using the uncertain Σ-D relation (Shaver & Goss 1970b). Later on, Caswell et al. (1975) established a lower limit of 9.7 kpc based on absorption features in the low-resolution neutral hydrogen (H i) spectrum of gas clouds along the line of sight. This lower limit remained unchanged for nearly five decades, until a recent reanalysis by Ranasinghe & Leahy (2022), which yielded a kinematic distance of 7.9 ± 0.6 kpc to the remnant. Kes 17 was extensively studied in the X-ray domain using data obtained with the XMM-Newton, Suzaku, and ASCA satellites (Combi et al. 2010; Gök & Sezer 2012; Gelfand et al. 2013; Pannuti et al. 2014; Washino et al. 2016).
According to the observed properties of the X-ray emitting gas, Combi et al. (2010) proposed that the source belongs to the mixed-morphology (MM) type of SNRs, characterised by a shell-like morphology at radio wavelengths and a filled-centre composition in X rays. Additionally, they suggested the presence of a nonthermal component in the northern, central, and southern regions of the SNR shock front. However, subsequent studies with improved statistics have raised doubts about this possibility and concluded that the X-ray spectrum is dominated by thermal emission (Gök & Sezer 2012; Gelfand et al. 2013; Washino et al. 2016). At the high-energy end of the spectrum, Kes 17 has been linked to a GeV source detected by the Fermi Large Area Telescope (LAT) (Wu et al. 2011; Gelfand et al. 2013). No counterpart at TeV energies has been reported so far. Concerning the SNR environment, the first hint of interaction came from low-resolution observations of the 1720-MHz maser line of hydroxyl (OH) (Frail et al. 1996). A significant breakthrough in linking Kes 17 to the interstellar matter occurred through near-infrared (near-IR) spectroscopic studies, which were crucial in firmly determining the location and characteristics of the shocked H 2 gas (Lee et al. 2011).

This paper is organised as follows: In Sect. 2 we present the first analysis of the radio continuum spectrum for Kes 17. Adopting a standard evolution model, we also derive the SNR's age by means of the radio size of the expanding forward shock and 21 cm spectral-line observations of the H i gas. Sect. 3 focuses on investigating the morphology and kinematics of the molecular gas emitting in CO lines. This represents the first study of the CO gas in the region of Kes 17. In Sect. 4, we provide an updated analysis of the Fermi-LAT data covering 14.5 yr. We also investigate the emission mechanism responsible for the high-energy flux through a broadband modelling that incorporates the revisited measurements at radio and GeV γ-ray energies. Our findings are summarised in Sect. 5.

Morphology and spectrum

The radio remnant Kes 17 is characterised by non-uniform emission from a complete, albeit irregular, shell structure with an average size of ∼ 7′. This can be observed in the 843 MHz image from the Molonglo Sky Survey (SUMSS, HPBW = 45′′ × 50′′) 1 included in the inset of Fig. 2a. The surface brightness of the remnant is ∼ 4.3 × 10 −20 W m −2 Hz −1 at 843 MHz, while that of the brightest elongated feature (∼4′.2 × 1′.2 in size) along the southern periphery is ∼ 0.11 × 10 −20 W m −2 Hz −1. An arc of enhanced synchrotron emission is noticed in the northwest region of the remnant, near 13 h 05 m 30 s, −62° 41′ 00′′. This arc has a size of approximately 1′.3 × 2′.5 and coincides with a distinctive bend in the shock front of Kes 17. Bright continuum and line emission in the IR wavebands, as reported by Lee et al. (2011), accompanies the radio synchrotron emission along this edge of Kes 17. Another noteworthy feature is an indentation towards the east of the radio shell, at approximately 13 h 06 m 05 s, −62° 41′ 10′′, possibly indicating that the SNR shock is wrapping around an external inhomogeneity. The structure of the surrounding matter, revealed for the first time by the analysis of the CO gas, is discussed in Sect. 3.
To construct the global spectrum of the radio continuum emission of Kes 17, we compiled flux density estimates from the literature as well as new fluxes that we measured from publicly available radio surveys. For frequencies below ∼160 MHz, the lowest at which Kes 17 has been detected to date, we used the Galactic and Extragalactic All-sky Murchison Widefield Array Survey (GLEAM, Hurley-Walker et al. 2019) 2. We also used the Southern Galactic Plane Survey (SGPS, McClure-Griffiths et al. 2005) 3 at 1420 MHz and the S-band Polarisation All Sky Survey (S-PASS, Carretti et al. 2019) 4 at 2303 MHz. Flux measurements with an error greater than 20% were discarded from the analysis. For the remaining data, if information on the primary calibrator was available, they were brought to the absolute flux scale presented by Perley & Butler (2017). This flux scale is valid for the entire range of compiled frequencies in our analysis (88-8800 MHz) and has an accuracy of ∼3%. The set of data points was fitted using the simple power-law model S ν ∝ ν α, where S ν represents the integrated flux at frequency ν and α is the radio spectral index. During the fitting process, flux measurements were rejected if their dispersion with respect to the model was greater than 2σ of the best-fit values. The final dataset is reported in Table 1. It constitutes the most complete compilation of radio flux measurements conducted for Kes 17 to date. Figure 1 displays the integrated radio continuum spectrum for this SNR, with our new flux density determinations represented by blue filled circles. A weighted least-squares fit was applied to the data points, resulting in a spectral index α = −0.488 ± 0.023. This value is flatter than the previous measurement (α ≃ −0.54) reported by Shaver & Goss (1970b), which was based on flux estimates at only 408 and 5000 MHz. The synchrotron radiation spectrum we derived for Kes 17 is consistent with electrons being accelerated via a first-order Fermi mechanism (Onić 2013). Regarding the spectral shape, the straight distribution of flux densities at low radio frequencies (below 100 MHz) indicates that if ionised gas exists, whether located co-spatially with Kes 17 or merely intersecting it along the line of sight, the free-free absorption it produces does not have an impact on the integrated continuum spectrum of the remnant. Low-frequency turnovers caused by free-free absorption by ionised gas in H ii regions (or in their associated lower-density envelopes), as well as at the interface between an ionising shock and its immediate environment, have been measured in the spectra of some SNRs (e.g. Kes 67, Kes 75, W41, Kes 73, 3C 396, and W49B, Castelletti et al. 2021). For Kes 17, determining whether or not its forward shock ionises the western region, where the SNR's interaction with dense gas has been demonstrated in the infrared (Lee et al. 2011), could be addressed with radio observations of improved sensitivity and resolution, especially at the low-frequency portion of the spectrum.

Table 1 (excerpt). Integrated flux densities of Kes 17.
Frequency (MHz)   Flux density (Jy)   Reference
...               18.0 ± 1.8 †        Whiteoak & Green (1996)
1400              10.9 ± 0.14 †       Gelfand et al. (2013)
1420              11.2 ± 0.7 †        This work (SGPS)
2303              11.1 ± 1.6 †        This work (S-PASS)
5000              6.7 ± 0.7           Shaver & Goss (1970b)
5000              6.8 ± 0.7           Milne (1969)
5000              6.9 ± 1.4           Milne & Dickel (1975)
8800              6.3 ± 1.3           Dickel et al. (1973)
Notes. (†) Measurements not corrected for the absolute flux scale of Perley & Butler (2017) due to missing information on the primary flux calibration.
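For concreteness, the fitting procedure described above can be sketched as follows. This is a minimal illustration, not the pipeline actually used by the authors: the flux values are placeholders taken from a few of the rows above rather than the full 88-8800 MHz compilation, and the 2σ rejection is implemented as one plausible reading of that criterion.

```python
import numpy as np

# Placeholder flux densities (MHz, Jy, Jy); a subset of Table 1, for illustration only.
freq = np.array([1400.0, 1420.0, 2303.0, 5000.0, 5000.0, 8800.0])
flux = np.array([10.9, 11.2, 11.1, 6.8, 6.9, 6.3])
err  = np.array([0.14, 0.7, 1.6, 0.7, 1.4, 1.3])

def fit_power_law(freq, flux, err):
    """Weighted least-squares fit of log10(S) = log10(S0) + alpha*log10(nu)."""
    x, y = np.log10(freq), np.log10(flux)
    w = flux * np.log(10.0) / err          # 1/sigma of log10(S), as numpy.polyfit expects
    (alpha, logS0), cov = np.polyfit(x, y, 1, w=w, cov=True)
    return alpha, logS0, np.sqrt(np.diag(cov))

alpha, logS0, perr = fit_power_law(freq, flux, err)

# Reject points deviating from the model by more than 2 sigma (one reading of the criterion)
model = 10.0**logS0 * freq**alpha
keep = np.abs(flux - model) < 2.0 * err
alpha, logS0, perr = fit_power_law(freq[keep], flux[keep], err[keep])
print(f"alpha = {alpha:.3f} +/- {perr[0]:.3f}")
```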
SNR's age and neutral gas properties from radio data

Estimating the dynamical age of supernova remnants involves indirect methods due to the inability to measure it directly. So far, all age estimates for Kes 17 have been derived on the basis of spectral fitting parameters to the cold and low-density X-ray emitting gas interior to the radio shell. However, there are large differences in the estimated ages, with values ranging from 2.3 kyr to as high as 64 kyr, depending on factors such as ionisation timescales, τ, and electron densities, n e (τ ∼ 1-3 × 10 12 cm −3 s, n e ∼ 0.4-2.3 cm −3, Pannuti et al. 2014, and references therein). Besides, according to the proposal that Kes 17 is a member of the MM SNRs group and considering a thermal conduction model, Gelfand et al. (2013) derived an age range from 2 to 5 kyr. Additionally, they determined an upper age limit of 40 kyr by assuming that clump evaporation into the inter-cloud medium is primarily responsible for the observed X-ray emission. In this section, we employ a standard evolution model and examine continuum and line emissions at centimetre wavelengths to estimate the age of Kes 17.

Fig. 1. Integrated radio continuum spectrum of Kes 17 built from the flux densities listed in Table 1. Blue data points denote our new measurements from public survey images, while the green ones correspond to literature flux estimates. The straight line represents the best fit with a power-law model in the form S ν ∝ ν α, which yields a spectral index value α = −0.488 ± 0.023. Darker and lighter pink-shaded regions around the straight line denote a variation in the fitted spectral parameters of 1 and 2σ, respectively.

The intermediate extent of Kes 17's shock front (as illustrated in the inset of Fig. 2a) compared to other SNRs discovered in the Galaxy, coupled with the absence of optical signatures of radiative shocks in observations from surveys like the SuperCOSMOS H-alpha Survey (SHS, Parker et al. 2005) 5 or the STScI Digitized Sky Survey (DSS), 6 lends support to the hypothesis that Kes 17 is in the Sedov expansion stage of its evolution. Based on this picture, the time elapsed since the explosion can be estimated by inverting the Sedov solution (Cox 1972),

t ≃ 17.6 (n 0 / ϵ 0 )^(1/2) (R S / 1 pc)^(5/2) yr,    (1)

where R S is the shock radius at present (in pc), n 0 is the ambient interstellar density (in cm −3 ), and ϵ 0 is the initial explosion energy (in units of 0.75 × 10 51 erg). The radius, measured in the radio continuum image of Kes 17 at 843 MHz, is ∼3.5′ or ∼8 pc according to the revisited kinematic distance of 7.9 ± 0.6 kpc to the SNR obtained by Ranasinghe & Leahy (2022) from neutral hydrogen H i absorption features. For the ambient interstellar density, we assumed it to be well represented by the neutral hydrogen gas density, and estimated it via n 0 = N H /L, the ratio of the hydrogen column density to its depth along the line of sight in the region of the SNR. To calculate N H we considered the 21 cm line emission of H i from the SGPS data. Our focus was to identify any sign of neutral gas that could have been swept up by the SNR shock or by the stellar winds of the progenitor star. If an accumulation of H i around the radio continuum boundary of the remnant were detected, it would provide a rough estimate of the pre-shock medium density, under the assumption that the accumulated atomic gas was uniformly distributed inside the volume of the H i shell before the stellar explosion. We did not find, however, any neighbouring structure of neutral atomic gas that could be feasibly associated with Kes 17.
Moreover, an inspection of the H i data cube shows the remnant in absorption over the complete velocity range from ∼0 km s −1 to the tangent point velocity (v TP ≃ −42 km s −1, according to the Galactic rotation curve of Reid et al. 2014). Therefore, we simply hypothesised that n H could be well represented by the mean density value measured in circular test areas of radius ≃ 8′ distributed around the remnant (we tested different values and determined that our result remains consistent, within uncertainties, regardless of the size chosen). Under the common assumption that the H i emission is optically thin (in which case N H ≃ 1.823 × 10 18 ∫ T B dv cm −2, with T B in K and v in km s −1 ), and after subtracting an appropriate mean background level from each H i velocity channel, the mean column density around Kes 17, calculated by integrating the H i emission between −31 and −14 km s −1, is N H ≈ 8 × 10 20 cm −2. This velocity interval is in accord with the interstellar molecular matter traced in CO, associated in space and velocity with the SNR (further analysis of this topic is provided in Sect. 3). Under the assumptions made above, our analysis produces a number density in the H i ambient environment of n 0 ≈ 7 cm −3, larger than the typical value ∼1 cm −3 averaged over the cold, warm, and hot gas phases of the ISM (McKee & Ostriker 1977). Therefore, using Eq. 1 and adopting the value ∼ 4 × 10 50 erg for the energy released in the SN event, as derived by Leahy et al. (2020) incorporating both uniform-ISM and stellar-wind SNR evolutionary models, we estimated that Kes 17 is approximately 11 kyr old. Notably, when using the canonical SN kinetic energy of 10 51 erg, the age decreases to roughly 7 kyr, which is not critically different from our estimate within the uncertainties. We are aware that our approach to Kes 17's age provides a first-order approximation, since i) it assumes that the SNR is in the Sedov stage of its evolution, ii) the mean number density of atomic hydrogen, as measured from the H i data, represents an upper limit because some of the H i may be unrelated gas located behind the SNR, and iii) our result ignores the possibility of the remnant evolving in an inhomogeneous ambient medium.

The molecular environment of Kes 17

The properties of the molecular gas in the region of Kes 17, as traced by the emission from carbon monoxide (CO), did not receive attention in previous works. Dense interstellar material interacting with the western shock front of the SNR was only revealed in infrared wavebands (Lee et al. 2011). Here, we present the main results of the first study towards Kes 17 carried out by using both 12 CO and 13 CO emission data in their rotational transition J = 1-0. The data cubes for both species were extracted from the Three-mm Ultimate Mopra Milky Way Survey (ThrUMMS, Barnes et al. 2015). 7 The spatial and spectral resolutions are 72′′ and ∼0.35 km s −1, respectively, with sensitivities ∼ 1 K each.

Fig. 2. (a) Integrated 12 CO and 13 CO emission towards Kes 17, together with the grid of boxes (1′.5 in size) used for analysing the kinematics of the CO gas. The H ii region G304.465-00.023 in the field is also labelled (Urquhart et al. 2022). (b) Collection of 12 CO (in red) and 13 CO (in green) J = 1-0 spectra extracted from the boxes, numbered from 1 to 42, in panel (a). The original datasets were convolved to a resolution of 80′′ to reduce the graininess. The yellow contours superimposed on the spectra correspond to the radio continuum emission from Kes 17.
After carefully inspecting the CO data cubes throughout their velocity ranges (−65, +55) km s −1, we only found molecular structures in projected correspondence with the radio continuum emission from the SNR's shell in two intervals, with velocity ranges from ∼ −45 to −37 km s −1 and ∼ −31 to −14 km s −1, respectively. The CO structure in the first range peaks at −41 km s −1 and in the plane of the sky lies towards the eastern border of Kes 17, with an angular size of approximately 8′ × 4′ (in the south-north and east-west directions, respectively). We assigned to this molecular material a distance of ≃ 5 kpc, as its velocity of ≃ −41 km s −1 is largely consistent with that of the tangent point in the direction of Kes 17. Consequently, we discard this cloud as being associated with Kes 17 (d SNR ≃ 8 kpc, Ranasinghe & Leahy 2022), and it will not be analysed in the following. The emission from the second velocity component originates from a cloud at the eastern part of the SNR shell. The distributions of the integrated 12 CO and 13 CO emission are presented in the colour-coded Fig. 2a, overlaid with contours of the radio continuum emission from the SNR shock wave. Overall, the 12 CO emission appears spatially more extended than the 13 CO emission within the region of interest. This difference in distribution can be explained by the fact that 13 CO emission is optically thinner than that from 12 CO, so it traces the more internal and denser regions of the cloud (Wilson et al. 2013). Notably, in the brightest part of the uncovered molecular structure, corresponding to the yellowish regions where 12 CO and 13 CO emissions overlap, Kes 17 exhibits (within the 80′′ resolution of the radio image smoothed to match the resolution of the CO data) a significant deviation from spherical symmetry. In order to gain further insight into the characteristics of the molecular gas, we extracted 12 CO and 13 CO spectra across the entire region where the radio continuum emission from, and the CO line emission towards, Kes 17 show a line-of-sight superposition. To cover this region comprehensively, we employed a grid of 1′.5 boxes, as depicted in Fig. 2a. To enhance the signal-to-noise ratio of the spectra, they were smoothed by averaging intensity values within the 5 nearest-neighbour velocity channels. The resulting spectra, presented in Fig. 2b, show a broad emission region in 12 CO with a velocity span ∆v ≃ 17 km s −1 and an intensity varying from approximately 2 to 4 K. Multiple 12 CO kinematic components contribute to the observed emission, with peak velocities at approximately −30, −25, and −20 km s −1. Examples of profiles showing this behaviour correspond to boxes 16-18, 22-24, and 28-30, all of them within the outermost radio contour of the SNR shell. At the easternmost border of Kes 17, these three velocity components appear to be less distinguishable and exhibit blending. We stress that the 13 CO profiles (Fig. 2b) do not reproduce the triple-peaked structure observed in the 12 CO gas, but rather a broad peak at about −20 km s −1, in complete agreement with the velocity of the H i absorption features used to constrain the distance to Kes 17 (Ranasinghe & Leahy 2022). The intensity of the 13 CO emission peaks is ∼ 1-2 K. Figure 3 displays position-velocity (p-v) diagrams of the molecular gas emission. They were constructed by integrating the 12 CO emission along the R.A. direction in seven slices, each covering a range of 100′′.
These slices span the entire cloud of interest and constitute an appropriate tool to effectively capture the spatial heterogeneity of the individual velocity components observed in the spectral distribution shown in Fig. 2b. By inspecting the p-v diagrams, it is evident that the molecular emission is mostly concentrated at −20 km s −1, adjacent to the position where the radio shell of the remnant is highly distorted and appears to branch off towards the interior of the SNR (panels e to g in Fig. 3). Bright knots are noticeable in the cloud's interior. By combining the CO emission and H i absorption profiles (not shown here) extracted over the brightest part of the molecular concentration emitting at −20 km s −1, we determined that it is located at its far kinematic distance of ≃ 8 kpc. The remaining velocity components of the cloud are at around 7 kpc (−30 km s −1 ) and 7.5 kpc (−25 km s −1 ). Taking the associated uncertainties (≃1.0 kpc) in these determinations into account, it can be concluded that these peaks arise from different components of the same structure. The average distance to this structure is estimated to be approximately 7.5 kpc, completely compatible with the distance determined for Kes 17 (≃ 7.9 kpc, Ranasinghe & Leahy 2022). The error in the distance determination for the molecular gas stems from various factors. One of these contributions is the uncertainty in the peak velocity value for each gas component. Additionally, accurately measuring the properties of individual molecular components can be challenging, especially when they are not completely resolved. Lastly, the use of a Galactic rotation curve, such as the one proposed by Reid et al. (2014), involves assumptions and uncertainties. We note that although our analysis of the molecular material through the 12 CO and 13 CO lines supports the coexistence of the discovered eastern cloud and the remnant, we have not observed distinct broadenings in the CO emission attributable to turbulence caused by the impact of Kes 17's shock front. Therefore, we propose that the spectral behaviour of CO might reflect a soft contact between the surrounding cloud and the remnant's shock wave. Indeed, the process of impacting the cloud might be at an initial stage. We have also estimated the total mass M and mean density n(H 2 ) of the molecular gas in the newly detected cloud at v LSR ≃ −31 to −14 km s −1 by using both the 12 CO and 13 CO (J = 1-0) emissions. The procedure involves calculating the molecular hydrogen column density N(H 2 ), and deriving both M and n(H 2 ) from it. N(H 2 ) is obtained from the integrated CO emission by using appropriate conversion factors relating the integrated emission of 12 CO to the H 2 column density (X 12 = 2.0 × 10 20 cm −2 (K km s −1 ) −1, Bolatto et al. 2013), and also the column density N( 13 CO) to N(H 2 ) (X 13 = 7.7 × 10 5, Kohno et al. 2021). We refer the reader to the work of Wilson et al. (2013), where the expressions and assumptions (related to local thermodynamic equilibrium) employed for obtaining column densities are explained in detail. The mass is calculated through the relation M = µ m H Ω D 2 N(H 2 ), where µ = 2.8 is the mean molecular mass of the cloud, 8 m H is the hydrogen atom mass, and Ω is the solid angle subtended by the cloud located at the distance D.
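As a rough numerical illustration of the mass estimate described above (and of the relation n(H2) = N(H2)/l introduced next), the sketch below uses assumed round numbers for the integrated 12CO intensity and the cloud geometry. It is not the authors' calculation, and the geometric choices (solid angle, line-of-sight depth) change the result at the factor-of-two level.

```python
import numpy as np

# Assumed illustrative inputs (placeholders, not the measured values)
W_CO     = 50.0      # integrated 12CO(1-0) intensity [K km/s]
X12      = 2.0e20    # CO-to-H2 conversion factor [cm^-2 (K km/s)^-1] (Bolatto et al. 2013)
d_pc     = 7.5e3     # adopted distance to the cloud [pc]
r_arcmin = 5.0       # radius of the circular integration region [arcmin]

pc_cm, m_H, M_sun = 3.086e18, 1.673e-24, 1.989e33
mu = 2.8             # mean molecular weight per H2 molecule (He included)

N_H2  = X12 * W_CO                        # H2 column density [cm^-2]
theta = np.radians(r_arcmin / 60.0)       # angular radius [rad]
Omega = np.pi * theta**2                  # solid angle of the integration region [sr]
D_cm  = d_pc * pc_cm
l_cm  = theta * D_cm                      # assumed line-of-sight depth ~ cloud radius

M    = mu * m_H * Omega * D_cm**2 * N_H2 / M_sun   # total mass [Msun]
n_H2 = N_H2 / l_cm                                  # mean density [cm^-3]
print(f"N(H2) ~ {N_H2:.1e} cm^-2, M ~ {M:.1e} Msun, n(H2) ~ {n_H2:.0f} cm^-3")
```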
In turn, the mean molecular density is n(H 2 ) = N(H 2 )/l, where l denotes the extent of the cloud along the line of sight, assumed to be equal to the average of the mean sizes of the structure in R.A. and Dec. For the integration of the CO emission we used a circular region with a radius of 5′ (or ∼ 11 pc at a distance of ∼ 7.5 kpc to the cloud) centred at 13 h 06 m 30 s, −62° 43′ 20′′. From this integration, we derived a molecular column density N(H 2 ) ≈ 1 × 10 22 cm −2, consistent for both the 12 CO and 13 CO gases. Therefore, the resulting mean mass and molecular density for the eastern cloud were estimated to be M ≈ 4.2 × 10 4 M ⊙ and n(H 2 ) ≈ 300 cm −3, respectively. The uncertainties in these measurements are of the order of 40%, and comprise errors in the distance and in the definition of the structure in the plane of the sky, as well as in velocity space. We also notice that the differences between the values obtained from the 12 CO and 13 CO emissions were found to be within 20%. The fact that the values estimated from both 12 CO and 13 CO are in agreement indicates that both isotopologues provide consistent measurements and can be used for deriving cloud parameters. Now we focus on the molecular gas distribution towards the western side of the SNR shell. Of particular interest is the absence of CO emission above 3σ (where σ ∼ 1 K) spatially correlated with the radio continuum bright region, which is roughly 2′ × 4′ in size (centred at 13 h 05 m 30 s, −62° 41′ 10′′). In this region, molecular hydrogen and ionic lines at infrared wavelengths have revealed the expansion of the SN shock on a molecular cloud, as reported by Lee et al. (2019). Based on these findings, we tentatively propose the existence of a "CO-dark" gas component to the west of Kes 17. In this scenario, the gas-phase carbon could be in atomic form, while the hydrogen is in molecular form. A similar phenomenon has been observed in CTB 37A (Maxted et al. 2013) and RX J1713.7-3946 (Sano & Fukui 2021). More sensitive CO molecular-line measurements are needed to shed more light on this scenario for Kes 17. It is, however, worth noting that the detection of γ-ray radiation in the direction of the remnant can indirectly trace the dark molecular gas component if it is generated by cosmic-ray collisions with the gas (Wolfire et al. 2010). In the case of Kes 17, a γ-ray excess at GeV energies has indeed been detected in projected coincidence with the remnant. The analysis of this emission is addressed in Sect. 4 of this work. In passing, we note that a peculiar wall-like structure of 13 CO is observed at a distance of approximately 1′.5 from the outermost radio contour towards the west. However, the straight vertical border of this structure and the absence of a counterpart in the 12 CO data covering the same region strongly suggest that this is not a real feature.

The field of Kes 17 at γ-ray energies

The first reports of emission in the γ-ray domain spatially projected onto Kes 17 were presented by Wu et al. (2011) and Gelfand et al. (2013), based on 30-39 months of data from Fermi-LAT. The latest Fermi-LAT catalogue of GeV sources (4FGL-DR3, 9 Abdollahi et al. 2020, 2022) identifies the observed γ-ray excess directed towards Kes 17 as 4FGL J1305.5−6241. To date, there have been no reports of TeV radiation detected in the field of Kes 17.
In this section we provide an update on the GeV emission in the direction of this SNR and investigate the nature of the high-energy photons by modelling the spectral energy distribution (SED), combining the updated γ-ray data with the newly obtained radio continuum spectrum of Kes 17 presented in Sect. 2.1.

The treatment of GeV data from Fermi-LAT

Our analysis comprises the largest statistics of events for Kes 17 to date, consisting of approximately 14.5 yr of continuous data acquisition with the Fermi-LAT, spanning from the beginning of the mission on August 4th, 2008, to February 24th, 2023. 10 This represents a significant improvement of ∼450% in observing time compared to the previous study conducted by Gelfand et al. (2013). The processing of the LAT data was conducted using the fermipy module version 1.1.6 (Wood et al. 2021), which uses the Science Tools package version 2.2.0. 11 Events were selected using the Pass 8, 3rd release (P8R3) of photon reconstructions and the latest instrument response functions (P8R3_SOURCE_V6). The region of interest (ROI) used to extract the events was a circle 15° in size centred at Kes 17 (13 h 05 m 53 s, −62° 42′ 10′′). To extract valid events, we employed the tasks gtselect and gtmktime, applying standard filters for good-time intervals (GTIs) 12 and selecting "source" class events (evtype = 3). An additional cut on the zenith angle at 90° was implemented to minimise potential contamination from cosmic-ray (CR) interactions in the upper atmosphere. We considered events with reconstructed energies above 0.3 GeV to mitigate the adverse effects of systematic uncertainties in the effective area and the degradation of the point spread function (PSF) at the lowest energies (Ackermann et al. 2012). Furthermore, we excluded photons above 300 GeV due to the limited number of events at the highest energies. The set of filtered events was used to fit a sky model through a maximum likelihood optimisation procedure (Mattox et al. 1996). For the optimisation, we implemented a binned likelihood analysis over the ROI. The spatial bins were set at 0°.01 for both the morphological and spectral analyses, and the energy range from 0.3 to 300 GeV was divided into 10 logarithmic bins per decade. In the analysis we included all sources from the LAT 10-year catalogue 9 (4FGL-DR3) located within the ROI, except 4FGL J1305.5-6241, which lies in projected coincidence with Kes 17. The models used for the optimisation were gll_iem_v06 for the diffuse Galactic background and iso_P8R2_SOURCE_V6_v06 for the isotropic background. The spectral parameters and normalisations of these models were allowed to vary freely during the optimisation process. Convergence was achieved by means of the Minuit optimiser, fixing the source parameters beyond a radius of 4° from the ROI centre.

9 https://fermi.gsfc.nasa.gov/ssc/data/access/lat/10yr_catalog/
10 Corresponding to the time range from 239557417 to 681653590 seconds of the mission elapsed time (MET).
11 fermipy routines were implemented through a JupyterLab Notebook, https://jupyter.org/.
12 Information about the definition of events and GTIs can be found on the Fermi-LAT Science Support Center (FSSC) web page, https://fermi.gsfc.nasa.gov/ssc/.

Morphological and spectral characteristics of the GeV emission

To investigate the spatial distribution of the γ-ray emission, we used the fermipy tool tsmap to construct a test-statistics (TS) map.
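To make the workflow more tangible, a schematic fermipy session of the kind described above could look like the sketch below. It is an illustration under assumptions, not the authors' actual scripts: 'config.yaml' is a hypothetical configuration file (it would hold the selections quoted in the text, e.g. the 0.3-300 GeV range, the 90° zenith cut, 0°.01 spatial bins, and the diffuse models named above), and the source name is simply the 4FGL identifier.

```python
from fermipy.gtanalysis import GTAnalysis

# Hypothetical configuration file with the event selection, binning, diffuse models,
# and the 4FGL-DR3 catalogue sources, following the choices described in the text.
gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()                                  # prepare livetime cube, exposure, source maps

gta.optimize()                               # coarse optimisation of all ROI components
gta.free_sources(distance=4.0)               # free parameters within 4 deg of the ROI centre
gta.free_source('galdiff')                   # Galactic diffuse normalisation
gta.free_source('isodiff')                   # isotropic background normalisation
fit_result = gta.fit(optimizer='MINUIT')     # binned maximum-likelihood fit

src = '4FGL J1305.5-6241'                    # source modelling the excess towards Kes 17
tsmap = gta.tsmap('kes17', model={'Index': 2.0, 'SpatialModel': 'PointSource'})
ext = gta.extension(src)                     # test for extension with a variable-radius disk
loc = gta.localize(src, update=True)         # best-fit position and 95% confidence ellipse
sed = gta.sed(src)                           # bin-by-bin SED over the selected energy range

print(gta.roi[src])
```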
Each pixel in this map represents the likelihood of having a point source at the corresponding coordinates, compared to the null hypothesis of the sky model without the point source. The TS parameter is calculated as TS = 2 ln(L/L 0 ), where L and L 0 are the likelihoods including and excluding the source in the sky model, respectively. The TS map allows us to assess the significance of the source detection, σ ≈ √TS (Mattox et al. 1996). In Fig. 4 we present the TS map obtained, overlaid with contours depicting the radio emission from Kes 17. The global TS value after the likelihood optimisation for the GeV source projected in coincidence with Kes 17 was TS ≃ 730, corresponding to a detection significance σ ≃ 27. This represents an improvement of about 2.5 times over the value reported by Gelfand et al. (2013).

Fig. 4. TS map of the field of Kes 17, overlaid with contours of the radio continuum emission. The H ii region G304.465-00.023 (Urquhart et al. 2022) and the two known pulsars, PSR J1306-6242 (Kramer et al. 2003) and PSR J1305-6256 (Manchester et al. 2001), lying in the field are also labelled. The plus symbol and dark-green ellipse within the Kes 17 radio contours mark, respectively, the best-fit position and 95% confidence region for the GeV emission obtained with a point-source spatial template. The dashed brown circle traces the 95%-confidence fitted size of the γ-ray source. The inset shows a ∼15′ × 15′ close-up view of the Kes 17 region, centred at the position of the γ-ray excess.

We tested the possibility that the GeV emission is extended by using the extension tool from fermipy, which performs a likelihood analysis modelling the source as a circular region with variable radius. After convergence, the fitted value of the radius at 95% confidence is 0°.09, and the significance of the extension is TS ≃ 16.3 (σ ∼ 4). The low significance of the fitted size indicates that the emission is not significantly extended. We therefore consider in the following that the γ-ray excess detected by the Fermi-LAT corresponds to a point-like source and, consequently, we modelled it as a point source. Under this assumption, the localise tool yields the following location for this source: R.A. = 13 h 05 m 40.71 s ± 01.66 s, Dec. = −62° 42′ 01.6′′ ± 21.6′′, within a 95% confidence limit. From our analysis, the localisation of the source, indicated by a green ellipse in Fig. 4, has been improved by approximately one order of magnitude compared to the previous value reported in the 4FGL catalogue. The molecular gas, traced by the bright IR filaments (Lee et al. 2011, and references therein), could extend to cover the γ-ray region. As we proposed in Sect. 3, this region is suspected of containing "CO-dark" molecular gas in interaction with Kes 17's shock front. To investigate the spectral characteristics of the GeV γ-ray excess detected towards Kes 17, we generated an SED by performing a binned likelihood analysis in the 0.3-300 GeV range, implemented through the fermipy tool sed. To ensure a balance between energy resolution and statistical significance, the data were grouped into 5 equally spaced bins per decade on a logarithmic energy scale. Additionally, we incorporated energy dispersion corrections to mitigate systematic effects on the fitted spectral parameters. In our treatment, systematic contributions arise from the effective area, the PSF of the Fermi-LAT, the energy scale, and variations in the spectral parameters due to the normalisation of the diffuse background. 13
The contribution related to the effective area is variable and can reach approximately 10% at the extremes of the energy range considered in our analysis. For energies below 100 GeV, the systematic error related to the PSF containment radius is around 5%, and it increases linearly to about 20% at higher energies. Energy-scale uncertainties are within 5% throughout the energy range. On the other hand, systematics associated with the Galactic diffuse background were estimated following the procedure of Abdo et al. (2009), which consists in artificially varying and fixing the normalisation by ±6% with respect to the original fit and examining the resulting variations in the fitted spectral parameters. In the analysis of the broadband SED (Sect. 4.3), both statistical and systematic effects were considered. The contributions from both sources of uncertainty were added in quadrature to obtain a final error band for each energy bin. The data were fitted using a power-law model dN/dE = ϕ 0 (E/E 0 ) −Γ, where ϕ 0 is the differential flux normalisation (in units of cm −2 s −1 MeV −1 ), E 0 is the "pivot" energy, and Γ represents the photon index. Through the bin-by-bin likelihood procedure, we obtained a spectral index value Γ = 2.39 ± 0.04 +0.063 −0.114 (± stat ± syst), which is in good agreement with the 4FGL value, but softer than and only marginally consistent with that from Gelfand et al. (2013). Our result is also in agreement with the correlation observed by Acero et al. (2016) between the radio spectral index α and the GeV photon index Γ for SNRs interacting with molecular clouds. The data points are shown in Fig. 5, where blue and red error bars denote the statistical and systematic uncertainties, respectively. The plot also shows the power-law fit to the data points, as well as the 1-σ confidence interval for the fit.

13 Further details about systematic errors can be found on the FSSC web page.

The integrated flux is determined to be F(0.3−300 GeV) = (2.98 ± 0.14) × 10 −11 erg cm −2 s −1, 14 corresponding to a luminosity L γ (0.3−300 GeV) = (2.22 ± 0.45) × 10 35 erg s −1 at the distance of 7.9 ± 0.6 kpc. The phenomenology of how the γ-ray luminosity of SNRs depends on factors such as distance uncertainties, the density of the molecular gas reservoir, or time-evolution effects is complex, and a detailed analysis of this topic is beyond the scope of this work. However, despite this limitation, we can provide a brief comparison of our L γ estimate for Kes 17 with those derived in a similar energy range (0.1-100 GeV) for emitters identified as advanced (i.e., from middle-aged to old, ≳ 10 kyr) and young (≲ 3 kyr) SNRs associated with molecular clouds (for a more detailed discussion, refer to Acero et al. 2022). For instance, when considering the older sources W44 and IC 443, with high local densities (∼ 10 2 -10 4 cm −3, Yoshiike et al. 2013; Dell'Ova et al. 2020), they would be 5 or even 60 times more luminous than Kes 17 if placed at the distance of ∼8 kpc. 15 Additionally, the γ-ray luminosity of the mature remnant Cygnus Loop, evolving in a low-density environment (∼1-10 cm −3, Fesen et al. 2018), would be comparable to our estimate for Kes 17 if it were located at the same distance. 16 We also point out that all of these middle- and advanced-age remnants are more luminous, by one to two orders of magnitude, than the young SNRs Tycho and Kepler, which are expanding in low-density media (∼10 cm −3, Acero et al. 2022; Zhang et al. 2013). 17
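The flux-to-luminosity conversion quoted above is simply L = 4π d² F, assuming isotropic emission; a minimal check of the numbers with astropy units:

```python
import numpy as np
import astropy.units as u

F = 2.98e-11 * u.erg / (u.cm**2 * u.s)   # 0.3-300 GeV energy flux from this work
d = 7.9 * u.kpc                          # adopted distance (Ranasinghe & Leahy 2022)

L = (4.0 * np.pi * d**2 * F).to(u.erg / u.s)   # isotropic-equivalent luminosity
print(L)                                        # ~2.2e35 erg/s, matching the quoted value
```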
14 Equivalent to an integrated photon flux of (1.87 ± 0.09) × 10 −8 ph cm −2 s −1.
15 The reported γ-ray luminosities are 10 34 -10 35 erg s −1 in the 0.1-100 GeV range for W44 and IC 443 (Acero et al. 2022; Yu et al. 2019).
16 The estimated luminosity in the 0.1-100 GeV range for the Cygnus Loop is ≃ 10 33 erg s −1 at 0.7 kpc (Fesen et al. 2018).
17 L γ for Tycho and Kepler spans the range ≃ 10 33 -10 34 erg s −1 at 4 and 5 kpc, respectively (see references therein).

4.3. Analysis of the spectral energy distribution of Kes 17 from radio to γ rays

In this section we study the spectral energy distribution of Kes 17, incorporating the new nonthermal continuum radio spectrum extracted from the SNR shell and the high-energy spectrum obtained from the new Fermi-LAT observations. For the nonthermal X-ray emission, we used an upper limit derived from Suzaku data by Gelfand et al. (2013). At very high energies, we used an upper limit above 1 TeV from the H.E.S.S. Galactic Plane Survey (Fernández Gangoso 2014; H.E.S.S. Collaboration et al. 2018). To model the multiwavelength emission, we considered an electron population that produces synchrotron radiation, inverse Compton scattering (IC), and nonthermal bremsstrahlung. Additionally, we incorporated a proton population that interacts with the surrounding gas, resulting in the subsequent production and decay of neutral pions (π 0 ). The parameters characterising these models are presented in Table 2 and were derived using the Naima Python package (Zabalza 2015); an illustrative sketch of such a setup is given below. To investigate the plausibility of a scenario where the leptonic component is dominant, we considered a one-zone model with an electron population distributed in energy according to a power law. The updated radio continuum spectrum, extended to cover frequencies from 88 to 8800 MHz, allowed us to further constrain the spectral index of the synchrotron emission, at α = −0.488 ± 0.023 (Sect. 2.1). We used this measurement to fix the initial power-law index Γ e = 1 − 2α of the electron energy spectrum to a value ≈ 1.9. Then, the X-ray upper limit derived by Gelfand et al. (2013) imposes a tight constraint on the magnetic field (B), the energy cut-off (E c ), and the energy density (W e ) of the electron population, which are degenerate. Following Gelfand et al. (2013), we assumed a magnetic field B = 35 µG, which yields a maximum value of the cut-off energy estimated to be E c ≈ 2 TeV and a total energy density W e = 4.3 × 10 48 (d SNR /7.9 kpc) 2 erg, which appears reasonable for a middle-aged system (see the discussion in Gelfand et al. 2013 about the cut-off in the spectrum due to synchrotron losses of middle-aged to old systems). We first consider the electron population obtained from the fit to the radio synchrotron spectrum and compute the associated inverse Compton emission, in order to check whether it can reproduce the observed level of γ-ray emission. We consider three interstellar radiation fields: the cosmic microwave background (CMB) (T CMB = 2.72 K, u CMB = 0.26 eV cm −3 ), the far-IR (FIR) radiation (T FIR = 27 K, u FIR = 0.415 eV cm −3 ), and the near-IR starlight radiation (T SL = 2800 K, u SL = 0.8 eV cm −3 ), computed from the GALPROP 18 model at the position of the remnant (Galactocentric distance of ∼6 kpc) (Strong et al. 2004; Porter et al. 2006). However, as shown in Fig. 6, the IC radiation produced by this electron population fails to reproduce the new Fermi-LAT spectrum.
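For orientation, a one-zone Naima setup of the kind used here might be sketched as follows. This is a hedged illustration: the particle-spectrum amplitudes and cut-offs are placeholders rather than the Table 2 values, while the seed radiation fields, the magnetic field, the target density, and the distance follow the numbers quoted in the text.

```python
import numpy as np
import astropy.units as u
from naima.models import (ExponentialCutoffPowerLaw, Synchrotron,
                          InverseCompton, PionDecay)

# Parent electron spectrum: index 1.9 from the radio fit; amplitude and cut-off are placeholders
electrons = ExponentialCutoffPowerLaw(amplitude=1e36 / u.eV, e_0=1 * u.TeV,
                                      alpha=1.9, e_cutoff=2 * u.TeV)
syn = Synchrotron(electrons, B=35 * u.uG)
ic = InverseCompton(
    electrons,
    seed_photon_fields=['CMB',
                        ['FIR', 27 * u.K, 0.415 * u.eV / u.cm**3],
                        ['NIR', 2800 * u.K, 0.8 * u.eV / u.cm**3]])

# Proton spectrum for the pi0-decay component; normalisation and cut-off are placeholders
protons = ExponentialCutoffPowerLaw(amplitude=1e37 / u.eV, e_0=1 * u.TeV,
                                    alpha=2.4, e_cutoff=5 * u.TeV)
pp = PionDecay(protons, nh=400 / u.cm**3)    # target density from the IR estimate

energies = np.logspace(-6, 14, 200) * u.eV   # from the radio band up to ~100 TeV
distance = 7.9 * u.kpc
sed_syn = syn.sed(energies, distance=distance)
sed_ic  = ic.sed(energies, distance=distance)
sed_pp  = pp.sed(energies, distance=distance)

print(pp.Wp.to(u.erg))                       # total proton energy for this placeholder spectrum
```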
In particular, the shape of the GeV emission cannot be reproduced by a simple electron population with an index of 1.9. Furthermore, the upper limit from H.E.S.S. places strong constraints on the level of IC emission at very high energies: it constrains the maximum value of the energy break in the electron spectrum to be 1.5 TeV and sets a minimum value for the magnetic field strength of 35 µG (see Table 2). Therefore, we conclude that it is not possible to adequately model the broad-band emission using a purely leptonic scenario, at least within the framework of a one-zone model.

We now investigate a scenario where the major part of the γ-ray emission is attributed to π 0 decay. At this stage, it is important to recall that the Fermi source is spatially coincident with a part of the shell which has bright radio emission and exhibits IR filaments. While no molecular structures were detected in the region of bright γ rays (Sect. 3), the study of molecular hydrogen and ionic lines made it possible to highlight the signature of a shock due to an interaction of the SNR shell with a cloud (Lee et al. 2011). Consequently, in our analysis we consider only the molecular gas in the western region of Kes 17 as the dominant contributor to the GeV photon flux through hadronic interactions. In the following we adopt a cloud density of 400 cm −3, in accord with the estimate made by Lee et al. (2011) from the IR emission. This value is an order of magnitude higher than the ∼ 10 cm −3 used in Gelfand et al. (2013).

18 https://galprop.stanford.edu/index.php

Fig. 6. Broadband SED from radio to γ rays for Kes 17. Fluxes in the radio band correspond to those in Table 1 (in units of erg cm −2 s −1 ), while those at γ-ray energies correspond to the new Fermi-LAT data from this work (Sect. 4.2). Upper limits at X-ray and TeV energies were derived from the non-detection of X-ray synchrotron emission and from the H.E.S.S. observatory, respectively (see text for details). (a) SED modelling with a unique electron population producing synchrotron and IC radiation over the broadband spectrum. (b) SED modelling with an ExpCutoff proton-dominated model producing synchrotron, IC, and bremsstrahlung radiation from a parent electron population, and γ-ray emission via the decay of π 0 created from a proton population interacting with the surrounding gas (see Table 2 for a complete list of parameters).

To analyse the hadronic origin of the γ-ray radiation, we have assumed two models: a power-law proton spectrum with an index ∼ 2.4 and a cut-off above a few TeV (ExpCutoff), and a broken power law (BPW) with indices Γ p,1 = 2.40 and Γ p,2 = 3.5 below and above E b = 2 TeV, up to a maximum energy of 10 TeV, which is strongly constrained by the steep spectrum deduced from the Fermi data above a few tens of GeV and the non-detection of Kes 17 by H.E.S.S. We report all the proton-dominated model parameters in Table 2. Both models fit the data equally well. To simplify the presentation, we have plotted only the ExpCutoff model in Fig. 6, which clearly shows that the γ-ray spectrum strongly supports the hadronic origin of the radiation.

Table 2. Parameters resulting from the two models used to reproduce the broadband SED of Kes 17 (see Fig. 6). One model considers a unique electron population, and the other is a proton-dominated model that includes both electrons and protons. B is the magnetic field at the shock, n 0 is the average density of the surrounding ISM, while Γ e, Γ p, E b,e, and E b,p are, respectively, the indices and the energy cut-offs or breaks of the parent electron and proton spectra (see text for details). E min,e and E min,p are the minimum energies of the electron and proton spectra, whereas W e and W p are the total energy densities of accelerated electrons and protons, respectively.

Since the hadronic γ-ray emission is proportional to the product of the kinetic energy in protons and the density of the medium, these parameters are closely correlated. Assuming that the total mass of the molecular cloud acts as the target material, we derived a total energy of cosmic-ray protons W p = 2.97 × 10 49 (n p /400 cm −3 ) −1 (d SNR /7.9 kpc) 2 erg. As can be appreciated in Fig. 6, the first two data points at the lowest γ-ray energies seem to deviate from a pure power-law shape and appear to be compatible with the so-called "pion bump" feature observed below a few hundred MeV. As discussed in Tang (2018), the combination of this rising feature in the spectrum with a steep spectrum beyond a few tens of GeV is widely recognised as a characteristic signature of π 0 decay, illuminating hadronic emission in SNRs. Such a π 0 signature is particularly observed in the growing class of advanced-age GeV-emitting SNRs interacting with MCs, such as W44 (Giuliani et al. 2011), IC 443 (Ackermann et al. 2013), and W51C (Jogler & Funk 2016). On the other hand, from the modelling of the data depicted in Fig. 6, it is evident that the contribution of bremsstrahlung radiation from the previously considered electron population, with a similar density, is clearly a minor component in the GeV energy range, and its spectral behaviour does not accurately reproduce the spectrum shape. In the final proton-dominated model we include this marginal contribution, as well as that from the IC emission of an electron population with a total energy density of W e = 1.5 × 10 48 (d SNR /7.9 kpc) 2 erg. The W e /W p ratio is 0.05 (see Table 2), larger than that in the proton-dominated scenario (0.01) discussed by Gelfand et al. (2013), but significantly smaller than in their IC-dominated scenario (0.1). This can be attributed to the reduction of W p as a consequence of the higher density of target matter considered here. On the basis of the new measurements of the GeV and radio spectra, we conclude that, although a single electron population is considered to reproduce the overall synchrotron emission of the remnant, the π 0 decay process may be primarily responsible for the point-like γ-ray emission detected towards Kes 17 in the GeV energy range.

Summary and Conclusions

Based on our comprehensive update of the radio and γ-ray radiation from Kes 17, along with our analysis of the molecular environment brightening in the 12 CO and 13 CO (J = 1-0) lines, we have arrived at the following picture:
1- Kes 17 was created in a stellar explosion that occurred approximately 11 kyr ago in an ambient medium with a density of roughly 7 cm −3.
2- The observed spectral shape of the shock front in Kes 17, emitting from 88 to 8800 MHz, is adequately fitted with a simple power-law model of index α = −0.488 ± 0.023. The available radio data suggest that there is no ionised gas located in, around, or anywhere along the sightline to Kes 17 that significantly impacts the integrated spectrum. If present, such ionised gas may produce a spectral curvature below 100 MHz due to free-free thermal absorption.
Low-frequency radio data with higher sensitivity and resolution are key to spatially resolving spectral curvature due to intrinsic properties of the shock in Kes 17 and to its interaction with the immediate surroundings of the SNR observed in CO and infrared lines.
3- The eastern part of Kes 17 is wrapping around a CO cloud. The main evidence for such an interaction includes the distortion of the SNR shock and the distance to the CO cloud, which is found to be fully compatible with the distance to the remnant. The average mass and density of this cloud are determined to be 4.2 × 10 4 M ⊙ and 300 cm −3, respectively. Notably, no appreciable CO emission is detected towards the western region of the radio shell, where molecular hydrogen has been shown to be shocked by Kes 17's shock front. This suggests the presence of CO-dark molecular gas.
4- No features of atomic hydrogen physically connected to Kes 17 are detected at the sensitivity and resolution of the SGPS data used in this work.
5- In its evolution, Kes 17 produces γ-ray photons at GeV energies, which have been observed by the Fermi-LAT. The flux and luminosity at 7.9 kpc in the 0.3-300 GeV energy band are estimated to be (2.98 ± 0.14) × 10 −11 erg cm −2 s −1 and (2.22 ± 0.45) × 10 35 erg s −1, respectively. The spectrum of this high-energy emission has an index Γ = 2.39 ± 0.04 +0.063 −0.114.
6- Based on observational evidence and modelling of the broadband SED ranging from radio to γ rays, a purely leptonic (IC) scenario is not favoured as an explanation for the emission from Kes 17 observed at GeV energies. Instead, the evidence suggests that the primary contribution to the γ-ray flux originates from the collision between the western part of the SNR shock front and a dense IR-emitting region. Consequently, our analysis adds Kes 17 to the list of SNRs whose emission at GeV energies is hadronic-dominated. The γ-ray luminosities measured in this class of remnants exhibit differences that can be interpreted in terms of the amount of molecular gas, which serves as target material for cosmic-ray interactions, as well as of time-evolution effects. Future observations conducted using state-of-the-art instruments operating at the highest energies of the electromagnetic spectrum, coupled with improved resolution and sensitivity, will contribute to refining our understanding of the spectral and morphological behaviour of Kes 17 in the γ-ray regime.
Identification of a New QTL Region on Mouse Chromosome 1 Responsible for Male Hypofertility: Phenotype Characterization and Candidate Genes

Male fertility disorders often have their origin in disturbed spermatogenesis, which can be induced by genetic factors. In this study, we used interspecific recombinant congenic mouse strains (IRCS) to identify genes responsible for male infertility. Using ultrasonography, in vivo and in vitro fertilization (IVF) and electron microscopy, the phenotyping of several IRCS carrying mouse chromosome 1 segments of Mus spretus origin revealed a decrease in the ability of sperm to fertilize. This teratozoospermia included the abnormal anchoring of the acrosome to the nucleus and a persistence of residual bodies at the level of the epididymal sperm midpiece. We identified a quantitative trait locus (QTL) responsible for these phenotypes and we have proposed a short list of candidate genes specifically expressed in spermatids. The future functional validation of candidate genes should allow the identification of new genes and mechanisms involved in male infertility.

Introduction

In approximately half of the 15% of couples that suffer from sterility, the cause is ascribed to male infertility, which encompasses a wide variety of syndromes. In more than half of infertile men, the cause is classified as idiopathic and may be due to genetic factors. Genetic defects leading to male infertility often affect spermatogenesis or sperm function [1]. Spermatogenesis is a biological process composed of a series of highly complex cellular events. It can be broadly divided into five discrete events: (1) renewal of spermatogonial stem cells and spermatogonia via mitosis, (2) proliferation (via mitosis) and differentiation of spermatogonia, (3) meiosis, (4) spermiogenesis (transformation of round spermatids into elongated spermatids and spermatozoa) and (5) spermiation, the release of sperm from the epithelium into the tubule lumen. All steps are highly regulated, and dysfunctions in this key physiological process can result in infertility. Regulation of this complex process depends on the cooperation of many genes, which are expressed at these different steps [2]. Problems during spermatogenesis are most often reflected in an absent or low production of spermatozoa and are described by routine semen analysis as azoospermia, oligozoospermia, teratozoospermia, asthenozoospermia or a combination of the last three (oligoasthenoteratozoospermia). Teratozoospermia is defined by morphologic sperm abnormalities. Globozoospermia is a rare and severe form of teratozoospermia characterized by round-headed spermatozoa lacking an acrosome, an important and specific organelle that plays a crucial role during fertilization. During spermiogenesis, diverse cellular and molecular processes allow sperm head formation and organization. Mutant models have improved the understanding of the etiology of teratozoospermia and clarified some of the mechanisms involved in sperm head formation and organization [3]. Identification of genes that play a crucial role in spermatogenesis is mainly based on observations in rodents. In particular, the mouse model appears to be an excellent model to study human infertility due to the conservation of the great majority of genes and processes involved in sperm production [4]. In order to identify genetic causes of infertility, we used the interspecific recombinant congenic mouse strains (IRCS) panel developed at the Institut Pasteur (Paris, France) [5].
The IRCS model harbors about 2% of Mus spretus (SEG/Pas) genome at a specific and known location inside a Mus musculus domesticus (C57BL/6J strain) background. The high polymorphism between spretus and musculus makes it possible to identify the quantitative trait locus (QTL) regions responsible for phenotypic variation in a given IRCS strain compared to the reference strain, C57BL/6J (B6) [5][6][7][8][9]. The phenotyping of the Rc3 strain, a congenic strain derived from the BcG-66H IRCS, led us to identify spermatogenesis defects such as acrosomal or flagellar anomalies that are absent in the B6 reference strain. The combination of phenotypic and fine mapping approaches allowed us to map the Mafq1 (male fertility QTL chromosome 1) locus on a unique spretus fragment on mouse chromosome 1 and to identify a short list of candidate genes responsible for the spermatogenesis defects and male hypofertility of the Rc3 strain.

In Vivo Phenotyping
We have previously presented the generation of the subcongenic substrains (named Rc) from the 66HMMU1 strain by recombination events inside the MMU1 spretus segment [9]. Each of these substrains differs from B6 (control) due to the presence of a unique spretus fragment on chromosome 1 in their genome that overlaps with those of the other substrains (Figure 1).

Figure 1. Genotypes of the different recombinant strains (Rc) and the parental strains (spretus and B6). Recombinant strains (Rc) were generated at the Institut Pasteur (Paris) from the 66HMMU1 strain by recombination events inside the MMU1 spretus segment. Black regions correspond to the B6 background, grey regions to spretus fragments, and the minimal spretus region responsible for the phenotype of interest is highlighted in hatching on chromosome 1. Marker positions are given in mega base pairs (Mb). Genome version GRCm38.p6.

Here we present the phenotyping of the Rc3 substrain compared to the B6 strain, using an in vivo ultrasonic method to evaluate the implantation rate at E7.5 and E9.5. For this, we carried out inbred crosses between males and females from each substrain and determined the mean number of implanted embryos. We observed a significant reduction in the mean number of implanted embryos in the Rc3 × Rc3 crosses compared to B6 × B6 controls. To determine whether the defect observed in Rc3 originated from the female or the male side, we performed two reciprocal F1 crosses: Rc3 males with B6 females and B6 males with Rc3 females. The number of implanted embryos was reduced only in the crosses involving Rc3 males (p < 0.001), and we concluded that the defect was due to the male. The Rc4 substrain, which carries a partially overlapping spretus chromosomal fragment, did not show a hypofertility phenotype (Figure 2).

Phenotype Characterization
To identify the origin of the male defect(s), we focused on matings between Rc3 males and B6 females. The next day, to verify in vivo fertilization, we collected and counted the number of fertilized oocytes from the ampullae of the females presenting with a vaginal plug. We compared the results from the B6 × Rc3 crosses with those from the B6 × B6 controls and observed a significant (p = 0.036) reduction in the fertilization rate with Rc3 males, from 73 ± 6% for the B6 controls to 53 ± 7% for the Rc3 substrain (Figure 3).

We also performed in vitro fertilization (IVF) using B6 oocytes and Rc3 versus B6 epididymal sperm. We observed a significant reduction of Rc3 (B6 oocytes and Rc3 epididymal sperm) fertilizing ability, with a Fertilization Index (FI: mean number of sperm fused per egg) of 0.33 ± 0.06, compared to B6 controls (B6 oocytes and B6 epididymal sperm) with a FI of 1.18 ± 0.06 (p < 0.0001). Interestingly, when we increased the concentration of Rc3 sperm (10 times), the FI was brought back to normal values (FI = 1.08 ± 0.05, p = 0.47) (Figure 4).
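As a small worked illustration of the fertilization index (FI) defined above and of the kind of two-group comparison reported here, the sketch below uses purely hypothetical per-egg counts; the numbers, sample names and the choice of Welch's t-test are assumptions for demonstration, not the data behind Figure 4.

```python
# Hypothetical example: computing a Fertilization Index (FI) per male and
# comparing two groups, in the spirit of the B6 vs. Rc3 IVF comparison.
from statistics import mean
from scipy import stats

# Placeholder data: for each male, the number of decondensed sperm heads
# counted in each inseminated zona-free egg (values are illustrative only).
fused_per_egg = {
    "B6_male_1":  [1, 2, 1, 1, 0, 2],
    "B6_male_2":  [1, 1, 2, 1, 1, 1],
    "Rc3_male_1": [0, 1, 0, 0, 1, 0],
    "Rc3_male_2": [0, 0, 1, 0, 0, 1],
}

# FI = mean number of fused sperm per egg, computed per male.
fi = {male: mean(counts) for male, counts in fused_per_egg.items()}

b6 = [v for k, v in fi.items() if k.startswith("B6")]
rc3 = [v for k, v in fi.items() if k.startswith("Rc3")]

# Two-sample comparison between groups (Welch's t-test, unequal variances allowed).
t_stat, p_value = stats.ttest_ind(b6, rc3, equal_var=False)
print(f"FI B6 = {mean(b6):.2f}, FI Rc3 = {mean(rc3):.2f}, p = {p_value:.3f}")
```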
Characterization of the Sperm Defect of the Rc3 Males
To identify the origin of this sperm dysfunction, we first counted the sperm of B6 and Rc3 males used for IVF assays and did not observe any significant difference between the two groups (29.67 ± 1.85 × 10^6 (n = 3) for the B6 males and 32.33 ± 3.18 × 10^6 (n = 3) for the Rc3 males) (Figure 5a, p = 0.5). After an inconclusive morphological observation under phase-contrast microscopy, Rc3 sperm were observed and compared to B6 sperm using fluorescence microscopy with PSA-FITC (Pisum sativum agglutinin conjugated to fluorescein) labelling to detect the presence of the sperm acrosome and DAPI (4′,6-diamidino-2-phenylindole) for the nucleus. A slight difference (p = 0.03) in the presence of the sperm acrosome was observed on non-capacitated, freshly recovered epididymal sperm, in which the acrosome was present in 86.4 ± 4.5% of B6 sperm and in 71.8 ± 4.6% of Rc3 sperm (Figure 5b). After 90 min of capacitation in Ferticult medium supplemented with 3% Bovine Serum Albumin (BSA) at 37 °C under 5% CO2, this percentage remained approximately the same for B6 sperm (89.4 ± 2.6%), while in sperm from Rc3 males it dropped drastically to only 26.8 ± 4.5% (p < 0.0001, Figure 5c). These results likely reflect a defect at the acrosome level, which leads to early loss of the acrosome during capacitation through a spontaneous acrosome reaction (sAR). These results were complemented by observing sperm motility under a microscope, as carried out during IVF experiments. While B6 sperm showed a total motility exceeding 50% on average, that of Rc3 sperm rarely exceeded 20%. When we looked at the fraction of sperm showing progressive motility, this percentage dropped from 20-25% for B6 sperm to 10% at best for Rc3 sperm.

Alterations in sperm could explain the high rate of spontaneous acrosomal reaction that occurred during capacitation. In order to investigate whether discrete alterations exist at the ultrastructural level, a second observation of Rc3 and B6 sperm was performed using electron microscopy (EM). A slight acrosome defect, consisting of a detachment between the anterior part of the acrosome and the nucleus, was observed in about half of Rc3 sperm (Figure 6a). This acrosomal defect was not seen in B6 control sperm. EM also revealed anomalies of the flagella (Figure 6b). Indeed, in Rc3 sperm, residual bodies at the flagella, not found on B6 sperm, were observed in 15 to 20% of sperm. These residual bodies contain large lipid droplets that likely reflect a metabolic dysfunction and/or a spermiation defect.

Figure 6. Observation of B6 and Rc3 sperm heads (a) and flagella (b) using electron microscopy. No abnormality was observed on B6 sperm, where the nuclear and inner acrosomal membranes seem joined, whereas the acrosome seems abnormally attached to the nucleus in approximately 50% of the sperm heads of Rc3 males (a, arrows), in addition to the presence of lipid droplets (b, arrowheads) in the residual bodies at the level of the midpiece in approximately 15% of the sperm.

QTL Fine Mapping
The Rc3 substrain differs from the B6 strain by a spretus fragment of about 14 Mb on the MMU1 chromosome, delimited by the D1Mit438 and rs13476005 markers (see Figure 1). This spretus region contains the gene(s) responsible for the observed phenotype. We undertook to decrease the size of the fragment. Since the Rc4 strain does not show the hypofertility phenotype (Figures 1 and 2), we excluded the 4 Mb spretus region between the two markers D1Mit305 and rs13476005 that is shared by Rc3 and Rc4. Therefore, we defined a QTL region of about 4.2 Mb on chromosome 1 between D1Mit438 and D1Mit305 (hatched zone in Figure 1) as responsible for male hypofertility. We named this QTL Mafq1 (Male fertility QTL chromosome 1).

Identification of Candidate Genes in the QTL Region Responsible for Rc3 Sperm Phenotype
We explored the Mafq1 interval to identify candidate genes whose presence in the spretus haplotype within a B6 background could be responsible for the hypofertility phenotype. Table S1 compiles all sequences described in different databases as protein coding genes (n = 71), processed transcripts (n = 37), pseudogenes (n = 50) and non-coding RNAs (n = 48). We also documented their expression profile, when available, at the RNA and protein level, in order to focus on genes expressed in the testis and preferably at the spermatid stage, which could explain the dysfunction of the acrosome. Some genes have already been invalidated in mouse models (n = 28), and although some of these knockout (KO) models expressed phenotypes broadly related to reproduction, notably, none of them recapitulated the particular features observed in the Rc3 mouse. The supposed or validated function of each protein has also been documented from the literature. Finally, we counted the single nucleotide polymorphisms (SNPs) with putative functional impact between spretus and B6, particularly in the open reading frames, where they could be responsible for structural changes affecting protein interactions. We then applied these filters to select the most relevant candidate genes.
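The multi-criteria triage just described lends itself to a simple tabular filter. The sketch below is a hypothetical illustration of that workflow using pandas; the file name, column names and thresholds are assumptions for demonstration and do not reproduce Table S1.

```python
# Hypothetical sketch of the candidate-gene triage described above; the file name
# and column names are illustrative placeholders, not Table S1 itself.
import pandas as pd

genes = pd.read_csv("mafq1_interval_genes.csv")  # assumed export of Table S1-like annotations

candidates = genes[
    (genes["biotype"] == "protein_coding")              # keep protein coding genes
    & (genes["testis_expression"] == "high")            # expressed in the testis
    & (genes["spermatid_enriched"] == True)             # post-meiotic (spermatid) expression
    & (genes["ko_normal_spermatogenesis"] != True)      # no KO model, or KO with a phenotype
    & (genes["deleterious_coding_snps"] >= 1)           # spretus vs. B6 coding SNPs with impact
]

print(candidates.sort_values("deleterious_coding_snps", ascending=False)["symbol"].tolist())
```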
We identified 8 genes highly expressed in the testis: Trip12, Fbxo36, Spata3, Tex44, Pde6d, Efhd1, Dnajb3 and Mroh2a, of which two are exclusively expressed in the testis (Spata3 and Tex44). Because the reported expression levels (mRNA) differed between databases, we performed qRT-PCR experiments to analyze the expression of several candidate genes in the testis. The different cellular populations of the testis were sorted by flow cytometry, and the expression level of Spata3, Dnajb3, Pde6d and Efhd1 was evaluated (Figure 7). While three of them appeared to be expressed at different stages of spermatogenesis, Spata3 was exclusively expressed at the spermatid stage.

Discussion
In previous studies, we have shown that IRCS mice can provide novel phenotypes of interest and that they have the power to identify the responsible gene [9][10][11][12]. As an example, we have proposed Fidgetin-like 1 (on chromosome 13) as a strong candidate for the dynamic impairment of male meiosis, which leads to reduced testis weight in mice [8]. This gene has been described as pivotal for meiotic recombination [13]. This approach allows the identification of new loci and genes that can be important for gametogenesis and whose study can improve our knowledge of infertility disorders. Similarly, here, using these IRCS lines, we have shown an association between a QTL on chromosome 1 (Mafq1), present in the IRCS-derived Rc3 subcongenic strain, and male hypofertility. For this, we first used the ultrasonic method to estimate the number of implanted embryos in reciprocal crosses involving B6 and Rc3 females mated with either Rc3 or B6 males. The number of implantation sites was significantly lower in both B6 and Rc3 females when mated with Rc3 males, although we did not observe any difference in terms of embryonic resorption. This led us to assess the fertilizing ability of Rc3 sperm. Indeed, in vivo as in vitro, the fertilization rate was found to be lower with the sperm obtained from Rc3 males. We did not observe any obvious morphological alterations under light microscopy but found reduced motility in Rc3 sperm; thus, we increased the Rc3 sperm concentration ten-fold during IVF. This made it possible to restore the fertilization index to the same level as with B6 sperm. The decrease in fertilization ability observed in vitro was greater than that observed in vivo. This difference is probably explained by the optimal conditions and high efficacy of in vivo fertilization compared to in vitro fertilization [14]. The second explanation is that the sperm make a long journey in the female genital tract. Sperm motility is one of the elements contributing to the success of this journey. As a result, a form of selection takes place and sperm with significantly reduced motility do not reach the oviduct. Conversely, in vitro, sperm are deposited directly in contact with oocytes, and even those with reduced motility participate in fertilization. This could further reduce the fertilization rate. Therefore, IVF better reveals minor functional alterations, as seen in our other examples regarding sperm from Spaca6 heterozygous mutant males [15]. We then looked for the presence of the acrosome.
While this was similar on non-capacitated sperm, the acrosome was absent more often after capacitation in Rc3 sperm, reflecting an early sAR in Rc3 sperm and the potential fragility of the acrosome. Early sAR may not enable the fertilization process to proceed, but the timing of the acrosome reaction (AR) seems to be flexible [16]. The spontaneous acrosomal reaction is a physiological phenomenon, but its frequency increases in the Rc3 strain [17,18]. In humans, sperm with a high proportion of sAR result in poor success in IVF [19]. Via its ability to prevent early sAR before the sperm reach the female genital tract, Paraoxonase 1 activity seems to have a positive effect on fertility [20]. In mice, this question remains debatable. For example, the disruption of mouse CD46 causes an accelerated sAR but also facilitates the fertilizing ability of males [21]. Supporting this idea, Sebkova et al. showed that the relocation of Izumo1 takes place normally even after sAR [18]. This apparent contradiction could probably be explained by the exact timing and magnitude of this early acrosomal reaction. These questions could be addressed by invalidating candidate genes. The electron microscopy analysis finally made it possible to specify the defect in Rc3 sperm by highlighting a detachment of the acrosome, which does not adhere correctly to the nucleus. This indicates that the protein involved in this phenotype could be localized at the level of the inner membrane of the acrosome, of the nuclear membrane, or even between the two at the level of the acroplaxome. If not, this protein could be localized elsewhere but have an indirect action via a regulatory role. Such a phenotype has already been observed in Hipk4 (homeodomain-interacting protein kinase 4) KO mice, demonstrating that this gene is essential for murine spermiogenesis as a regulator of the shape of the sperm head [22], and in germ cell-specific Sirt1 KO infertile mice, where disrupted spermiogenesis caused defects in acrosome biogenesis, resulting in a phenotype similar to that observed in human globozoospermia [23]. In addition, the absence of DPY19L2, an inner nuclear membrane protein, causes globozoospermia in humans and in mice by preventing the anchoring of the acrosome to the nucleus [24,25]. The phenotype that we describe here suggests an abnormal organization of the acrosome. Admittedly, the observed phenotype is not as drastic as those mentioned above, which might suggest that the genetic defect in the Rc3 strain, probably resulting from the presence of a protein with altered function, could be different from the one observed in the complete absence of the protein, as in the case of KO mice. However, this difference could be sufficient to prevent, at least partially, normal protein interactions, thus reinforcing the idea of the "transcriptomic shock" resulting from the abrupt contact between two divergent genomes [6]. This morphological abnormality seems to characterize a teratozoospermia akin to partial globozoospermia. The second phenotype revealed by electron microscopy is the presence of residual cytoplasm containing numerous lipid droplets in a fraction of Rc3 sperm. This phenotype reflects some defects in spermiation. Because the epididymal sperm concentration in the Rc3 strain was similar to that of the controls, we can conclude that the production and disengagement of spermatids, and their release towards the lumen of the seminiferous tubules, were normal.
Nevertheless, about 20% of Rc3 sperm still present residual bodies, which suggests a defective phagocytosis process. This phenotype could have its origin in a dysfunction either at the spermatid level or at the Sertoli cell level, and could also explain the persistence of lipid droplets in the residual bodies. Indeed, the phagocytosis of residual bodies is associated with a peak in the number of lipid droplets at the base of the Sertoli cell, as observed in various species [26]. Alternatively, the presence of these lipid droplets in epididymal sperm could also reflect a metabolic dysfunction. At this stage, it is not possible to know whether one and the same protein is responsible for the two distinct observed phenotypes. Further investigations are needed. These will probably require invalidating one or more genes present in the Mafq1 interval. We hypothesize that one or more genes from the interval have an epistatic interaction with one or more genes outside of the interval; in the Rc3 strain, the spretus allele(s) fail to interact properly with the B6 allele(s). This relationship could involve, for example, the participation of heterodimers in a protein complex or ligand-receptor binding. One of these candidate genes could be related to the Hipk4, Sirt1 or Dpy19l2 genes, whose invalidation affects acrosome biogenesis in mice or humans. The production and fine genetic characterization of partially overlapping strains defined a genetic interval associated with the above-described phenotypes. This interval contains a large number of genes (71 coding genes); we therefore applied several filters in order to prioritize those that are more likely to cause the phenotypes observed in our study. Depending on the expression database consulted, 22 genes out of a total of 71 are reported to be expressed in the testis (see Table S1). The inactivation of 14 of them did not result in spermatogenesis defects (for references, see Table S1). Of these 14 genes, the case of Dnajb3 is questionable. Indeed, this gene was initially described as exclusively expressed in the testis, particularly at the spermatid stage [27,28], in the developing acrosomal vesicle and sperm centriolar region [29]. However, other studies found its expression in other tissues [30][31][32][33]. Its role seems to be related to the proteasome and protein quality control [34], but also to insulin signaling, glucose uptake and oxidative stress [30,35]. Although only two coding SNPs differ between B6 and spretus, the spretus allele of one of them is predicted to be deleterious to protein structure (SIFT, Ensembl (www.ensembl.org)) (Table S1). Finally, a KO strain available from the Jackson Laboratory (https://www.mousephenotype.org/data/genes/MGI:1306822) showed no reproductive phenotype. These latest data seem to downplay the role of Dnajb3 in spermatogenesis, either because its participation in this process is not essential or because its function is compensated for by another gene in KO models. Among the genes remaining in the short list and for which no KO model exists, Armc9 gene mutations have been described in human Joubert syndrome, but the variety of phenotypes does not include a phenotype related to reproduction. In addition, Van De Weghe et al. found that CRISPR/Cas9-mediated KO of armc9 in zebrafish resulted in a curved body shape, retinal dystrophy, coloboma, reduced cilia number in ventricles, and shortened cilia in photoreceptor outer segments [36].
Cops7b and Chrnd are not expressed in mouse testis, but their expression is described in human testis. Mutations of the CHRND gene in humans have been reported to be associated with congenital myasthenic syndrome [37][38][39]. Efhd1 is expressed in the testis but also in the ovary, kidney and placenta (Table S1). Mroh2a shows ubiquitous expression, but with a higher level of expression in the testis, while Hjurp is not expressed in mouse testis. The testicular expression of the Glrp1 gene is described in one database but not in another, and no orthologous gene is described in humans. The filters that we applied make it possible to exclude some genes, such as those that are not expressed in the testis, those for which the KO does not show a spermatogenesis phenotype and even those without significant SNPs. These filters highlight two genes that are probably involved in the observed phenotype. These are Spata3 and Tex44, which present an expression profile restricted to the testis, and particularly to spermatids. Our verification by qRT-PCR indicated the post-meiotic expression (at the spermatid stage) of Spata3, which confirmed the data found in the expression databases. The expression profile at the protein level given by the Protein Atlas indicates their presence only at the spermatid stage in human testis. These two genes have not been studied in the literature, and sequence analysis does not help to relate them to known protein families or to suggest a particular function. Although the name "Spata", meaning "spermatogenesis associated", is common to several dozen genes, no particular functions or domains link these genes. Like Tex44, Spata3 was identified as a gene only very recently. Nevertheless, they both accumulate SNPs in coding sequences that could modulate protein structure and/or function and interactions with other partners expressed in a B6 version in the IRCS. Therefore, they appear to be the best candidates.

Animals
The generation of the recombinant substrains from BcG-66H and 66HMMU1 at the Institut Pasteur (Paris) has been previously reported [5,9]. After weaning, 4-week-old mice were maintained in an animal facility at the Cochin Institute (Paris) at normal temperature (21-23 °C) and a 14 h light/10 h dark photoperiod, with free access to water and food. For all experiments, animals were sacrificed by cervical dislocation.

Phenotyping by High Frequency Ultrasonography
To evaluate the implantation rate, mice from the B6 (reference strain) and Rc strains were crossed and phenotyped at the small animal imaging facility of the Cochin Institute using high frequency ultrasonography (VEVO 770, VisualSonics, Toronto, Canada). Each female was used only once to collect phenotypic data, from the first gestation. Briefly, a chemical hair remover was used to eliminate abdominal hair. Ultrasonographic contact gel was used to ensure contact between the skin surface and the transducer. Body temperature, electrocardiographic and respiratory profiles were monitored using the device's integrated heating pad and monitoring device (THM150, Indus Instruments, Webster, TX, USA). The implantation rate was determined early in gestation, at E7.5 and E9.5. At these stages, the small size of the embryos permits an easy count, and resorbed embryos are also visible.

Evaluation of in Vivo Fertilization
B6 females were mated with Rc3 or B6 males overnight (one couple per cage) and, the next day, females with a vaginal plug were isolated.
Oocytes were collected from the ampullae of the oviducts and freed from the cumulus cells by brief incubation at 37 °C with hyaluronidase (Sigma, St. Louis, MO, USA) in M2 medium (Sigma). Oocytes were rinsed and kept in M2 medium in a humidified 5% CO2 atmosphere at 37 °C before mounting. The number of fertilized oocytes was evaluated using DAPI immunofluorescent staining to visualize the DNA.

In Vitro Fertilization Assays
Oocyte preparation: Five-week-old B6 females were injected with 5 IU of PMSG (pregnant mare's serum gonadotrophin; Intervet, France) followed by 5 IU of hCG (human chorionic gonadotrophin; Intervet) 48 h later. After super-ovulation, cumulus-oocyte complexes were collected from the ampullae of the oviducts about 14 h after hCG injection. Oocytes were freed from the cumulus cells by 3-5 min of incubation at 37 °C with hyaluronidase (Sigma) in M2 medium (Sigma). Oocytes were rinsed and kept in Ferticult medium (FertiPro, Belgium) at 37 °C under a 5% CO2 atmosphere under mineral oil (Sigma). The zona pellucida was then dissolved with acidic Tyrode's (AT) solution (pH 2.5, Sigma) under visual monitoring. The zona-free eggs were rapidly washed in medium and kept at 37 °C under a 5% CO2 atmosphere for 2 to 3 h to recover their fertilizability. Sperm preparation: Mouse spermatozoa were collected from the caudae epididymis of 8-13-week-old B6 and Rc3 males and capacitated at 37 °C under 5% CO2 for 90 min in a 500 µL drop of Ferticult medium supplemented with 3% BSA (Sigma), under mineral oil. In vitro fertilization: Zona-free eggs were inseminated with capacitated sperm for 3 h in a 100 µL drop of Ferticult 3% BSA medium at a final concentration of 10^5/mL, or 10^6/mL when the aim was to try to recover the fertilizing ability of Rc3 sperm by concentrating it ten times. Then, they were washed and directly mounted in Vectashield/DAPI (Vector Laboratories, CA, USA) for microscopy observation. The oocytes were considered fertilized when they showed at least one fluorescent decondensed sperm head within their cytoplasm.

Assessment of Sperm Acrosome Reaction
Freshly recovered or capacitated sperm were washed in PBS containing 1% BSA, centrifuged at 300× g for 5 min and immediately fixed in 4% paraformaldehyde (Electron Microscopy Sciences, Hatfield, PA, USA) in PBS with 1% BSA at 4 °C for 1 h. In order to detect the sperm acrosomal status, after washing, the fixed spermatozoa were treated for 30 min in 95% ethanol at 4 °C, washed by centrifugation through PBS and stained with FITC-conjugated lectin PSA (25 µg/mL in PBS) for 10 min. Nuclei were stained with DAPI (blue). After repeated washing with double distilled water, a drop of sperm suspension was smeared on a slide, air-dried, and mounted with Vectashield. Detection was performed using a Nikon Eclipse E600 or a Zeiss Axiophot epifluorescence microscope, and images were digitally acquired with a camera (Coolpix 4500, Nikon).

Transmission Electron Microscopy Analysis of Sperm Cells
Mouse spermatozoa from three different males were prepared as described above (in vitro fertilization) and fixed by incubation in PBS supplemented with 2% glutaraldehyde (Grade I, Sigma) for 2 h at room temperature. Samples were washed twice in PBS and post-fixed by incubation with 1% osmium tetroxide (Electron Microscopy Sciences), after which they were dehydrated by immersion in a graded series of alcohol solutions and embedded in Epon resin (Polysciences Inc., Warrington, PA, USA).
Ultra-thin sections (90 nm) were cut with a Reichert Ultracut S ultramicrotome (Reichert-Jung AG, Vienna, Austria) and were then stained with uranyl acetate and lead citrate. Sections were analyzed with a JEOL 1011 microscope, and digital images were acquired with a Gatan Erlangshen CCD camera and Digital Micrograph software. The integrity of sperm organelles was checked, including the acrosome, nucleus and flagellum. Acrosomes with a space between the nuclear membrane and the inner acrosomal membrane were considered abnormal. Acrosomes with membranes in close contact were considered normal. The presence of residual bodies (cytoplasmic leftovers) was recorded.

Digital and Bibliographic Tools Used to Shorten the List of Candidate Genes
Databases were searched for the presence of genic sequences, including the UCSC Genome Browser (genome.ucsc.edu), Ensembl (www.ensembl.org), and NCBI Gene (www.ncbi.nlm.nih.gov/gene). The nature of these sequences was reported as protein coding gene, processed transcript, pseudogene, or non-coding RNA, although some discrepancies exist between databases. The RNA expression level was searched in the BioGPS (biogps.org) and NCBI Gene databases, in mouse and in human when available. Results from antibody labelling on human tissues were obtained from the Protein Atlas (proteinatlas.org). Data relative to mouse knockout models and human diseases were collected from the Mouse Genome Informatics website (www.informatics.jax.org). Protein function was obtained from the Gene Ontology website (www.geneontology.org). SNPs were retrieved from the Mouse Genomes Project (www.sanger.ac.uk/sanger/Mouse_SnpViewer) and MGI databases. All analyses were done on the GRCm38 version of the mouse genome.

Suspension Cell Sorting
Testis cells from 2-month-old B6 mice were obtained as described previously [40]. The testis albuginea was removed and seminiferous tubules were dissociated by enzymatic digestion with collagenase type I at 100 U/mL for 15 min at 32 °C in HBSS supplemented with 20 mM HEPES pH 7.2, 1.2 mM MgSO4·7H2O, 1.3 mM CaCl2·2H2O, 6.6 mM sodium pyruvate, and 0.05% lactate. After an HBSS wash and centrifugation, the pelleted tubules were further incubated in cell dissociation buffer (Invitrogen) for 25 min at 32 °C. The resulting whole cell suspension was successively filtered through a 40 µm and then a 20 µm nylon mesh to remove cell clumps. After an HBSS wash, the cell pellet was resuspended in incubation buffer (HBSS supplemented with 20 mM HEPES pH 7.2, 1.2 mM MgSO4·7H2O, 1.3 mM CaCl2·2H2O, 6.6 mM sodium pyruvate, 0.05% lactate, glutamine and 1% fetal calf serum) and stained with Hoechst 33342 (5 µg/mL) for 1 h at 32 °C in a water bath. Cells were then labeled with monoclonal antibodies (1 µg per 10^6 cells) from BD Pharmingen: anti-c-Kit-biotin (2B8) and anti-α6 integrin-PE (GoH3). Cell sorting was performed on an ARIA sorter (Becton Dickinson).

RNA Extraction and Quantitative RT-PCR
Total RNA was extracted using TRIzol Reagent (Invitrogen, Carlsbad, CA, USA) in accordance with the manufacturer's instructions. After RNA preparation, total RNA was treated with DNase I (Invitrogen Life Technologies) for 10 min at room temperature, followed by inactivation with EDTA (Sigma). Total RNA was reverse transcribed to obtain cDNA using M-MLV Reverse Transcriptase (Invitrogen, Carlsbad, CA, USA) following the manufacturer's protocols.
Quantitative PCR was carried out using the Fast SYBR Green Master Mix (Applied Biosystems) and a real-time PCR system (LightCycler 1.5, Roche Diagnostics, Division Applied Sciences, Meylan, France) according to standard PCR conditions. To validate the primers used in qRT-PCR, four pairs of primers were tested for each candidate gene and reference gene. For quantitative calculations, values were normalized to mouse Cyclophilin A expression.

Statistical Analysis
Results are expressed as mean ± SEM of at least three independent experiments. For statistical analysis, a one-way ANOVA multiple comparisons test or a t-test was performed using GraphPad Prism version 7.00 for Windows (GraphPad Software, La Jolla, CA, USA). Differences were considered statistically significant when p < 0.05.
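As a concrete illustration of the normalization step, the following sketch shows a relative-expression (2^(-ΔΔCt)) calculation against Cyclophilin A; the Ct values, population names and the choice of calibrator are hypothetical placeholders, not measurements from this study.

```python
# Hypothetical 2^(-ΔΔCt) relative quantification, normalized to Cyclophilin A.
# All Ct values below are illustrative placeholders.
ct = {
    "spermatids":    {"Spata3": 24.1, "CyclophilinA": 18.3},
    "spermatocytes": {"Spata3": 29.8, "CyclophilinA": 18.1},
}

calibrator = "spermatocytes"  # assumed calibrator population

def relative_expression(sample: str, gene: str, reference: str = "CyclophilinA") -> float:
    """Return 2^-(ΔΔCt) of `gene` in `sample`, relative to the calibrator population."""
    d_ct_sample = ct[sample][gene] - ct[sample][reference]
    d_ct_calib = ct[calibrator][gene] - ct[calibrator][reference]
    return 2 ** -(d_ct_sample - d_ct_calib)

print(f"Spata3, spermatids vs. spermatocytes: {relative_expression('spermatids', 'Spata3'):.1f}-fold")
```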
On the defocusing semilinear wave equations in three space dimension with small power By introducing new weighted vector fields as multipliers, we derive quantitative pointwise estimates for solutions of defocusing semilinear wave equation in $\mathbb{R}^{1+3}$ with pure power nonlinearity for all $1<p\leq 2$. Consequently, the solution vanishes on the future null infinity and decays in time polynomially for all $\sqrt{2}<p\leq 2$. This improves the uniform boundedness result of the second author when $\frac{3}{2}<p\leq 2$. Introduction In this paper, we continue our study on the global pointwise behaviors for solutions to the energy subcritical defocusing semilinear wave equations in R 1+3 with small power 1 < p ≤ 2. The existence of global solutions in energy space is well known since the work [5] of Ginibre-Velo for the energy sub-critical case 1 < p < 5. The long time dynamics of these global solutions concerns mainly two types of questions: The first type is the problem of scattering, namely comparing the nonlinear solutions with linear solutions as time goes to infinity. A natural choice for the linear solution is the associated linear wave. Since linear wave decays in time in space dimension d ≥ 2, the nonlinear solution approaches to linear wave in certain sense for sufficiently large power p. This was shown by Strauss in [14] for the super-conformal case 3 ≤ p < 5 in space dimension three. Extensions could be found for example in [1], [6], [9], [13], [17]. The latest work [18] of the second author shows that the solution scatters to linear wave in energy space for 2.3542 < p < 5 in R 1+3 . However the precise asymptotics of the solutions remains unclear for small power p. One of the difficulties is that the equation degenerates to linear Klein-Gordon equation when p approaches to the end point 1. Another question is to investigate the asymptotic behaviors of the solutions in the pointwise sense, which is plausible in lower dimensions d ≤ 3. The obstruction in higher dimension is that taking sufficiently many derivatives for energy sub-critical equations is not possible for general p. Even for energy critical equations, the global regularity result only holds in space dimension d ≤ 9 (see for example [10]). The energy super-critical case remains completely open (see recent breakthrough in [4] for the blowing up result for the defocusing energy super-critical nonlinear Schrödinger equations). The scattering result of Strauss is based on the time decay t ǫ−1 of the solution, which has been improved in [2] by using conformal compactification method. However this method only works for super-conformal case when p ≥ 3. To study the asymptotic behaviors of the solutions with sub-conformal power p < 3, Pecher in [13], [7] observed that the potential energy decays in time with a weaker decay rate (comparing to t −2 for the super-conformal case). This allows him to obtain polynomial decay in time of the solutions when p > 1+ √ d 2 +4 2 in space dimension d = 2 and 3. However, Pecher's observation was based on the conformal symmetry of Minkowski spacetime, that is, the time decay of the potential energy is derived by using the conformal Killing vector field as multiplier. As we have seen, the smaller power p leads to the slower decay of the nonlinearity, hence making the analysis more difficult. More precisely, the power p is closely related to the weights in the multipliers. 
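For orientation, a defocusing semilinear wave equation with pure power nonlinearity, the type of model referred to below as equation (1), takes in the standard convention the form
\[
\partial_t^2 \phi - \Delta \phi + |\phi|^{p-1}\phi = 0, \qquad (t,x) \in \mathbb{R}^{1+3}, \qquad (\phi, \partial_t \phi)\big|_{t=0} = (\phi_0, \phi_1),
\]
so that the potential energy density $\frac{1}{p+1}|\phi|^{p+1}$ is nonnegative; this sign structure is the defocusing feature exploited by the multiplier estimates discussed next. The display is a hedged reconstruction of the model, not a verbatim restatement of the paper's equation (1).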
For the super-conformal case $p \geq \frac{d+3}{d-1}$, one can use the conformal Killing vector field with weights $t^2$, which leads to the time decay $t^{-2}$ of the potential energy. For the end point case $p = 1$, as far as we know, there are no similar weighted energy estimates for linear Klein-Gordon equations without appealing to higher order derivatives. This suggests that it is important to use multipliers with proper weights depending on the power $p$ in order to reveal the asymptotic behaviors of the solutions. As the power $p$ varies continuously, this in particular calls for a family of weighted vector fields which are consistent with the structure of the equations. The robust new vector field method originally introduced by Dafermos and Rodnianski in [3] provides such a family of multipliers $r^{\gamma}(\partial_t + \partial_r)$ with $0 \leq \gamma \leq 2$. Combined with the well known integrated local energy estimates (see for example [12]), a pigeon-hole argument leads to the improved time decay of the potential energy. This enabled the second author in [18] to show that the solution decays at least like $t^{-\frac{1}{3}}$ for $2 < p < 5$ in three space dimensions. The lower bound $p > 2$ arises from the fact that the pigeon-hole argument works only for $\gamma > 1$. However, the multipliers can be used for all $0 \leq \gamma \leq 2$, and hence uniform boundedness (or spatial decay) of the solution holds for $\frac{3}{2} < p \leq 2$. We see that there is a gap regarding the time decay of the solutions between the cases $p > 2$ and $p \leq 2$. Moreover, this method fails in lower dimensions $d \leq 2$. The philosophy that a suitably weighted multiplier yields the time decay of the potential energy inspired us to introduce new non-spherically symmetric weighted vector fields as multipliers in [16], [15] to show the polynomial decay in time of the solution for all $p > 1$ in space dimensions one and two. This in particular extends the result of Lindblad and Tao in [11], where an averaged decay of the solution was shown for the defocusing semilinear wave equation in $\mathbb{R}^{1+1}$. The aim of this paper is to investigate the asymptotic behaviors of the solutions in three space dimensions with small power $p \leq 2$. Again, the essential idea is to introduce some new weighted vector fields as multipliers, which are partially inspired by our previous work [15] in space dimension two. This allows us to obtain potential energy decay for all $1 < p < 5$ and time decay of the solutions for all $p > \sqrt{2}$, hence filling the gap left in [18]. To state our main theorem, for some constant $\gamma$ and integer $k$, define the weighted energy norm of the initial data. We prove in this paper that

Theorem 1.1. Consider the defocusing semilinear wave equation (1) with initial data $(\phi_0, \phi_1)$ such that $E_{1,2}$ is finite. Then for all $1 < p < 2$, the solution $\phi$ to the equation (1) exists globally in time and verifies the following asymptotic pointwise estimates for some constant $C$ depending only on $p$. For the quadratic nonlinearity with $p = 2$, it holds that for all $0 < \epsilon < \frac{1}{2}$, with constant $C_{\epsilon}$ depending only on $\epsilon$.

We give several remarks.

Remark 1.1. The theorem implies that the solution decays along outgoing null curves ($|t| - |x|$ constant) for all $1 < p \leq 5$ (see [18] for the case when $2 < p < 5$ and [8] for the energy critical case). In other words, the solution vanishes on the future (and past) null infinity, and blow-up can only occur at time infinity. It will be of great interest to see whether such blow-up can happen, particularly for $p$ close to 1.

Remark 1.2.
In view of the energy conservation, one can easily conclude that the solution grows at most polynomially in time $t$, with a rate depending on the power $p$. The theorem improves this growth for $1 < p \leq \sqrt{2}$ and shows that the solution decays inverse polynomially in time for $\sqrt{2} < p \leq 2$. In particular, it fills the gap left in [18] by the second author, in which only uniform boundedness of the solution was obtained for $\frac{3}{2} < p \leq 2$, while time decay with rate at least $t^{-\frac{1}{3}}$ was shown for $2 < p < 3$.

Remark 1.3. The proof also indicates that the potential energy decays in time for all $1 < p \leq 3$, with some constant $C$ depending only on $p$. This time decay estimate is stronger than that in [17].

Now let us review the main ideas for studying the asymptotic behaviors of defocusing semilinear wave equations. The early pioneering works (for example [6], [13], [7]) relied on the time decay of the potential energy obtained by using the conformal Killing vector field $t^2 \partial_t + r^2 \partial_r$ ($r = |x|$) as multiplier. The new vector field method of Dafermos and Rodnianski can improve this time decay in the following way: first, the $r$-weighted energy estimates derived by using the vector fields $r^{\gamma}(\partial_t + \partial_r)$ with $0 \leq \gamma \leq 2$ as multipliers yield an $r$-weighted energy bound. However, in order to obtain time decay of the potential energy, one then needs to combine the $r$-weighted energy estimate with the integrated local energy estimates. A pigeon-hole argument then leads to the energy flux decay through the outgoing null hypersurface $H_u$ (the constant $u$ hypersurface with $u = \frac{t-r}{2}$). Integrating in $u$, we end up with the weighted spacetime bound
$$\int_{\mathbb{R}^{1+3}} (1 + |u|)^{\gamma - 1 - \epsilon} |\phi|^{p+1} \, dx \, dt \leq C, \qquad \forall\, 0 < \epsilon < \gamma - 1,$$
by assuming $\gamma > 1$, which requires $p > 2$. In view of the above $r$-weighted energy estimate, one then derives a time weighted spacetime bound. This improves the above time decay (2) for the sub-conformal case $p < 3$ and is sufficient to conclude the time decay estimates of the solutions for $p > 2$ in [18]. However, the above new vector field method works only in space dimension $d \geq 3$, due to the lack of integrated local energy estimates in lower dimensions. To improve the asymptotic decay estimates of the solution in space dimension two, we introduced in [15] non-spherically symmetric vector fields as multipliers applied to regions bounded by the null hyperplane $\{t = x_1\}$ and the initial hypersurface. The advantage of using such non-spherically symmetric vector fields is that we can make use of the reflection symmetry $x_1 \to -x_1$ as well as rotation symmetries. This enables us to derive the time decay of the potential energy in space dimension two. This decay rate is consistent with that in higher dimensions. However, we emphasize here that this method works for all $p > 1$, while a lower bound $p > 1 + \frac{2}{d-1}$ was required in higher dimensions (see [17]). For the three dimensional case when $p \leq 2$, we observe that it is not likely that one can use multipliers with weights higher than $t$. Although the vector fields $r^{\gamma}(\partial_t + \partial_r)$ can be used for all $0 \leq \gamma \leq 2$, they do not contain weights in time. In particular, these vector fields can only lead to spatial decay of the solution instead of time decay. Inspired by our previous work in space dimension two, we introduce new non-spherically symmetric weighted vector fields as multipliers.
By applying this vector field to the region bounded by the null hyperplane {t = x 1 − 1}, the initial hypersurface and the constant t-hypersurface, we can derive that Using the symmetry x 1 → −x 1 , we then conclude the time decay of the potential energy In view of this, we believe that such method can also lead to the time decay of potential energy in higher dimensions for the full energy sub-critical case. Regarding the pointwise decay estimates for the solution, we rely on the representation formula. To control the nonlinearity, we apply the above vector fields to the region bounded by the backward light cone N − (q) emanating from the point q ∈ R 1+3 . To simplify the analysis, we can assume that q = (t 0 , x 0 ), x 0 = (r 0 = |x 0 |, 0, 0). This gives the weighted energy estimate which is sufficient to conclude the pointwise estimate for the solution in the interior region |x| ≤ t. The better decay estimates in the exterior region are based on the weighted energy estimate obtained by using the Lorentz rotation vector field x 1 ∂ t + t∂ 1 as multiplier. Preliminaries and notations Additional to the Cartesian coordinates (t, x) = (t, x 1 , x 2 , x 3 ) for the Minkowski spacetime R 1+3 , we will also use the null frame x) be the new Cartesian coordinates centered at q. More precisely, definẽ By translation invariance, note that Here ∇ is the spatial gradient while∇ is the associated one centered at q. For vector fields X, Y in R 1+3 , we use the geometric notation X, Y meaning the inner product of these two vector fields under the flat Minkowski metric m µν with non-vanishing components Raising and lowering indices are carried out with respect to this metric in the sequel. As the wave equation is time reversible, without loss of generality, we only consider the case in the The boundary contains the past null cone N − (q) emanating from q, that is, For r > 0, we use B q (r) to denote the spatial ball centered at q = (t 0 , x 0 ) with radius r. More precisely The boundary of B q (r) is the 2-sphere S q (r). Finally to avoid too many constants, we make a convention that A B means there exists a constant C, depending only on p and the small constant 0 < ǫ < 1 2 such that A ≤ CB. Weighted energy estimates through backward light cones Following the framework established early in [14] and developed in [13], [18], potential energy decay is of crucial importance to deduce the asymptotic long time behavior for the solution. We begin with the following weighted potential energy estimate through backward light cones. Then the solution φ of the nonlinear wave equation (1) verifies the following weighted energy estimates for some constant C depending only on p. Here dσ is the surface measure Proof. Recall the energy momentum tensor for the scalar field φ For any vector fields X, Y and any function χ, define the current Then for solution φ of equation (1) and any domain D in R 1+3 , we have the energy identity Here π X = 1 2 L X m is the deformation tensor for the vector field X. In the above energy identity, choose the vector fields X, Y and the function χ as follows: for u ≥ 0. Since ✷χ = 0, we therefore can compute that Since we are restricting to the range 1 < p ≤ 2 < 3, in view of the definition of the function f , we note that when u ≤ 0, In particular we always have which implies that the bulk integral is nonnegative. Now take the domain D to be J − (q) with boundary B (0,x0) (t 0 ) ∪ N − (q). 
In view of Stokes' formula, the left hand side of the energy identity (5) is reduced to integrals on the initial hypersurface B (0,x0) (t 0 ) and on the backward light cone N − (q). Since the bulk integral on the right hand side is nonnegative, we conclude that For the integral on the initial hypersurface B (0,x0) (t 0 ), we compute it under the coordinates (t, x) Here we used the relation L 1 u = 0, Su = u and the following computation Since 0 < f (u) ≤ 1 and u = −x 1 on B (0,x0) (t 0 ), by using Hardy's inequality to control |φ| 2 , we can bound the integral on the initial hypersurface by the initial weighted energy Next we compute the boundary integral on the backward light cone N − (q). The surface measure is of the form Here we recall that the null frame {L,L,ẽ 1 ,ẽ 2 } is centered at the point q. Since we have Sg + 3g = S(f (u)|φ| 2 ) + 3f (u)|φ| 2 = f (u)S|φ| 2 + (uf ′ (u) + 3f (u))|φ| 2 . We therefore can compute that Here the vector fields Z, Z are given by Now we write the vector field X as The vector field X 0 can be further written as For the second term, note that For the first term we claim that At any fixed point of the backward light cone N − (q), we prove the above claim by discussing three different cases: (i) If the vector field X 0 vanishes, that is X 0 = 0, then In particular, we have Here recall that q = (t 0 , r 0 , 0, 0) and x 2 = x 3 = 0 for this case. This implies that Hence the above claim holds. (ii) If X 0 = 0 and the vector fields X 0 ,L are linearly dependent, by comparing the coefficients of ∂ t = ∂t, we conclude that X 0 = λL with λ = u 2 + x 2 2 + x 2 3 > 0. Recall the definition for the vector fields X 0 , Z. We can show that which in particular implies that Z = 2uL. Note that We then can demonstrate that The above claim follows as L , X 0 = 0. (iii) The remaining case is when X 0 = 0 and the vector fields X 0 ,L are linearly independent. We write Here we may note that ∇ =∇. In particular we see thatL, X 0 are null vectors which are linearly independent. We thus can construct a null frame {X 0 ,L,ê 1 ,ê 2 } such that L ,ê j = X 0 ,ê j = 0, ê j ,ê j = 1, ê 1 ,ê 2 = 0 for j = 1, 2. Notice that The above computation in particular shows that Z ∈ span{ê 1 ,ê 2 }. We hence can write that On the other hand, we also have L , X Let Then the above computations show that We therefore can compute that This means that the above claim (9) always holds. Note that on the backward light cone N − (q), we also have t = − r. We thus can compute that For the case when u ≤ 0, by the definition of f (u), we have the lower bound as 1 < p ≤ 2 and | ω 1 | ≤ 1. For the case when u > 0, note that on N − (q) We therefore can bound that −f (u) L , X 0 + 2f (u) + 1 = 1 + 2(1 + u 2 ) p−3 2 Here again we used the assumption that p < 3. Hence in any case, we have shown that In other words, we have the lower bound for the integral on the backward light cone N − (q) which together with estimates (7), (8) implies that To conclude estimate (3) of the proposition, we make use of the reflection symmetry additional to the spherical symmetry. More precisely, by changing variable x 1 → −x 1 in the above argument, that is, setting u = t + x 1 and L 1 = ∂ t − ∂ 1 , L 1 = ∂ t + ∂ 1 accordingly (the point q is still fixed), we also have Alternative interpretation is that the above estimate (10) also holds at the point q − = (t 0 , −r 0 , 0, 0) r 0 < 0 (with positive sign of ω 1 ). 
Then by spherical symmetry, the associated estimate is valid at point q = (t 0 , r 0 , 0, 0), which is exactly the estimate (11). These two estimates lead to (3). To finish the proof for the Proposition, it remains to show estimate (4), which will be mainly used to control the solution in the exterior region. Inspired by the method in [16], we make use of the Lorentz rotation in this region. In the energy identity (5), choose the vector fields and function χ as follows Then π X = 0 and By using Stokes' formula, we have the weighted energy conservation adapted to these boundaries. For the integral on the initial hypersurface B (0,x0) (t 0 ), we have On the null hypersurface {x 1 = t} ∩ D, we have Thus the surface measure is of the form This in particular shows that Here keep in mind that we only consider the estimates in the future t ≥ 0. Finally for the integral on the backward light cone N − (q) ∩ D, similarly, we first can write the surface measure as Now we need to write the vector field X under the new null frame {L,L,ẽ 1 ,ẽ 2 } centered at the point q. Note that Then we have Here∇ / =∇ −ω∂r. Then we can compute the quadratic terms Since restricted to the region N − (q) ∩ D where x 1 ≥ t ≥ 0, the pure quadratic terms are nonnegative In particular on N − (q) ∩ D, we have This leads to the lower bound For such choice of vector fields, we have the weighted energy conservation In view of the above estimates (12), (13), (14), we conclude that Under the coordinates (t,x) centered at q = (t 0 , r 0 , 0, 0), we have (t, x) = (t 0 + t, r 0 + x 1 , x 2 , x 3 ) = (t 0 + t, r 0 + r ω 1 , r ω 2 , r ω 3 ). Note that t = − r on N − (q). We then can write The uniform bound (4) then follows from (15) by noting that Asymptotic pointwise behaviors for the solutions Following the framework developed in [18], we now use the weighted energy estimates through the backward light cone obtained in the previous section to control the nonlinearity. For this purpose, we need the following integration bound: for constants A > 0, B > 0, γ > 1, there holds with constant C depending only on γ. Now we prove the main Theorem 1.1. For any point q = (t 0 , x 0 ) in R 1+3 , recall the representation formula for linear wave equation The first two terms are linear evolution, relying only on the initial data. Standard Sobolev embedding leads the decay estimate We control the nonlinear term by using the weighted energy estimates derived in Proposition 3.1. Without lose of generality (or by spatial rotation), we can assume that x 0 = (r 0 , 0, 0) with r 0 = |x 0 |. Let Since 0 < p − 1 ≤ 1, it holds that Therefore we have the lower bound 1]. In view of Proposition 3.1, we derive that We first consider the case when 1 < p < 2. In the exterior region when t 0 ≤ r 0 , note that the backward light cone N − (q) entirely locates in the region {x 1 ≥ t}. Moreover for ω 1 ∈ [−1, 1]. Then by Proposition 3.1 as well as the standard energy estimate, we have Under the coordinates centered at q, the surface measure dσ can be written asr 2 drdω. 
By using the integration bound (16) with γ = p, we can estimate that In particular, the solution φ verifies the following decay estimate in the exterior region In the interior region when t 0 ≥ r 0 , we rely on the following improved weighted energy estimate In fact, from the above weighted energy estimate (18), we conclude that N − (q)∩{(1+ ω1)r≤2u0} (1 + ω 1 )ru p−2 On the other hand, for the point (t, x) ∈ N − (q) such that (1 + ω 1 )r ≥ 2u 0 , note that (t, x) = (t 0 + t, r 0 + x 1 , x 2 , x 3 ) = (t 0 − r, r 0 + r ω 1 , r ω 2 , r ω 3 ), 0 ≤ r = − t ≤ t 0 In particular we have This shows that Moreover note that r 0 + t 0ω1 = r 0 − t 0 + (1 + ω 1 )t 0 = 1 − u 0 + (1 + ω 1 )t 0 ≥ − 1 2 (1 + ω 1 )r + (1 + ω 1 )r = 1 2 (1 + ω 1 )r. The improved estimate (19) then follows from (18) In view of Remark 1.2, M is finite for all T > 0. Choose small constant ǫ such that 0 < ǫ < 1 2 . From the weighted energy estimate (18) as well as the integration bound (16), similarly we can show that
2021-04-26T01:15:50.440Z
2021-04-23T00:00:00.000
{ "year": 2021, "sha1": "aa6a0c5a4f9b622c45374e3aa78fb239d9cf22af", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "aa6a0c5a4f9b622c45374e3aa78fb239d9cf22af", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
235297008
pes2o/s2orc
v3-fos-license
Experimental Analysis in Hadoop MapReduce: A Closer Look at Fault Detection and Recovery Techniques Hadoop MapReduce reactively detects and recovers faults after they occur based on the static heartbeat detection and the re-execution from scratch techniques. However, these techniques lead to excessive response time penalties and inefficient resource consumption during detection and recovery. Existing fault-tolerance solutions intend to mitigate the limitations without considering critical conditions such as fail-slow faults, the impact of faults at various infrastructure levels and the relationship between the detection and recovery stages. This paper analyses the response time under two main conditions: fail-stop and fail-slow, when they manifest with node, service, and the task at runtime. In addition, we focus on the relationship between the time for detecting and recovering faults. The experimental analysis is conducted on a real Hadoop cluster comprising MapReduce, YARN and HDFS frameworks. Our analysis shows that the recovery of a single fault leads to an average of 67.6% response time penalty. Even though the detection and recovery times are well-turned, data locality and resource availability must also be considered to obtain the optimum tolerance time and the lowest penalties. Introduction MapReduce is the most popular data processing model [1], used for Big Data-related applications and services over the cloud. Hadoop is the state-of-the-art industry standard implementation of MapReduce that provides tremendous opportunities to handle dataintensive applications like IoT, web crawling, data mining and web indexing. Hadoop, in addition to with MapReduce, offers flexibility for developers to design their applications in any high-level programming languages. Due to the given flexibility, organisations like Yahoo, Google and Facebook utilise Hadoop MapReduce to successfully manage their data-intensive computations in large-scale computing environments. In addition, Hadoop MapReduce is also used for supporting the implementation of complex algorithms that require high computation power in a distrusted manner such as anomaly analysis, network intrusion detection, and calculating the network centrality [2][3][4]. However, in such environments, faults from a node, service or task are common, and they significantly impact the system performance if the fault-tolerance is not properly handled. Fault-tolerance is the property of a system that allows consistent operation during faults [5,6]. Hadoop handles fault-tolerance using the master-slave communication through heartbeat messages. If the master node does not receive a heartbeat message from a slave node within a configurable timeout value, the slave node will be labelled as failed. Simultaneously, the successful progress made by the failed node before it fails will be neglected, which incurs a huge waste of processing time and resource usage. Meanwhile, Hadoop must wait for the resource scheduler to assign a free slot to restart the faulty tasks intended to be executed on the failed node for recovery. This problem encourages researchers to optimise the time spent for detecting a fault and recovering from a fault towards achieving minimal performance penalties under faults. Thousands of hardware and software faults occurred during the first year of Hadoop operation [7]. Code bugs, corrupted data, bad sectors, and out-of-memory faults are also important factors to consider while making the system fault-tolerant. 
Another study reported that a single tiny hardware fault increases the response time of Hadoop by 39% [8]. Additionally, the non-uniformity or node heterogeneity is also common in today's environments [9], especially when Hadoop is deployed on a cluster of shared resources. Thus, users are willing to run heavy CPU job requests. The job workload needs virtualised tasks to be spread among the nodes. If other users are running different tasks on these nodes at some point in time, the cluster will experience extreme dynamic behaviour. Consequently, the threats mentioned above must be handled gracefully, otherwise there will be a serious performance violation. Motivation To date, experimental studies on the fault-tolerance issues of Hadoop MapReduce have been scarce, as seen in [10][11][12]. Meanwhile, these studies only used the old version of Hadoop that which was not incorporated with the YARN framework, making their experimental results not up to date. Furthermore, the studies have not provided insights on the infrastructure parameters that affect the tolerance conditions under faults and failures. To the best of our knowledge, this is the first research effort that provides an in-depth analysis of the impact of node, service, and task faults, including two main conditions: fail-stop and fail-slow on Hadoop MapReduce response time. This research also investigates the priority of fault penalties and the relationship between fault detection and recovery techniques under various fault circumstances to be considered when designing a fault-tolerance solution. The purpose of this analysis is to address the following research questions: 1. How does the current fault-tolerance in Hadoop MapReduce handle faults when they occur at various infrastructure levels and in different fault conditions? 2. Why do the existing fault detection and recovery techniques in Hadoop MapReduce lead to significant response time penalties? Our Contributions The main contributions of the paper are the following: • We conducted a series of experiments on a real-world Hadoop YARN cluster to examine the impact of fail-stop and fail-slow when they occur at the node, service, and task; • We simulated actual production faults: fail-stop and fail-slow to be injected at runtime and monitor their implications for the response time; • We highlight the limitations of the current fault-tolerance method, including its fault detection and recovery techniques. Paper Organisation Section 2 reviews the related literature, including experimental studies and proposed fault detection and recovery techniques in Hadoop MapReduce. Section 3 provides detailed information on Hadoop MapReduce and fault-tolerance. Section 4 shows the experiment setup and the evaluation parameters, followed by the analysis and discussion of the results in Section 5. Section 6 concludes the paper. Experimenting with Hadoop Fault-Tolerance Faghri et al. [11] proposed a model called failure scenario as a service (FSaaS) to be utilised across the cloud to examine the fault-tolerance of cloud applications. The study focused on Hadoop frameworks to test real-world faults implications on MapReduce applications. Kadirvel et al. [10] and Dinu and Eugene Ng [12] conducted experimental studies to examine the performance penalties under faults using an experimental testbed of Hadoop frameworks. 
These studies mainly focused on node and task faults, but they used the early implementation of Hadoop that was not integrated within the YARN framework, making their experimental results outdated. In the early version of Hadoop, the responsibility of data processing was carried by two components, namely JobTracker and TaskTracker. Subsequently, YARN was designed to allow flexibility and scalability by separating the resource management functions from the programming model [13]. Thus, YARN becomes the essential resource management framework for Hadoop implementation. Furthermore, a recent study on the implications of faults on MapReduce applications by simulating node and task faults proposed by Rahman et al. [14]. Although [14] is quite similar to our study, it has not provided perception when modifying the fault-tolerance parameters offered by Hadoop frameworks and has not focused on the correlations between the fault detection and recovery stages. Unlike the related works, our study uses the latest implementation of Hadoop to cope with its new architectural changes and provides detailed analysis on the common types of faults with their implications. We also examined the tolerance time when modifying the default fault-tolerance parameters to confirm the problem. Improving Fault Detection LATE [15] and SAMR [16] were first proposed and adopted in the current speculative execution strategy of Hadoop. These strategies work by comparing the estimated time to completion between tasks; then, the detected struggler tasks will be duplicated on another node. The estimated completion time in LATE was static; thus, SAMR provides a dynamic calculation of the progress rate for each task to achieve more accurate results. In another study, Memishi et al. [9] also proposed an approach that estimates the completion time of the workload and calculates the progress rate of each task to adjust the timeout value dynamically. Other studies by [17][18][19][20] provided predictive models based on machine learning and AI algorithms to estimate and set an optimal heartbeat timeout on the fly or to predict the failures before they occur. These approaches reduce the task fault occurrences and improve their overall performance with low latency in fault detection. Furthermore, works by Yildiz et al. [21] and Kadirvel et al. [22] aimed to decrease resource usage during failures by adopting a lightweight pre-emption technique and dynamic resource scaling to reduce the cost of the additional resources when removing failures. Improving Fault Recovery The standard solution to address the problem of re-computing the entire data block from the beginning during fault recovery is checkpointing. Checkpointing works by transferring the continuous processing output to external storage to be restored in the event of failures. In MapReduce, the output of MT tasks is stored locally on the same node, which will be inaccessible when the node encounters a fail-stop. Therefore, research works by [23][24][25] proposed checkpointing algorithms to efficiently transfer the output to external storage to avoid faulty task re-execution from scratch. Although the proposed checkpointing algorithms improved the recovery process when the failed tasks are rescheduled on a different node, the re-computation still occurs because the stored checkpoints are not accessible by active nodes in the cluster. Zhu et al. 
[24] designed a novel fault-tolerance strategy that uses a combination of distributed checkpointing and a proactive push mechanism for low latency recovery. When a failure happens, the recovered task continues computing based on the last checkpoint without the necessity to re-compute the entire data block. Liu et al. [25] made further improvements by reducing the task recovery delay and improving the processing efficiency. The proposed approach splits the intermediate data into small piece groups instead of merging them into one single file. The recovery task attempt can start with a specific amount of progress according to the valid checkpoint generated along with spills. Faults in Hadoop MapReduce In this work, we used Apache Hadoop 2.9.2 (https://hadoop.apache.org/docs/r2.9.2/ (accessed on 3 May 2021)), which is the stable implementation of MapReduce that has adopted as the latest generation of MapReduce with YARN framework [13]. Hadoop MapReduce is designed based on the centralised architecture in which one node stands as the master node and other nodes are workers. The basic components of Hadoop MapReduce implementation are HDFS, YARN and MapReduce, as presented in Figure 1. First, HDFS [26] splits the original dataset into data blocks and distributes them amongst its distributed file system that is managed by DN services. The size, replication and distribution of the blocks are handled by the master node of HDFS called NN. Then, YARN is also a centralised framework responsible for workload scheduling and resource management. YARN operates RM and Scheduler on the master node and NM, AM, and Containers on the worker nodes. RM along with the Scheduler maintains the resource scheduling and monitoring of worker nodes. RM launches NM on each worker node to offer a collection of physical compute resources such as (memory and CPU) in the form of containers for handling MapReduce applications. NM is also liable for sending heartbeat messages to RM to report the worker nodes' liveness. Furthermore, AM has the responsibility to negotiate appropriate resources for containers and monitor MapReduce tasks isolated in containers. On the other hand, MapReduce is a parallel data processing model that consists of the map phase, shuffle phase and reduce phase. map and reduce are the central progressing aspects of MapReduce that make the key/value pairs data structure. All the MTs are distributed and executed in parallel in YARN containers. During these processes, the intermediate output of every complete task will be generated in the local storage of each node. Then, the output files are shuffled and stored to be finally received by the corresponding RTs. The reduce phase stores the desired output on HDFS. In parallel systems, three categories of fault models are primarily considered, which are fail-arbitrary, fail-stop and fail-slow [27]: • Fail-arbitrary is also called byzantine, which impacts the system's behaviour by setting incorrect data values, returning a value of incorrect type, interrupting, or taking incorrect actions; • Fail-stop causes unresponsive behaviour that is continuous for a fixed period [28]; • Fail-slow is also known struggler, which makes the system accessible, but with poor performance [29]. Hadoop MapReduce tolerates fail-stop and the fail-slow while fail-arbitrary is beyond the original design of MapReduce [6]. Fail-stop and fail-slow typically occur in one or more components at different infrastructure levels. 
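Before turning to how these fault models are tolerated, the map/shuffle/reduce flow described above can be illustrated with a minimal WordCount written for Hadoop Streaming in Python. It is only a sketch of the programming model, not the benchmark implementation used in the experiments, and the command-line dispatch is a convenience of this illustration.

```python
#!/usr/bin/env python3
# Minimal WordCount for Hadoop Streaming: the mapper emits <word, 1> pairs,
# the shuffle phase groups and sorts them by key, and the reducer sums the
# counts. Intermediate mapper output lives on the worker's local disk, which
# is why a node fail-stop discards the progress of incomplete map tasks.
import sys

def run_mapper():
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def run_reducer():
    current_word, count = None, 0
    for line in sys.stdin:                      # keys arrive sorted after the shuffle
        word, value = line.strip().rsplit("\t", 1)
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

if __name__ == "__main__":
    run_mapper() if "map" in sys.argv[1:] else run_reducer()
```

Passed to the Hadoop Streaming jar through its -mapper and -reducer options, one map task is scheduled per HDFS block, and each map task's intermediate output stays on its node's local disk until the shuffle phase copies it to the reducers.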
Therefore, fault-tolerance techniques can be applied to tolerant faults and are fundamentally conducted based on two main stages: fault detection and fault recovery [30]. Fault Detection Fault detection is the first building block of a fault-tolerant framework that desires to detect faults as soon as they occur [31]. Hadoop MapReduce uses heartbeat monitoring based on the push model as a default approach. As illustrated in Figure 2a, all the monitored processes periodically send heartbeat messages at a specific timeout frequency to the monitor process. The absence of heartbeats from a given process beyond the specified timeout value indicates that the process is failed, as shown in Figure 2b. The heartbeat timeouts in Hadoop MapReduce are defined as configurable parameters [32], discussed in Section 4. Although push-based monitoring is a scalable approach as it occupies less bandwidth due to the small size of the heartbeat messages [31], it has major limitations. First, it only indicates possible fail-stop, whereas a process with fail-slow may still send heartbeats. Second, achieving an optimum timeout is challenging because the longer the timeout is, the longer the MTTD and the MTTR. In contrast, the shortest timeout is the shortest MTTD, however, it may decrease the performance and increase resource consumption (e.g., bandwidth and CPU usage) because of the elevated messages exchange between the active processes when the timeout value is small. Hence, configuring the timeout impacts the overall response time and availability. Fault Recovery Upon fault detection, recovery is the second major stage that aims to return a faulty component to its normal state without service interruptions. From the data storage perspective, HDFS replicates the original data blocks into multiple copies to be stored on various nodes. A replica number determines the replication in HDFS. For instance, storing 1TB of data requires 3TB of actual storage space when the number factor is set to 3. Although data replication is lacking in terms of storage efficiency [33], it maintains high fault-tolerance and data availability. Furthermore, a simple fault recovery technique is implemented for handling task faults, i.e., the re-execution or restart upon failures. This approach effectively recovers fail-stop tasks by rescheduling them to re-execute their attended functions from the very beginning. However, the re-execution technique doubles the workload execution time, especially when the task fails to process the final allocated data record. Then, an additional fault recovery approach is also employed, which is checkpointing [34]. This approach aims to efficiently recover from faults with minimal performance overhead. Hadoop separates MT and RT into two categories: complete and incomplete. In the event of failure, the complete tasks do not have to start over because their outputs are transmitted to the next task that could be hosted on another healthy node. However, when the incomplete tasks are in the latest progress rate and a fault occurs, Hadoop starts them over just like the incomplete tasks regardless of their progress, which will lead to excessive performance overhead. Furthermore, Hadoop recognises fail-slow as slow/struggling tasks by comparing their progress rate with other active tasks. When Hadoop realises them, speculative attempts will be made to process the same input data block of each slow task, hoping that these attempts are complete sooner than the slow ones. 
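The push-model heartbeat detection just described, and the timeout trade-off it creates, can be summarised in a few lines of code. The sketch below is a simplified stand-alone model of the monitor logic rather than Hadoop's actual implementation; the 600 s default and 10 s tuned values mirror the timeouts examined in Section 5.

```python
import time

class HeartbeatMonitor:
    """Toy push-model monitor: workers report in periodically; silence beyond the
    timeout is interpreted as fail-stop. A fail-slow worker keeps sending
    heartbeats, so this detector cannot see it; only speculative execution can."""

    def __init__(self, timeout_s=600.0):            # Hadoop's default expiry interval
        self.timeout_s = timeout_s
        self.last_seen = {}                          # worker id -> last heartbeat time

    def heartbeat(self, worker_id, now=None):
        self.last_seen[worker_id] = time.time() if now is None else now

    def failed_workers(self, now=None):
        now = time.time() if now is None else now
        return [w for w, t in self.last_seen.items() if now - t > self.timeout_s]

# Shorter timeouts shrink the detection time (MTTD) but raise message and CPU
# overhead, which is the trade-off measured later in Section 5.2.
monitor = HeartbeatMonitor(timeout_s=10.0)           # aggressive tuned value
monitor.heartbeat("node-3", now=0.0)
print(monitor.failed_workers(now=12.0))              # -> ['node-3']
```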
Experiment and Evaluation The evaluation experiments aimed to analyse the current fault detection and recovery techniques of Hadoop MapReduce, focusing on the performance violation in terms of response time when tolerating faults and failures. Based on the analysis, further study was conducted to examine the detailed impact of the infrastructure parameters, fault types and fault-tolerance conditions. Experiment Setup Our experimental testbed consisted of 9 servers that ran 1 master node and 8 slave nodes. The master node had an 8-core CPU, 80 GB of hard disk space and 16 GB of RAM. The other nodes each had a 4-core CPU, 40 GB of hard disk space, and 8 GB of RAM. These nodes were hosted on a Cavium ThunderX 88XX server with 48 CPUs, 128 GB RAM and 3 TB of hard disk space in total. All the nodes operated on CentOS7 and Hadoop with the HDFS, YARN and MapReduce frameworks installed. Figure 3 shows our experiment setup divided into four parts, as discussed in the following subsections. Workload Application and Dataset Since MapReduce is a general programming paradigm, a very diverse set of applications can be constructed using the basic map, shuffle/copy, and reduce phases. To ensure our analysis was applicable in the experiment environment, we used the WordCount benchmark application as it includes all the basic phases of the typical MapReduce programming paradigm [35]. WordCount computes a count value for every distinct word in the input dataset, where the counting function is carried out by the MTs [10]. WordCount is not only used as a benchmark application, as observed in [10][11][12][14], but it is also being used in production data processing environments. For instance, WordCount represents 70% of production jobs in the Facebook cluster [36]. Furthermore, most workloads (>98%) in production clusters had small to medium execution times (seconds to a few tens of minutes), as noticed in the Yahoo! and Facebook studies [35]. Therefore, we used 1 GB to 8 GB randomly generated datasets to represent small to medium workload sizes with execution times of a few minutes. Benchmark and Fault Injection Frameworks Our experiments were automated based on frameworks and techniques for performance monitoring and fault injection. First, we used HiBench (https://github.com/Intel-bigdata/HiBench (accessed on 10 April 2021)) as the standard benchmark framework for implementing MapReduce jobs and monitoring the performance of each job. Intel developed HiBench as an open source project, and it has been used for evaluating Big Data related systems, including Spark and Hadoop MapReduce. Second, we used manual Linux scripts and popular cloud-based fault-injection frameworks for fault injection, namely AnarchyApe (https://github.com/david78k/anarchyape (accessed on 10 April 2021)) and Stress-ng (https://github.com/ColinIanKing/stress-ng (accessed on 10 April 2021)). These frameworks provide various options for simulating faults in the Hadoop environment, as outlined in Table 1.

Table 1. Fault injection frameworks, fault types, and commands.
Framework/Technique | Fault type | Command
AnarchyApe | Node fail-stop | java -jar ape.jar -L -F
AnarchyApe | Service fail-stop | java -jar ape.jar -L -k <serviceName>
Manual script | Task fail-stop | sudo kill -9 <processID>
Stress-ng | Fail-slow | stress-ng --cpu 8 --io 8 --vm 1 --vm-bytes 16G --timeout 150s

In the experiments that involve various fault occurrence points, we group the fault points according to the primary MapReduce phases: faults injected before 20% of the map phase progress are labelled 'Initial', between 20% and 50% 'Early', between 50% and 70% 'Middle', and after 70% 'Late'.
In the case of the late point, the fault impacts the active reduce tasks only, while a fault at the middle point impacts both the map and reduce tasks. Fail-stop and fail-slow for the node, service and task are injected for a fixed period until Hadoop realises them and applies treatment actions for recovery. Moreover, since the replication factor is set to two, so that every data block is available on at least two active nodes, we injected a single fault of one of the types (node fail-stop, service fail-stop, task fail-stop or fail-slow) in each experimental run. Furthermore, the monitoring of the fault detection and recovery stages was performed via manual inspection of the Hadoop log files after each job completed. The manual exploration process was necessary for the flexibility of data collection from the history files generated by the nodes with their active services and tasks. Parsing the generated data indicates the timestamp when Hadoop detects the fault and the timestamp for initiating the recovery action. Since the faults are injected at known points, the fault detection and recovery times can be calculated accordingly. Fault Scenarios We classify the faults according to their occurrence into three scenarios: node, service, and task, as demonstrated in Figure 4. When the entire node fails due to a certain memory, CPU, or network error, all the running services and tasks hosted by the failed node are impacted, and the tolerance complexity increases. Likewise, when a service fails, all its associated tasks fail as well and are subject to re-execution. Moreover, if a single task fails, Hadoop intends to re-execute it on the same or another active node. With those three scenarios, we cover all the possible cloud-based fault and failure implications due to either fail-stop or fail-slow. Evaluation Parameters To measure the performance of Hadoop MapReduce under faults and failures, we use three main evaluation parameters, as explained in the following subsections. Moreover, we also use nine configurable parameters provided by the Hadoop frameworks that impact the job response time and the tolerance time in the event of failure, as briefly described in Table 2. Finally, we focus on the timeout value and the slow-task threshold in Section 5.2 to show the influence on fault recovery of modifying the default fault detection parameters. The default and tuned values of the fault detection parameters are outlined in Table 3.

Table 2. Infrastructure parameters and their descriptions.
Node number | The number of active nodes in the cluster.
Data size | The size of the input data to be processed by the active nodes.
Block size | The size of each chunk of data after distribution across the nodes.
Map task number | The total number of map tasks to be executed on the active nodes.
Reduce task number | The total number of reduce tasks that write the output to HDFS.
Replication number | The number of replicas per data block.
Timeout value | The time between each heartbeat message sent to check instance liveness.
Slow-task threshold | The number of standard deviations for a task's average progress.
Fault occurrence point | The actual timestamp of injecting a fault during the job lifetime.

Table 3. Default and tuned values of the fault detection parameters.
Parameter | Default | Tuned | Description
| | | Time in seconds before a task will be terminated if it neither reads an input, writes an output, nor updates its status string.
mapreduce.job.speculative.slowtaskthreshold | 1.0 | 0.1 | Number of standard deviations by which a task's average progress must be lower than the average of all running tasks.
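In practice, a single experimental run pairs one of the Table 1 commands with an occurrence point chosen as defined above. The sketch below is a hypothetical orchestration helper, not the scripts used in the experiments: the jar path, service name and process ID are placeholders, and the command lines are taken verbatim from Table 1.

```python
import subprocess
import time

# Hypothetical wrapper around the Table 1 commands; <serviceName>, <processID>
# and the ape.jar path are placeholders, not real values.
FAULT_COMMANDS = {
    "node_fail_stop":    ["java", "-jar", "ape.jar", "-L", "-F"],
    "service_fail_stop": ["java", "-jar", "ape.jar", "-L", "-k", "<serviceName>"],
    "task_fail_stop":    ["sudo", "kill", "-9", "<processID>"],
    "fail_slow":         ["stress-ng", "--cpu", "8", "--io", "8", "--vm", "1",
                          "--vm-bytes", "16G", "--timeout", "150s"],
}

def occurrence_label(map_progress):
    """Initial < 20%, Early 20-50%, Middle 50-70%, Late > 70% of map progress."""
    if map_progress < 0.2:
        return "Initial"
    if map_progress < 0.5:
        return "Early"
    if map_progress < 0.7:
        return "Middle"
    return "Late"

def inject(fault_type, map_progress):
    injected_at = time.time()
    subprocess.run(FAULT_COMMANDS[fault_type], check=False)   # fire the fault once
    return {"fault": fault_type,
            "point": occurrence_label(map_progress),
            "injected_at": injected_at}
```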
Response Time Response time is the time taken by the MapReduce job from submission J_s to completion J_c, and it can be calculated according to Equation (1): Response time = J_c − J_s (1). Fault Detection Time Fault detection time is the time taken by Hadoop MapReduce to detect a fault, from the fault occurrence F_o timestamp to the fault detection F_d timestamp, and it can be calculated according to Equation (2): Fault detection time = F_d − F_o (2). Fault Recovery Time Fault recovery time is the time taken by Hadoop MapReduce to recover from a fault, from the fault detection F_d timestamp to the fault recovery F_r timestamp, and it can be calculated according to Equation (3): Fault recovery time = F_r − F_d (3). Response Time The first series of experiments investigated the impact of faults and failures on the response time when modifying the infrastructure parameters: (a) increasing the dataset size, (b) setting different distributions of data blocks among the nodes, and (c) injecting faults at various occurrence points during the execution time. We used the default fault-tolerance values provided by the framework, which are a 600 s expiry interval timeout and a 1.0 slow-task threshold. Fail-stop faults are simulated by permanently killing an active node, service/daemon (e.g., NM, DN or AM), MT or RT. We also used AnarchyApe and Stress-ng to inject fail-slow conditions such as CPU hog, memory hog and network packet drops. Throughout our experiments, Hadoop recovered from all the injected faults and failures at various response times. Figure 5 shows that the service fail-stop led to the highest response time penalties across all the experiments. The absence of a service does not impact the parallelism nor the MT progress among the nodes. However, when the scheduler launched the RT, the entire job was suspended because no data were pushed from the impacted node, even though the hosted MTs were completed successfully. Furthermore, the scheduler did not trigger a speculative task for recovery because the progress of all the running MTs was identical at that moment. In addition, the RM waited until the next timeout cycle expired to confirm that the service had failed before restarting all the impacted tasks associated with the failed service on another healthy node from scratch. In brief, the response time of the impacted MTs was doubled, and the total response time was extended per each healthy task workload. Then, the entire node was forcibly terminated from the cluster, including its active services and tasks (NM, DN, MT, or RT) at runtime to evaluate node fail-stop. The loss of a node leads to a network disconnection where the input data provided by HDFS became unreachable. Thus, the progress rate of the running tasks stopped at the exact moment when the failure occurred. Thereafter, the scheduler detected the impacted tasks and restarted them on another healthy node. Therefore, the response time penalties for node fail-stop are slightly lower than for the service fail-stop, because the RM depends on the successive heartbeats (by default 600 s) to be received from the failed node to finish the job, and the scheduler had already re-executed the impacted tasks before the heartbeat timeout expired. Furthermore, task fail-stop and fail-slow incurred the lowest penalties in all the cases because the Hadoop scheduler speculatively re-executes the impacted tasks on another node once it detects that they are terminated or that their progress rates are lower than those of the other active tasks, without relying on the node expiry timeout. Figure 6 compares the response times when using various distributions of data blocks among the cluster.
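As a brief aside before the block-size results, Equations (1)-(3) map directly onto timestamps parsed from the job history logs. The helper below is an illustrative sketch; the example numbers are made up and the argument names are not Hadoop's actual log fields.

```python
def timing_metrics(job_submitted, job_completed,
                   fault_occurred=None, fault_detected=None, fault_recovered=None):
    """Equations (1)-(3); all arguments are timestamps in seconds."""
    metrics = {"response_time": job_completed - job_submitted}               # Eq. (1)
    if fault_occurred is not None and fault_detected is not None:
        metrics["fault_detection_time"] = fault_detected - fault_occurred   # Eq. (2)
    if fault_detected is not None and fault_recovered is not None:
        metrics["fault_recovery_time"] = fault_recovered - fault_detected   # Eq. (3)
    return metrics

# Illustrative numbers only: a 90 s fault-free job stretched to 151 s by a fault
# injected at t = 30 s, detected at t = 35 s and recovered at t = 91 s.
faulty = timing_metrics(0.0, 151.0, fault_occurred=30.0,
                        fault_detected=35.0, fault_recovered=91.0)
penalty_pct = (faulty["response_time"] - 90.0) / 90.0 * 100.0
print(faulty, f"penalty = {penalty_pct:.1f}%")
```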
By default, HDFS splits the dataset into multiple blocks, and each block has 128 MB of data to be distributed on the active DNs. For instance, 1 GB of data is divided into 8 blocks of 128 MB and multiplied by the replication factor. We used Equations (4) and (5) to obtain accurate results and ensure an equivalent load of tasks among the active nodes in the cluster, where each task corresponds to a data block: Number of blocks = (Data size / Block size) × Replication factor (4), where the task parallelism will be set as follows: Task parallelism = Number of blocks / Replication factor (5). The result reveals that the service and node fail-stop still have higher response time penalties than task fail-stop and fail-slow. The result also shows that task fail-stop and fail-slow incurred greater penalties when the block size increases. The penalty confirms that when the MT processing time increases due to a large block of data, Hadoop spends a longer time recovering from the failure due to the restart. As a result, one task failure extended the overall job response time by 29.47% and 31.66% for the worst case of 256 MB and 512 MB block sizes, respectively. Figure 7 compares the injected faults at various occurrence points at runtime. Furthermore, it shows the response time differences between the WordCount and Terasort applications when faults are injected at the initial, early, middle, and late points of the job lifetime, as recorded in Figure 7a,b. The tolerance time for the node and service fail-stop slightly decreased when they occurred late because both depend on the master node waiting for the next heartbeat cycle to detect them and start the recovery actions. Since the fault-free job for both WordCount and Terasort had only one timeout cycle, faults at the late point of the timeout period required a shorter time to complete the cycle. On the other hand, task fail-stop penalties increased by 18.17% and 34.27% for the late point compared to the initial one, because all the tasks waited for the re-execution of the faulty task from the beginning. However, the fail-slow response time penalties decreased at the late points by 25.4% and 23.94%, because the impacted node only suffered slow performance for a few seconds before the job completed, without triggering the speculative task. The results also show no significant difference in terms of fault-tolerance when using various MapReduce applications, because the current fault-tolerance method does not consider the programming logic of the applications to detect and recover the faults and failures.
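Returning to the data distribution underlying these runs, Equations (4) and (5) can be checked numerically. The snippet below assumes the reconstruction of Equation (4) given above (blocks counted including replicas) and prints the resulting map-task parallelism for the dataset and block sizes explored in the experiments.

```python
import math

def data_distribution(data_size_mb, block_size_mb=128, replication=2):
    """Eq. (4): block count including replicas; Eq. (5): one map task per logical block."""
    logical_blocks = math.ceil(data_size_mb / block_size_mb)
    number_of_blocks = logical_blocks * replication        # Eq. (4), as reconstructed
    task_parallelism = number_of_blocks // replication     # Eq. (5)
    return number_of_blocks, task_parallelism

# Dataset sizes (1-8 GB) and block sizes (128/256/512 MB) used in the experiments.
for size_gb in (1, 2, 4, 8):
    for block_mb in (128, 256, 512):
        blocks, tasks = data_distribution(size_gb * 1024, block_mb)
        print(f"{size_gb} GB @ {block_mb} MB blocks -> {blocks} blocks, {tasks} map tasks")
```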
Furthermore, reducing the timeout value for obtaining a rapid detection time is not an ideal option to achieve minimal response time penalties because a short timeout incurs a higher resource consumption [10], due to the elevated messages' exchange between the active processes in the cluster in a short frequency. The result reveals that the small timeout value incurred higher CPU usage than the default one. Likewise, network throughput, disk throughput, and the overall system load also acquired higher usage, as recorded in Figures 11-13, respectively. However, short timeouts could be an acceptable approach for specific use cases when the application must comply with response time constraints and sacrifice resource consumption. Finally, we intend to observe the minimal recovery time when setting aggressive slow-task threshold https://hadoop.apache.org/docs/r2.9.2/hadoop-mapreduce-client/ hadoop-mapreduce-client-core/mapred-default.xml (accessed on 10 April 2021) of 0.1 and timeout values https://hadoop.apache.org/docs/r2.9.2/hadoop-yarn/hadoop-yarncommon/yarn-default.xml (accessed on 10 April 2021) of 10 s and injecting the faults at various occurrence points of the job lifetime regardless of the resource consumption. Figure 14 shows that even though the detection time is optimised to approximately 5 s and the response times are reduced compared to previous scenarios, the average recovery time for Hadoop is 55.84 s for a workload of 90 s. A comparison between the best, median and extreme values in terms of fault recovery time, fault point and response time penalty is recorded in Table 4. Discussion According to the obtained results, a single faulty task extends the overall job response time by 30%, and even though the framework was well tuned for fault-tolerance, the faulty task incurs an 18.24% response time penalty after recovery. We also confirmed experimentally that this penalty was due to the slow fault detection and the waste of resources by fault recovery in the typical Hadoop fault-tolerance. We summarise our findings based on the experiment results as follows: • Service fail-stop has the highest response time penalties compared to the other faults; • When the size of a task's workload increases due to a large data block, the recovery time also increases because of the re-computation of the entire block; • Fault occurrence at the late point of the job lifetime incurs higher penalties for node and service fail-stop and lower penalties for task fail-stop and fail-slow; • The current fault-tolerance method does not consider the programming logic of the application to detect and recover faults and failures; • The response time decreases when setting small timeout values but with higher resource consumption; • The recovery of a single fault leads to an average of 67.6% response time penalty. We answer the research questions 1 and 2 stated in Section 1.1 in the following: 1. The current fault-tolerance method of Hadoop handles fail-stop and fail-slow from node and service based on the static heartbeat messages and re-execution techniques. If the entire node fails, all its active services and tasks fail as well regardless of their progress, and they will be subject to re-execution. The identification of fail-stop only happens when the node is entirely inactive within a static timeout interval. 
When a service fails, Hadoop is not able to realise the failure because the master node still receives heartbeat messages from the same node that runs the failed service, which leads to a substantial waiting time. Fail-slow has a direct impact on the task level only, and Hadoop detects task fail-slow and fail-stop by comparing the slow task progress with other healthy tasks. Slow tasks are also subject to restart from scratch on another node and based on the scheduler decision in terms of data locality and resource availability. 2. The significant response time penalties happen because of the static waiting time spent by Hadoop to detect a failure. This waiting time is critical because if one node fails out of thousands, the entire job response time is extended per one node fault detection and recovery times. Even though the detection time is optimised based on short monitoring intervals, the recovery time can still be long because it depends on the cluster behaviour in the recovery stage in terms of resource capacity and the locations of data blocks. In summary, the limitations of fault-tolerance capability are due to the centralised manner in which Hadoop applies fault detection and recovery. In this way, the timeout is hard to set dynamically to mitigate the fault detection problem because it repeatedly requires examining the unpredictable behaviour of all the active nodes in the cluster. On the other hand, we argue that although the generated data by MTs and RTs can be checkpointed and distributed for faster recovery, this method still leads to network and I/O delays using the current centralised architecture because of checkpoints transfers from one node to another [25]. Conclusions and Future Works Hadoop MapReduce has been widely used by business and academia sectors due to the scalability and the built-in fault-tolerance capability. However, Hadoop experiences numerous types of faults which must be handled carefully; otherwise, there will be a severe response time violation. In this study, we provide an in-depth analysis of the implications of node, service, and task faults, including the two main conditions: fail-stop and fail-slow on Hadoop MapReduce response time. We simulated actual faults that happen in any distributed environments based on popular fault injection frameworks. We also conducted a series of experiments on a real-world Hadoop cluster to examine the fault-tolerance problem and highlight the limitations, including the fault detection and recovery techniques. The validity of our experimental results is limited to small to medium workload execution times where the size of datasets ranges between 1 GB and 8 GB. Although most workloads in production clusters run MapReduce applications for seconds to a few tens of minutes execution times, more computation resources are necessary to handle the scale and to extend the validity of the experiments. In future works, we intend to design and implement a new fault-tolerance method based on an explicit node-to-node relationship to address the identified limitations. With this relationship, the timeout values can be set independently for every two pairs of nodes for faster fault detection, rather than being solely controlled by the master node. This way, the scalability and resource efficiency can be further improved because the timespan by which a node checks the status of its pair is not affected by the total number of nodes in the cluster. 
On the other hand, since the checkpointing approach has been proven to improve the efficiency of fault recovery in Hadoop, we intend to apply an in-memory distributed database to checkpoint and transfer the output of the active nodes after each heartbeat message is sent, in order to guarantee a consistent progress rate between two running instances and to prevent any waste of processing progress in the case of failures. The network and I/O delays would be decreased because the pairs will be pre-defined before executing the jobs, where data locality and resource availability can be considered.
2021-06-03T06:17:23.090Z
2021-05-31T00:00:00.000
{ "year": 2021, "sha1": "8bf33fb8ca2b28f7b6687d445dad602268b83b8f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/21/11/3799/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8a8951bc1c20c75c5e38394d1389450e4610fd39", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
244652848
pes2o/s2orc
v3-fos-license
Biodegradable Polymer DES (Ultimaster) vs. Magnesium Bioresorbable Scaffold (BRS Magmaris) in Diabetic Population with NSTE-ACS: A One-Year Clinical Outcome of Two Sirolimus-Eluting Stents Background Cardiovascular disease (CVD) with significant involvement of coronary artery disease (CAD) remains a major cause of death and disability among the diabetic population. Although percutaneous coronary intervention (PCI) continues to evolve, type 2 diabetes mellitus (T2DM) is a well-established marker of poor clinical prognosis after PCI, which is mainly attributed to the rapid progression of atherosclerosis requiring recurrent revascularizations. Hence, the use of bioresorbable materials could provide some solution to this problem. Material and Methods. The study was divided into two arms. For the first one, we qualified 169 patients with NSTE-ACS treated with PCI who received the drug-eluting stent (DES) coated with a biodegradable polymer Ultimaster (Terumo, Tokyo, Japan). The second arm was composed of 193 patients with ACS who underwent PCI with a magnesium bioresorbable scaffold Magmaris (Biotronik, Berlin, Germany). Both arms were divided into two subsequent groups: the T2DM (59 and 72) and the non-DM (110 and 121, respectively). The primary outcomes were cardiovascular death, myocardial infarction, and in-stent thrombosis. The main secondary outcomes included target lesion failure (TLF) and were recorded at a 1-year-follow-up. Results There were no significant differences between the diabetic and nondiabetic populations in the primary endpoints or main secondary endpoints (TLF, scaffold restenosis, death from any reason, and other cardiovascular events) either in the Ultimaster or Magmaris group. At a 1-year follow-up, the primary endpoint in the DM t.2 population was recorded in 2.7% Ultimaster vs. 5.1% Magmaris, respectively. Conclusion Both, Ultimaster and Magmaris revealed relative safety and efficiency at a one-year follow-up in the diabetic population in ACS settings. The observed rates of TLF were low, which combined with a lack of in-stent thrombosis suggests that both investigated devices might be an interesting therapeutic option for diabetics with ACS. Nevertheless, further large randomized clinical trials are needed to confirm fully our results. Introduction Among patients with acute coronary syndrome, diabetes mellitus in particular is a marker of poor clinical prognosis. Diabetics tend to have rapid progression of atherosclerosis, leading to an increased rate of multivessel disease, which commonly requires recurrent revascularization. According to the current European Society of Cardiology (ESC) guidelines on myocardial revascularization [1], coronary artery bypass grafting (CABG) is preferred over percutaneous coronary intervention in diabetic patients. This recommendation is strictly related to a higher rate of short-and long-term adverse cardiovascular outcomes demonstrated after PCI. However, due to the aging and numerous comorbidities, PCI often remains the only available revascularization option. Many factors are postulated to play a role in the pathophysiological background of unfavorable results. Chronic vascular inflammation, endothelial dysfunction with increased oxidative stress, and increased platelet activation are cardiovascular responses to hyperglycemia [2]. 
In addition, these chronic inflammatory responses are often exacerbated by the drug-eluting stent [3], which can lead to delayed endothelialization of the stent and, subsequently, an impaired vascular healing process. To overcome these limitations, bioresorbable materials have been widely used to develop new generations of scaffolds. These devices focus on suppressing the persistent inflammatory stimulus of the vascular wall by the stent surface. Recently, the new generation of sirolimus-eluting bioresorbable polymer DES Ultimaster (Terumo, Tokyo, Japan) has demonstrated a favorable 1-year safety and efficacy profile with concomitant rapid vascular wall healing and a high degree of strut coverage [4]. A thin, biodegradable gradient coating is a novel feature of the scaffold design. Thus, the bioresorbable DES technology refers not only to the polymer but also to the entire stent platform. Bioresorbable vascular scaffolds (BRS) constitute a novel vessel-supporting technology that enables vessel restoration without the permanent presence of foreign material in the vessel wall. The initial enthusiasm for the first generation of BRS, Absorb (Abbott, Chicago, United States), subsided following publication of the long-term results [5]. However, the second generation of magnesium BRS, Magmaris (Biotronik, Berlin, Germany), has recently entered the market and has shown promising short-term outcomes [6]. The aim of this study is to investigate the performance of a sirolimus-releasing bioresorbable polymer stent (Ultimaster) compared to a bioresorbable magnesium scaffold (Magmaris) and to evaluate the theoretical advantages of this new technology in a high-risk population of patients with diabetes mellitus in the setting of ACS. Materials and Methods Patients with acute coronary syndrome (NSTE-ACS, with exclusion of the STEMI cases) and a clinical indication for percutaneous coronary intervention (PCI) were enrolled in this retrospective, observational study. This study consisted of two major arms (Figure 1). The first arm included 193 patients who received a bioresorbable magnesium scaffold (Magmaris). The second arm was composed of 169 patients who were implanted with a scaffold covered with a biodegradable polymer (Ultimaster). The decision to implant the Magmaris BRS was based on operator discretion in accordance with the inclusion and exclusion criteria (Figure 1), which closely followed the manufacturer's recommendations [7]. Patients in the second arm were selected among all ACS-Ultimaster cases (541) from our cardiac departments between January 2015 and March 2020. The criteria for inclusion in the registry were the same as for the Magmaris group. In addition, scaffolds in the Ultimaster group, in parallel to the Magmaris group, had to meet the additional size-related criteria (diameter 3.0 mm or 3.5 mm). Coronary Stenting Procedure. All patients received a periprocedural medication regimen according to routine practice and in accordance with the current revascularization guidelines [8]. Initially, mandatory aggressive (balloon-artery ratio 1 : 1 according to angiographic assessment) and successful (without significant residual stenosis, i.e., no more than 20% of the vessel diameter) lesion preparation was performed. In the next step, after successful stent delivery and implantation, obligatory high-pressure (at least 15 atm) postdilation was performed with an NC balloon of a size at least equal to the size of the scaffold. Endpoints and Definitions.
The primary outcome included death from cardiac causes, myocardial infarction, and stent thrombosis. The main secondary outcome was target-lesion failure (TLF), composed of cardiac death, target vessel myocardial infarction (TV-MI), or target lesion revascularization (TLR). Also, other secondary outcomes (scaffold restenosis, death from any reason, and all revascularization procedures as well as myocardial infarction [9]) were recorded. Diabetes (type 1 or type 2) was defined as a previously diagnosed DM treated with pharmacologic or nonpharmacologic therapy, and new-onset DM was defined according to the American Diabetes Association [10]. Statistical Analysis. The analyses were performed using the R language [11]. Continuous variables were characterized with their mean and standard deviation, while frequencies were used for categorical variables. Patients were compared between groups using the nonparametric two-sample Mann-Whitney test for continuous variables and Fisher's exact test for categorical variables. Bonferroni correction was applied to adjust for multiple comparisons; p values ≤ 0.05 were accepted as the threshold for statistical significance. The characteristics of the PCI procedures performed in both study arms were heterogeneous. The only statistically significant differences were found in the Ultimaster arm and related to the radiation dose used during the PCI procedure, which was higher in the diabetic group (1396.56 ± 802.95 vs. 1162.52 ± 728.34, respectively, p = 0.029). All procedural characteristics are shown in Table 2. All clinical outcomes data are summarized in Tables 3 and 4. There were no statistically significant differences in clinical outcomes between the diabetic and control populations in either study arm (Magmaris and Ultimaster). We did not find any significant differences between the two diabetic study populations (Magmaris vs. Ultimaster). The only exception was a higher number of all-type revascularizations at the 30-day follow-up in the diabetic Ultimaster group compared to the diabetic Magmaris group (5 vs. 0, respectively, p = 0.016). Notably, the rates of the primary outcome were higher in the diabetic population in the Ultimaster group (3.4% vs. 0%, respectively, p = 0.121) at short follow-up (30 days). A similar trend was observed at long-term follow-up (1 year) for the principal secondary outcome in the Magmaris arm (4.1% vs. 0%, respectively, p = 0.051). Discussion Despite worldwide public health interventions taken to stop the global growth of diabetes prevalence, it is inexorably increasing. A disproportionate burden of the increase in type 2 diabetes affects the middle-to-high-income countries, particularly Western Europe and the Pacific Ocean island nations [12]. Cardiovascular disease (CVD), with coronary artery disease (CAD) as a major contributor, remains a leading cause of death and disability among the diabetic population. Although percutaneous coronary intervention (PCI) continues to evolve, the data from randomized trials demonstrate the superiority of coronary artery bypass grafting (CABG) over percutaneous coronary intervention in the diabetic population [13]. The reasons for this are multifactorial and not fully understood. Some data link this to a chronic local inflammatory response to the presence of a foreign body in the vessel wall, leading to neointimal hyperplasia and increased platelet activation and adhesion [14].
Therefore, the use of bioresorbable materials designed to limit adverse immune reactions is believed to be a new revolution in the field of coronary interventions. In current practice, two development paths for bioresorbable materials have been proposed. The first, also referred to as third-generation DES, involves abluminal coating of a thin metallic backbone with a bioresorbable polymer that degrades uniformly to release the antimitotic drug sirolimus. An example of this technology is the Ultimaster. The second concept pursues complete bioresorption of the scaffold. In this scenario, the BRS provides short-term performance equivalent to existing drug-eluting stents (DES); however, it avoids permanent caging of the vessel. After the widespread use of the first-generation Absorb (Abbott) was discontinued, the second generation of BRS (Magmaris), a sirolimus-eluting BRS with a metallic (magnesium) backbone and an active bioabsorbable BIOlute poly-L-lactide (PLLA) coating, entered the market and is currently available for commercial use. Data on the performance of the Ultimaster in the all-comers population are encouraging and demonstrated low late lumen loss, resulting in low rates of in-stent thrombosis, restenosis, and TLR [15][16][17]. Clinical outcomes in the long-term follow-up were comparable to those obtained with the Xience scaffolds [18]. The long-term safety of Ultimaster was confirmed by the low rate of late in-stent thrombosis. These favorable antithrombotic properties of the scaffold have been demonstrated in in vitro models [19] and are associated with accelerated tissue coverage and scaffold apposition [3,20], leading to improved vessel healing. Notably, the presence of a "class effect" for all bioresorbable polymer stents is very likely [21]. It is well known that diabetes mellitus and ongoing ACS are independent risk factors for poor clinical outcomes after PCI. Although there is a lack of convincing data for Ultimaster, the few studies conducted so far seem to confirm this paradigm [22,23], mainly due to an increased rate of TLF. However, the data from our studies do not confirm this observation. There were no statistical differences between the diabetic and control groups in primary clinical outcomes and TLF. Moreover, the rate of TLF in diabetics was significantly lower than in the study of Beneduce et al. [23] (3.3% vs. 8%). A similar trend is observed when we consider substudies in the ACS group [24]. This could be due to the fact that only patients implanted in accordance with the "4P technique" (patient selection, proper sizing, predilatation, and postdilatation strategy) were analyzed. It has been shown that the negative effects of diabetes on patients treated with BRS-ABSORB implantation can be minimized [25,26]. On the other hand, our favorable results may be related to the detailed lesion selection that we adopted from the inclusion criteria of the Magmaris Registry. We avoided high-risk patients with heavy calcification, STEMI patients with present thrombus, or a small size of the treated vessel. However, the latter factor has been proven to have no effect on clinical outcome after Ultimaster implantation [22,23]. Therefore, considering the results of the LEADERS trial [27], there seems to be no "class effect" of DES with an abluminal biodegradable coating. Data regarding the performance of Magmaris in the diabetic population are strictly limited [28], yet encouraging.
In contrast, the data on implantation of Magmaris in ACS conditions are more comprehensive and reliable. Several observational registries confirmed favorable short-term and long-term outcomes [6,29]. Furthermore, recently published data from the largest all-comers Magmaris registry [30], which included 2054 subjects, showed that the one-year TLF rate was 4.3%, with only one subacute in-stent thrombosis event. The results obtained are far more favorable than those of the first generation of BRS and comparable to the newest DES. There is only one study comparing Magmaris to a third-generation DES (Orsiro) [31]. The study population consisted mainly of patients with stable CAD. The authors observed unadjusted Magmaris and Orsiro TLF rates of 6.0% and 6.4%, with no significant difference between the groups. To the best of our knowledge, this is the first in-human study designed to compare the efficacy and safety of a fully bioresorbable magnesium scaffold (Magmaris) with a third-generation metallic DES with a bioresorbable polymer in a DM t.2 population in ACS settings. We found no differences between the two scaffolds in the diabetic subpopulation. The observed rates were comparable to [30] and even more favorable [23,31] than in the previously mentioned studies. Diabetes, especially when treated with insulin, is a well-established risk factor of scaffold thrombosis, particularly in the first generation of BRS (Absorb) [32]. Our data contradict such an association. We did not observe in-stent thrombosis with any of the tested devices. Notably, both of the scaffolds used release the same antimitotic drug (sirolimus); therefore, the results are not differentiated by this factor. 4.1. Limitations. This was a nonrandomized study with retrospective data collected over a relatively short observation period. Conclusions In our study, both the biodegradable polymer DES (Ultimaster) and the magnesium bioresorbable scaffold (Magmaris) revealed relative safety and efficacy at a one-year follow-up in the diabetic population in ACS settings. The observed rates of TLF were low, which, combined with a lack of in-stent thrombosis, suggests that both investigated devices might be an interesting therapeutic option for diabetics with ACS. Nevertheless, further large randomized clinical trials are needed in order to fully confirm our results. Data Availability. Data not included in the manuscript are available on request from the corresponding author due to local law and privacy restrictions.
2021-11-26T16:40:01.183Z
2021-11-23T00:00:00.000
{ "year": 2021, "sha1": "7eec78d516c29730afcdd5e8a955995ab1df6783", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jdr/2021/8636050.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3b63cece80d308b8df895f4c975e1ab9243ca449", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
202007251
pes2o/s2orc
v3-fos-license
Geometric purity, kinematic scaling and dynamic optimality in drawing movements beyond ellipses Drawing movements have been shown to comply with a power law constraining local curvature and instantaneous speed. In particular, ellipses have been extensively studied, enjoying a 2/3 exponent. While the origin of such non-trivial relationship remains debated, it has been proposed to be an outcome of the least action principle whereby mechanical work is minimized along 2/3 power law trajectories. Here we demonstrate that such claim is flawed. We then study a wider range of curves beyond ellipses that can have 2/3 power law scaling. We show that all such geometries are quasi-pure with the same spectral frequency. We then numerically estimate that their dynamics produce minimum jerk. Finally, using variational calculus and simulations, we discover that equi-affine displacement is invariant across different kinematics, power law or otherwise. In sum, we deepen and clarify the relationship between geometric purity, kinematic scaling and dynamic optimality for trajectories beyond ellipses. It is enticing to realize that we still do not fully understand why we move our pen on a piece of paper the way we do. Highlights Several curves beyond ellipses have power-law kinematics with 2/3 exponent. The curvature spectrum of each of such geometries is quasi-pure at frequency 2. Their dynamics are shown to comply with minimum of jerk. But the 2/3 power law is not an outcome of minimizing mechanical work. Yet, equi-affine displacement is invariant upon different kinematics. Graphical abstract “We must represent any change, any movement, as absolutely indivisible.” — Henri Bergson INTRODUCTION In 1609 Kepler published in the book Astronomia Nova (Kepler, 1609) his celebrated First Law of planetary motion: Mars moves along an elliptical trajectory with the sun at one of its foci. This left behind Ptolomaic and Copernican models; not circles, but ellipses. In the same book we find Kepler's Second Law, which specifies an invariant (which was later understood as conservation of angular momentum): the area of between the Sun, Mars and any previous point of Mars is constant along the motion of the planet. In sum: equal areas in equal times. This was generalized to all other planets. We move faster when close to the sun (fastest when nearest, at the perihelion), and slower when far away (slowest when furthest, at the aphelion). Ten years later, Kepler published in Harmonices Mundi (Kepler, 1619) his Third Law of motion: the semi-major axis A is related to the period P of a planet by means of the following relation: A=k· P 2/3 (the parameter k is a constant, which can be renormalized by using the Earth's semi-major axis and number of years as units). It was Kepler's big achievement to establish such a lawful regularity despite the fact that nobody understood why planets would care to follow it. No one could derive Kepler's celebrated two-thids power law until Newton's Law of Universal Gravitation (Newton, 1687) was proposed nearly seventy years later. From geometric properties and kinematic laws one would then strive to "climb up" in order to establish dynamic laws that frame the former. Physics is full of celebrated examples of this sort, where constraints of motion are first discovered and later explained by other more general empirical laws, which in turn are then shown to derive from even more fundamental theoretical principles. 
Such is a hallmark of understanding phenomena, from the motion of planets across the solar system to the movement of Picasso's brush along a canvas (in preparation). However, when it comes to biology, the zeitgeist is mechanistic. The explanatory work seems to be done when a molecule or a circuit is shown to be "necessary and sufficient" for the appearance (or disappearance) of the phenomenon under investigation (Gomez-Marin, 2017). In the midst of the reductionistic zeitgeist obsessed with efficient mechanical causes in the form of counterfactual reasoning within purely interventionist approaches (Krakauer et al. 2017), it is conceptually refreshing (and empirically exciting) to realize that relationships like Kepler's laws can be understood as formal causes. Science is actually the art of interpreting correlations, be it in terms of efficient causation or, in arguably more mature sciences, by actually giving up causation (or, rather, by framing it within the notion of invariance) (Bailly & Longo, 2011). Isn't it ironic that, while the stone falls for symmetry reasons, the insect is thought to fly for neural reasons?

Scaling laws are a particularly relevant sub-class of deep relations, ranging from physics to psychophysics, ecology or language. They all point to unifying principles in complex systems (West, 2011). Note that not all power laws are statistical; some relate one degree of freedom to another (like the speed-curvature power law studied here), rather than expressing the functional dependency of a probability distribution. In curved hand movements, the instantaneous angular speed also scales with local curvature via a power law, whose exponent is 2/3 (Lacquaniti et al., 1983). This relationship, simple as it seems, is not a trivial mathematical fact nor is it given by physics (Zago, Matic et al., 2017). Cortical computations have been proposed as the controlling mechanism (Schwartz, 1994). However, it is still unclear how the neuro-musculo-skeletal system may actually do so. Moreover, the trajectories of insects also comply with the speed-curvature power law (Zago, Matic et al., 2016), suggesting that a much simpler explanation (perhaps via simple central pattern generators) may be at work (at least in the humble fruit fly). Nearly forty years later, the origins of the law remain debated.

Most theoretical but also phenomenological studies of the power law have concentrated on ellipses, also decomposing scribbling into monotonic segments (Lacquaniti et al., 1983). On a few occasions shapes other than ellipses have been studied, such as the cloverleaf, lemniscate or limaçon. Invoking optimality as a normative explanation, one can derive the power law by powerful mathematical frameworks. Requiring that the trajectory produces minimum jerk (jerk is the time derivative of acceleration, or equivalently the second derivative of velocity, or the third derivative of position) naturally implies such speed-curvature constraints (Flash & Hogan, 1985). Also recently, a spectrum of power laws with different exponents has been empirically demonstrated upon drawing a whole range of "pure frequency" curves beyond ellipses, and shown to theoretically derive from minimization of jerk (Huh & Sejnowski, 2015). Notably, it has also been proposed that the 2/3 power law is an outcome of the least action principle, namely, that imposing mechanical work to be minimal along the trajectory naturally produces the power law with its well-known 2/3 exponent (Lebedev et al., 2001).
Here we correct such mistaken statement, which allows us to deepen into the relationship between geometrical purity, kinematic scaling and dynamic optimality beyond elliptical trajectories. Planets do not move at constant speed along their (quasi) elliptical trajectories around the Sun. Nor does your finger when tracing an ellipse on a tablet (Matic & Gomez-Marin, 2019). And yet, while planets do not follow the speedcurvature power law (Zago, Matic et al., 2016), nor do finger movements derive from the physical principle of least action, as we hope to show in what follows. Mathematical calculations Basic notation and equations. Let us use the following notation: A is the angular speed (A=V/R), where V is the instantaneous speed (the module of the velocity vector) and R is the local radius of curvature. Curvature is then defined as C=1/R. The speedcurvature power law then reads: A=k· C BETA , where k is a constant and BETA is the power law exponent. By definition, the power law can also be written as V=k· C BETA-1 . Space-time dilation for arbitrary power-law generation. Since V=ds/dt (where dt is the time differential, and ds the arc-length differential), then one can obtain an explicit relation for how time dilates with space at every infinitesimal increment along the trajectory: ds/dt=k· C BETA-1 . Since C can be numerically calculated as dtheta/ds, we arrive at the final equation that allows to transform any trajectory into a power law kinematics that respects the original geometry: dt = k -1 ds BETA dtheta 1-BETA . Numerical simulations. Trajectory generation. Trajectories were generated by numerically integrating (with a dt=0.001s) the x and y positions and their derivatives for curves expressed and governed via the following differential equation: d 3 x/dt 3 +x· q(t)=0 for x(t), and also for y(t): d 3 y/dt 3 +y· q(t)=0. Note that the initial conditions can be different but both x and y are governed by the same equation with the same time-dependent coefficient q(t). The four different curves explored in this manuscript were generated by choosing the function q(t) as follows: q(t)=1 corresponds to the ellipse, q(t)=t for the spiral-like ellipse, q(t)=|sin(t)| for "wobbly" curve, and q(t)=|3sin(4t)| for the flower-like curve (see Figure 1A). Curvature spectrum. Curvature frequency spectrum analysis is based on (Huh and Sejnowski, 2015), expanded to approximate also the frequency spectrum of nonmonotonic angle profiles. We calculate the first derivative of the unwrapped local angle profile, then take its absolute value, and find the anti-derivative. This anti-derivative profile is re-sampled to a uniform step in the local angle coordinate. We take the log of the profile, de-trend it, and apply the Fourier transform. Generating power-law kinematics of any exponent from arbitrary geometries. Selecting an arbitrary power law between angular velocity and curvature is solved by recalculating the time period between each point of the discretized curve, so that the angular velocity fits a desired relationship with curvature (the power-law relation; A=k· C BETA ), or equivalently, that tangential velocity fits equation (V=k· C BETA-1 ). First, we sample or construct the trajectory using constant step in time (dt). We calculate the arclength ds i , and curvature C i at each point (x i , y i ) of the trajectory. Next we construct a new time-difference vector, where each dt i follows equation dt=(ds/k)· C 1-BETA . We then construct a time vector T as a cumulative sum of all dt i. . 
Next, using a cubic spline, we fit the existing (x, y) points to times T. Then we sample the splined trajectory again with constant dt, obtaining a new vector of points (x_i, y_i) as a discrete approximation of an arbitrary power law trajectory. Modifying the parameter k then sets the total time of traversing the trajectory without changing the power law relationship.

Behavioral experiments

Ellipse trace. Using data from a previous study (Matic & Gomez-Marin, 2019), one of the authors traced an ellipse on an android tablet device in a fast and fluid manner. The data was recorded at 85 Hz. Raw data was smoothed with a low-pass, 2nd-order Butterworth filter with a cutoff at 8 Hz.

Homer's trace. A member of the lab traced a contour of Homer Simpson's head shown on a Wacom Cintiq interactive graphics monitor, using an electronic pen. The tracing movement was done without lifting the pen from the screen. Several practice traces preceded the trace used in this paper. The data was recorded at 150 Hz. Raw data was smoothed with a low-pass, 2nd-order Butterworth filter with a cutoff at 8 Hz.

Figure 1 caption (partial): ..., which generates those trajectories via the third-order differential equation d³u/dt³ + u·q(t) = 0, satisfied for both x(t) and y(t). (C) Time course of instantaneous angular speed A and local curvature C for each curve. (D) The numerically estimated log-log plot of angular speed versus curvature reveals, as predicted, an exact power law relationship with exponent 2/3 for each of the curves. Thus an ellipse is not the only geometry that naturally admits kinematic scaling.

A wide range of curves beyond ellipses naturally lead to a 2/3 power law

The speed-curvature power law is the relation A = k·C^β, where A is the instantaneous angular speed (defined as A = V/R), C is the local curvature (defined as C = 1/R), V is the absolute instantaneous speed of movement and R the local radius of curvature of the trajectory. The term k is a proportionality factor that remains more or less constant empirically (and a precise constant theoretically), and β is the power law exponent. This relation is non-trivial since aspects of geometry (like curvature, which concerns only space) and aspects of kinematics (like speed, which concerns time) need not constrain one another in general (as in the motion of a pendulum). Using the definition of the radius of curvature R as a function of the time derivatives of the trajectory (we are always referring to movement in two dimensions here), it is not difficult to show that, if the power law holds, the term k = D^(1/3), where D = |v_x a_y − v_y a_x|. Obviously, v_i and a_i are the velocity and acceleration components in the two orthogonal directions x and y. The 2/3 power law is often written as A = D^(1/3) C^(2/3), with D constant. Now, if k is constant (namely, if the 2/3 power law holds), then the term |v_x a_y − v_y a_x| should also be constant. This implies that its time derivative should be zero, and thus one gets: a_x a_y + v_x j_y − v_y j_x − a_y a_x = 0 (where j, known as jerk, is the time derivative of acceleration, just as a is the time derivative of velocity). Two terms cancel out, and thus we finally get that any trajectory that complies with the 2/3 power law must satisfy the following differential equation: j_x/v_x = j_y/v_y.
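To make this derivation concrete, here is a minimal numerical sketch (plain NumPy; the ellipse parameters, step size and tolerances are illustrative choices, not taken from the paper). For a classic ellipse traced at constant phase rate it checks that D = |v_x a_y − v_y a_x| stays constant, that v_x j_y = v_y j_x (the constraint just derived), and that the fitted speed-curvature exponent is 2/3.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2 * np.pi, dt)
a, b = 2.0, 1.0
x, y = a * np.cos(t), b * np.sin(t)            # ellipse traversed at constant phase rate

# Derivatives by finite differences (np.gradient uses central differences in the interior).
vx, vy = np.gradient(x, dt), np.gradient(y, dt)
ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
jx, jy = np.gradient(ax, dt), np.gradient(ay, dt)

D = np.abs(vx * ay - vy * ax)                  # cross-product magnitude (equals a*b here)
speed = np.hypot(vx, vy)
curvature = D / speed**3                       # C = |v x a| / V^3
angular_speed = speed * curvature              # A = V / R = V * C

interior = slice(10, -10)                      # avoid one-sided differences at the ends
print("D constant:", np.allclose(D[interior], a * b, rtol=1e-3))
print("vx*jy == vy*jx:", np.allclose((vx * jy)[interior], (vy * jx)[interior], atol=1e-2))
beta = np.polyfit(np.log(curvature[interior]), np.log(angular_speed[interior]), 1)[0]
print("fitted exponent:", round(beta, 3))      # ~ 0.667
```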
This geometric-kinematic constraint is very interesting because it dictates that both x(t) and y(t) must behave so that the ratio of their third and first time derivatives is equal, which, without loss of generality, can be expressed as j/v = q(t), where q(t) is any arbitrary temporal function. In other words, one can choose any q(t) at will and, by means of the equation d³u/dt³ + u·q(t) = 0 (where u(t) here denotes both x(t) and y(t), although initial conditions can be different), generate geometric curves whose kinematics follow the 2/3 power law.

Following this mathematical reasoning (Lebedev et al., 2001), we generated four different trajectories (Figure 1A). Selection of the q(t) function determines the shape of the trajectory: for the ellipse it is constant, q = 1; for the elliptic spiral, q = t; for the wobbly ellipse, q = |sin t|; and for the elliptic flower we chose q = |3 sin(t/4)| (Figure 1B). Not only are curvature and angular speed of these trajectories strongly correlated (Figure 1C), they in fact follow the 2/3 speed-curvature power law exactly (Figure 1D). Lebedev and colleagues explicitly listed the ellipse, hyperbola and parabola as the trajectories resulting from constant q(t), noting the relationship between constant q and the resulting geometry (q = 0 for the parabola, q < 0 for the ellipse, and q > 0 for the hyperbola). In Supplementary Figure 1 we analyzed those three curves in the same way as the four curves in Figure 1. Curvature and angular speed are visibly constrained, following a power law with an exponent of exactly 2/3.

To the best of our knowledge, nobody has analyzed the family of curves that are generated with a non-constant q(t), some of whose exemplars we show in Figure 1. In what follows, we will concentrate on these four curves to gain further insights into geometric purity, kinematic scaling, and dynamic optimality. We will also correct an important physics error in (Lebedev et al., 2001).

Two-thirds power law trajectories have quasi-pure geometrical spectra

Let us now concentrate on the geometry of the curves presented in the previous section. It has been recently shown that speed-curvature power laws (of different exponents, not just 2/3) are achieved for so-called "pure frequency curves" (Huh and Sejnowski, 2015). Actually (as we will see in the last section of the Results), trajectories with "mixed curvature frequencies" cannot comply with the kinematic scaling of the power law unless they give up dynamic optimality. So, how does one estimate the "geometric purity" of a curve?

Parametrization of local curvature can be done in many ways. To estimate power laws one usually parametrizes curvature in time, namely C(t), so that it can be compared, moment to moment, with speed V(t), which is naturally defined as a function of time. Time parametrization of log curvature is convenient in the regression analysis with log angular velocity, also parametrized in time (as in Figure 1D). In Figure 2B we show curvature parametrized in time for the four study-case trajectories shown in Figure 2A. However, cumulative arc length (s) is the natural parametrization for curvature, since curvature is by construction a purely geometrical quantity, and so the time parametrization natural to kinematic quantities (such as speed) injects a temporal bias that geometry should be indifferent to. In Figure 2C we re-parametrize curvature in terms of arc length, C(s). Note the subtle change in the functions with respect to the time parametrizations in Figure 2B.
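As a complement to the time versus arc-length discussion, the following short sketch (NumPy only; the example trajectory and grid sizes are illustrative assumptions) obtains C(t) from a sampled trajectory and then re-expresses it as C(s) by interpolating onto a uniform arc-length grid.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2 * np.pi, dt)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)          # stand-in trajectory (an ellipse)

vx, vy = np.gradient(x, dt), np.gradient(y, dt)
ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
speed = np.hypot(vx, vy)
C_t = np.abs(vx * ay - vy * ax) / speed**3       # curvature as a function of time, C(t)

s = np.concatenate(([0.0], np.cumsum(speed[:-1] * dt)))   # cumulative arc length s(t)
s_uniform = np.linspace(0.0, s[-1], len(s))
C_s = np.interp(s_uniform, s, C_t)               # curvature as a function of arc length, C(s)
# C_s versus s_uniform is the purely geometric profile, free of any temporal bias.
```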
There is a third way to parametrize curvature: rather than time or length, one can use angle. Based on (Huh, 2015) we can parametrize curvature in the local angle coordinate, as shown in Figure 2D. This representation has many advantages in understanding essential properties of the curves, as well as revealing the connection between geometry and kinematics in power law constraints (Huh and Sejnowski, 2015). In particular, once any curve is parametrized in the angle, one can detect a shared feature in the four curves studied here: note how the profiles of Figure 2D have naturally rescaled with respect to those in Figure 2C and Figure 2B, so that now, every 2π, curvature undergoes exactly two complete oscillations. If these were temporal functions, a Fourier transform would immediately reveal a dominant frequency there. Following (Huh and Sejnowski, 2015) we apply the Fourier transform to the log of the curvature profile, once parametrized in the local angle coordinate ( Figure 2E). The resulting amplitude profile shows curvature frequency spectrum in angle space. The frequency of a curve is the number of curvature oscillations per unit of local angle (full oscillation is 2π radians), and the local angle is defined as the angular direction of the velocity vector. Despite their very different appearance in X-Y space (Figure 2A), all four curves share a main peak at exactly ν=2 (which corresponds to Huh's pure ellipse; see below) as well as some ripples. The quasi-pure spectrum of these geometries, and specially that of the ellipse shown in on the left side of Figure 2A, makes one wonder why they are not exactly pure (namely, with a single peak at ν=2, without any ripples). To better understand this, we went back to Huh's pure frequency curve with ν=2 (Huh, 2015), which is visually very similar to the classical ellipse, (x/a) 2 + (y/b) 2 =1. Both curves are shown in Supplementary Figure 2A, together with ellipses empirically traced on a tablet. Huh's ellipse (on the left) has a single strong peak at ν=2 by design, and no peaks at other frequencies, meaning that its log-curvature profile in angle space is a pure sinusoid. The classic ellipse, constructed with two orthogonal sine waves with 90° phase difference, has a few harmonics at frequencies multiples of ν=2 (4, 6, 8, etc), but it is still quasi-pure. The empirically recorded ellipse trace, similarly, shows some harmonics and also peaks at other frequencies (Supplementary Figure 2B). It is also decently pure. In sum, this precise geometrical analysis of the spectrum of curvature is both informative as to whether we shall expect a power law and of what exponent, but also a necessary condition to know that we are dealing with a pure frequency curve in the first place, which is very important when trying to determine whether the speed-curvature power law holds empirically. Finally, to gain even further insight into what these spectra are reflecting, we morphed an ellipse into a circumference by reducing the eccentricity of the former (Supplementary Figure 2C). The amplitude of the peak at ν=2 is progressively reduced, as well as all the other harmonic frequencies, until the circumference does not show peaks at any frequency (as it should, since its curvature is constant). We can proceed now with kinematic and dynamic considerations on these curves. The power law does not imply that mechanical work is constant nor minimal For 2/3 power law trajectories, we have seen that D is constant. 
It turns out that D is actually the magnitude of the cross product between the velocity and acceleration vectors. And so, for the trajectories displayed in Figure 3A, that cross product should be constant too (Figure 3B). Such a magnitude can be represented as the surface of the parallelogram closed by those vectors (Figure 3D). So far, so good. Remember that one can rewrite the 2/3 power law (A = k·C^(2/3)) as A = D^(1/3) C^(2/3), and then simply as V = D^(1/3) R^(1/3), so that V^3/R = D. With similar mathematical manipulations, Lebedev et al. (2001) arrive at this same equation and, rewriting D = V(V^2/R), realize that the term in parentheses is the magnitude of the centripetal acceleration (A_n), and so D = V·A_n. The fatal error comes in their equation (5), when they say that "[t]his product is known in physics as mechanical power", which they call P. The essential mistake that invalidates the main claim of their paper is that D = P. If that were the case, then a 2/3 power law would constrain movement along the trajectory to have constant mechanical power (because we have seen that D is constant). As we will unpack further below, the authors are naturally thrilled to discover that, mathematically, the time integral of D is minimal when D happens to be constant. In other words, the "optimal" way to move is to move so that D is constant, aka the 2/3 power law. They are thrilled (as we would be) because, if the physics were true, the mathematics would prove that drawing movements (which fulfill the 2/3 speed-curvature power law) are "an outcome of the Principle of Least Action" (which is precisely the title of their paper). But if D is not the mechanical power, then the claim evaporates.

Why isn't D = P, then? Lebedev and colleagues equate mechanical power with D; namely, the authors take the product of centripetal acceleration with the speed to be proportional to the physical force that would push a particle moving along such 2/3 power law trajectories. Mechanical power is the amount of mechanical work per unit of time. Mechanical work is the amount of energy transferred by a force. It is calculated as the integral of the force vector along the trajectory vector. Force is proportional to acceleration, and the trajectory vector can be rewritten as velocity times dt. Thus, in practice, mechanical work is proportional to the product of velocity and acceleration. But (and here comes the subtle mistake), it is the dot product (also called scalar product) of the vectors, rather than the simple product of their magnitudes. Put plainly, the dot product of two orthogonal vectors is zero, no matter how large they are, while the product of their magnitudes is large. In sum, mechanical work is calculated via the scalar product, rather than the cross product (which gives us D), of velocity and acceleration. And thus, as shown in Figure 3C, work is far from constant along the trajectory, as opposed to D (Figure 3B).

Trajectories with constant D minimize the time integral of D

The mathematical derivation that, by means of a variational analysis, shows that the time integral of D is minimal when D is constant (Lebedev et al., 2001) is still valid and somewhat insightful. Agnostic about the existence of a meaningful physical or mathematical interpretation of the term D, we next sought to numerically demonstrate that constant D prescribes the most "economical" way to move amongst the infinitely many ways to do so. To our knowledge, such a minimization has not been done numerically.
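Before turning to that numerical minimization, the point about mechanical work is itself easy to check. The sketch below (NumPy; the ellipse and step size are illustrative assumptions) computes, along a 2/3 power-law ellipse, both the cross-product magnitude D = |v × a| and the scalar product v · a, which per unit mass is the rate at which mechanical work is done: D is essentially constant while the work rate oscillates strongly.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2 * np.pi, dt)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)           # ellipse traced at constant phase rate
vx, vy = np.gradient(x, dt), np.gradient(y, dt)
ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)

D = np.abs(vx * ay - vy * ax)                     # cross product: constant along the path
work_rate = vx * ax + vy * ay                     # scalar product: d(V^2/2)/dt, oscillates

inner = slice(10, -10)                            # drop finite-difference edge artifacts
print("spread of D, (max-min)/mean    :", round(np.ptp(D[inner]) / D.mean(), 4))
print("spread of v.a, (max-min)/|max| :", round(np.ptp(work_rate[inner]) / np.abs(work_rate).max(), 4))
```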
Because we seek a numerical demonstration that trajectories complying with the 2/3 power law constrain their geometry (curvature) and kinematics (speed) so as to minimize the integral of D, we can only aspire to show local, rather than global, minima. To that end, we devised a way to systematically generate a range of different kinematics that traverse the exact same geometry in the exact same total duration (see below). We take a segment (of a trajectory that complies with the 2/3 speed-curvature power law) with start and end points A and B (Figure 4A), and whose total time duration is T (vertical black line in the plots of Figure 4B). We then maintain the geometry but rescale the kinematics so that the same segment is traversed in the same amount of time, but now with kinematics that still yield a power law, with an exponent different from 2/3 (say, with hypo-natural exponent 1/3, and hyper-natural exponent equal to 1). We then numerically calculate D as we integrate it in time all the way to t = T (Figure 4B) for these three different (power-law) kinematics. Exponent 2/3 always yields the minimum value at the end of the segment (Figure 4C).

In case it is not already clear by now, let us emphasize that a given geometry can in principle be traversed with any kinematics. Let us now have a brief interlude to explain and illustrate how to kinematically re-scale a given geometry, with any kinematics, to a power-law kinematics with our exponent of choice. For each β in the required range, we start with a generated path as an ordered list of points. Given the path, β and k, we calculate the time periods between each point of the path so that they satisfy the formula dt = (ds/k)·C^(1-β), derived as explained in the Methods. The resulting trajectory does not necessarily have the desired average speed. The whole trajectory is then re-calculated with the same points and β, but with a different k parameter, until the average speed is within tolerance of the desired average speed.

We illustrate the effects of this rescaling algorithm for the classical example of an ellipse (Supplementary Figure 4A). Generating an elliptical trajectory with orthogonal sine waves yields a β = 2/3 power law (Supplementary Figure 4D, red line). We can rescale this trajectory into β = 1 (blue line) and β = 1/3 (green line) power laws. The Y coordinate over time (Supplementary Figure 4B) of the β = 2/3 trajectory is shown in red, and is a pure sinusoid. A trajectory with exponent β = 1 is more 'round' in the Y coordinate, and a trajectory with β = 1/3 is more 'triangular'. The arc length of the β = 2/3 trajectory changes over time: an object moving on such a trajectory slows down in more curved parts, and speeds up in straighter parts of the path (Supplementary Figure 4C). Because human participants produce speed profiles similar to these, the β = 2/3 trajectory is called 'natural'. In comparison, a trajectory with β = 1 is called 'hyper-natural' and accumulates arc length at a constant rate, because it has constant tangential speed. Trajectories with β = 1/3 are called 'hypo-natural', as they slow down more and speed up more than β = 2/3 'natural' trajectories. Similar relationships are visible in the plot of speed over cumulative arc length (Supplementary Figure 4E), illustrating the transformations made by the rescaling algorithm. When shortening the time period for crossing the same distance, we get higher speed, as illustrated by the peaks of the β = 1/3 (green) plot. For longer times, speed goes down, as in the valleys of the β = 1/3 plot.
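The rescaling procedure just described, and the minimization it serves, can be sketched as follows (NumPy/SciPy; the geometry, point counts, total duration and the use of a whole ellipse rather than a segment are illustrative assumptions, not the authors' settings). The same path is re-timed to power-law kinematics with β = 1/3, 2/3 and 1 at fixed total duration, and the time integral of D should come out smallest for β = 2/3.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def retime(x, y, beta, total_time):
    """Re-time a planar path so speed follows V = k*C^(beta-1), with fixed total duration."""
    ds = np.hypot(np.diff(x), np.diff(y))
    xp, yp = np.gradient(x), np.gradient(y)          # derivatives w.r.t. the (uniform) parameter
    xpp, ypp = np.gradient(xp), np.gradient(yp)
    curv = np.abs(xp * ypp - yp * xpp) / np.hypot(xp, yp)**3
    dt_rel = ds * (0.5 * (curv[:-1] + curv[1:]))**(1.0 - beta)   # dt ~ ds * C^(1-beta)
    t_knots = np.concatenate(([0.0], np.cumsum(dt_rel)))
    t_knots *= total_time / t_knots[-1]                          # fixing T absorbs the constant k
    t_uniform = np.linspace(0.0, total_time, len(x))
    return CubicSpline(t_knots, x)(t_uniform), CubicSpline(t_knots, y)(t_uniform), t_uniform

def integral_of_D(x, y, t):
    dt = t[1] - t[0]
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    return np.trapz(np.abs(vx * ay - vy * ax), t)

# Same elliptical geometry, same total time, three different power-law kinematics.
s = np.linspace(0.0, 2 * np.pi, 20000)
x0, y0 = 2.0 * np.cos(s), 1.0 * np.sin(s)
for beta in (1/3, 2/3, 1.0):
    xb, yb, tb = retime(x0, y0, beta, total_time=2.0)
    print(f"beta = {beta:.2f}  integral of D = {integral_of_D(xb, yb, tb):.3f}")
# The smallest value is expected for beta = 2/3, the 'natural' exponent.
```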
Angular speed over time (Supplementary Figure 4F) shows some inverted relationships. Here, the hyper-natural trajectory has the highest peaks and lowest valleys of the three trajectories.

Equi-affine displacement is invariant under different kinematics

We have seen how the time integral of the term D is minimal when D is constant. However, we have also seen that D does not correspond to mechanical power, and so the minimization of D does not imply that the power law is the outcome of the least action principle of physics. Is there any other quantity whose integral, when minimized, lends itself to a meaningful interpretation? The cube root of D has been identified as the so-called equi-affine speed (Pollick and Sapiro, 1997; Flash and Handzel, 2007): V_EA = D^(1/3). Of course V_EA is constant when D is constant. But note that the fact that the integral of D is minimal when D is constant does not mean that the integral of V_EA is minimal when V_EA is constant. What happens when we minimize the integral of V_EA? We can answer such a question mathematically by means of variational calculus. When deriving the Euler-Lagrange equation for minimizing the equi-affine speed as the Lagrangian, we found that the terms in that equation cancel out completely. Aren't there any particular solutions that make the functional an extremum? We then answered the question numerically. In Figure 5 we followed the same procedure as in Figure 4. We took our four main curves, chose a segment of duration T (Figure 5A), and numerically estimated the time integral of V_EA upon movement along the same geometry with three different kinematics (Figure 5B), that is, power laws with different exponents. To our surprise, and as opposed to the integral of D in Figure 4, the integral of V_EA yields the same value at the end of the segment (t = T) regardless of the kinematics. There seems to be no minimum. Is it thus an invariant? Note that, generally, the time integral of speed along a path is precisely its total displacement. In fact, the integral of affine velocity is known as the equi-affine arc length or the special affine arc length (Izumiya and Sano, 1998). Our analytical and numerical results thus indicate that affine arc length is invariant under different power law kinematics.

Next we asked whether such invariance remains when the kinematics does not follow a power law (Supplementary Figure 5A) and/or when the geometry between A and B is different (Supplementary Figure 5B). In a similar analysis to Figure 5, we show that the affine arc length is the same for power law and non-power-law kinematics. An elliptical trajectory segment (Figure 5.1A) is traversed with power law kinematics (with exponent β = 2/3, in black) and non-power-law kinematics (with the ellipse's sine angle θ increasing as time squared, in red). The integral of equi-affine speed is the same for both (Supplementary Figure 5A). Let us note an interesting pathological case: in movement from A to B in a straight line at constant speed, there is no acceleration vector, and so V_EA is zero and so is its integral. To explore the effect of different ways to get from one point to another in space (geometry), not just in time (kinematics), we also tested three pseudo-random paths from points A to B. Using the procedure described in Supplementary Figure 4, we imposed power law kinematics (black lines), while colored lines had non-power-law kinematics, as shown in the middle plot.
The integral of equi-affine speed is the same for both kinematics, but not across different geometries (Supplementary Figure 5B). Equi-affine speed is not invariant under arbitrary transformations. It has been shown that equi-affine length is invariant under affine transformations, using the signed volume of the parallelepiped created by the vectors of the first, second and third derivatives with respect to time of the curve r, raised to the power 1/6 (Pollick et al., 2009). Equi-affine speed has also been shown to be piecewise constant along movement segments and so, rather than the Euclidean one, it becomes a natural geometric description of hand trajectories (Flash and Handzel, 2007; Polyakov et al., 2009; Bennequin et al., 2009; Meirovitch et al., 2016). However, we have not been able to find an explicit claim that the time integral of equi-affine speed is a kinematic invariant, as our findings suggest.

Pure curves with two-thirds power law scaling minimize jerk

Having found a way to numerically estimate whether certain functionals (such as D and V_EA, respectively, in Figure 4 and Figure 5) are (locally) minimal for a fixed geometry upon different kinematics, we now apply the method to confirm the mathematical derivations of Huh and Sejnowski (2015): the minimum of total jerk is achieved for pure frequency curves when their kinematics follow a speed-curvature power law (where the exponent value β depends on the frequency ν of the curve). It is well known by now that ellipses (which we have shown to have ν near 2), when traversed with power law kinematics of β = 2/3 (which is how they are traced by humans), have minimum jerk (Wann et al., 1988; Viviani and Flash, 1995; Huh and Sejnowski, 2015). But, to our knowledge, nobody has estimated this numerically. Nor has this claim been shown for the large family of curves that, despite not being an ellipse, have ν = 2 (like those in Figure 6A). So, to end, we show that quasi-pure curves with a peak at ν = 2 produce minimum jerk when kinematically traversed at β = 2/3 (Figure 6B). This confirms and expands the findings in Huh and Sejnowski (2015), while also providing a numerical method to estimate and predict the intricate relationship between geometric purity, kinematic scaling and dynamic optimality for any drawn movement beyond (the ultra-studied) ellipses.

Figure 7 caption (partial): Curve spectrum reveals that the drawing is not a pure-frequency geometry, having several peaks at low frequencies and a decreasing tail. (F) One can transform the original kinematics so that both X and Y follow the same third-order differential equation with the shared time-dependent parameter q(t), which can be calculated as the ratio between the third and first derivatives of position, shown here. (G) Homer's face now must follow a 2/3 power law. (H) The term D, as compared to the original drawing (grey), is quasi-constant along the trajectory (blue).

The subtle relationship between curve purity, scaling and optimality

Let us end with a fun and illustrative example to recapitulate. The contour of Homer Simpson's face (Figure 7A) was drawn on an interactive graphics tablet, tracing the original image shown on the screen, in a single movement, without lifting the pen from the screen. The raw data are smoothed before analysis (see Methods). The X and Y coordinates over time (Figure 7B) show constant movement with no breaks. Curvature and velocity look fairly correlated (Figure 7C), but do not exactly conform to a power law (Figure 7D).
In fact, the log-log plot seems to indicate multiple segments with different power law exponents, perhaps related to different segments of the drawing. The geometry spectrum analysis shows multiple peaks at low frequencies, and we can see that this is not a pure frequency curve (Figure 7E). Figure 7F-H show a transformed trajectory: the geometry is the same (still Homer's face), but the empirical kinematics of drawing are transformed to strictly follow the 2/3 power law (Figure 7G). From the same trajectory we can extract the function q(t), and we can see it is near-identical in the X and Y dimensions (Figure 7F) which, as we saw at the beginning of this article, is a hallmark of a 2/3 power law trajectory. Beyond ellipses, or the other three main curves systematically analyzed in this study, there are infinitely many ways to have a 2/3 power law trajectory (Homer's face included). As such, the magnitude of the cross product (the term D) is now nearly constant, unlike the empirical one, which is more variable (Figure 7H). Unfortunately for Homer, since its geometry is not pure (Figure 7E), its tracing cannot enjoy both kinematic scaling (Figure 7G) and dynamic optimality at the same time. In other words, drawing movements cannot be minimum jerk if speed scales with curvature, unless their curvature spectrum is pure. However, in general, one could have minimum jerk using some unknown minimization procedure for any non-pure geometry, with non-power-law kinematics.

DISCUSSION

Nearly forty years later (Lacquaniti et al., 1983), the two-thirds speed-curvature power law of human movement is still puzzling. Moreover, evidence for the same scaling law with different exponents has recently been discovered empirically (Huh and Sejnowski, 2015), and demonstrated to be derivable from normative principles that require the jerk (the time derivative of acceleration) accumulated along the trajectory to be minimal. Along those lines, it had been claimed that the 2/3 speed-curvature power law of movement is a consequence of minimizing mechanical power (Lebedev et al., 2001). If so, the power law could be seen as both an outcome of minimum jerk (Flash and Hogan, 1985) and "an outcome of least action" (Lebedev et al., 2001). That would be interesting if true. However, here we have demonstrated that this is not the case. We have discovered a flaw in the derivation of Lebedev and colleagues, which is due to a basic physics error in interpreting mechanical work. The connection the authors draw between the term D and mechanical work is nonexistent. This invalidates the main claim of their paper. Drawing movements complying with the two-thirds power law do not minimize mechanical work.

The origins of the speed-curvature power law remain debated to date. Therefore, we deemed it necessary that the so-far (and to the best of our knowledge) undetected mistake in Lebedev et al. (2001), and its corresponding unexpected link to equi-affine speed, in the line of the work by Flash and colleagues, does not continue unreported and uncorrected. However, two pieces of their mathematical treatment are still valuable when expanded upon. They provide more insights to further understand the 2/3 speed-curvature power law observed in humans while drawing. First, their mathematical treatment demonstrates that drawing movements complying with the 2/3 power law must obey a third-order linear ordinary differential equation that only depends on a time-dependent coefficient q(t).
The authors explored only the family of x(t) and y(t) solutions when q(t) is constant, which comprises ellipses, hyperbolas and parabolas. Here we exploited some other non-trivial curves of the myriad of geometries that can stem from time dependencies in q(t). Second, the variational principle they put forth demonstrates that the integral of D is minimal when D is constant. We tested it numerically, and reformulated it to show that equi-affine displacement is invariant upon different power-law and non-power-law kinematics. We also demonstrated that β = 2/3 power laws with ν = 2 beyond ellipses have minimum jerk.

Our work has limitations. First, note that except for the hand-drawn ellipse and Homer's face, the rest of our analysis is based on mathematics and numerically simulated curves. Further studies should extend our findings to experiments inspired by them. Second, all our numerical estimates regarding minimization demonstrate local, but not global, minima. Although it is unlikely, we cannot numerically rule out that a very particular kinematics beats the 2/3 power law when it comes to optimizing the functionals of D, V_EA or jerk. Third, a very interesting aspect remains fairly unexplored: while the equation that generates all possible 2/3 power law movement trajectories is a third-order differential equation, in physics virtually all equations of motion do not go beyond second order. Fourth, while in most human traces and drawings one constantly switches from clockwise to counter-clockwise movement, all curves explored in this manuscript (except Homer's) were monotonic in curvature. Fifth, it is still a challenge to robustly estimate jerk from empirically measured trajectories because of sensitivity to filtering and to noise in the derivatives.

To end, let us emphasize that the discovery of non-trivial constraints in nature (like a power law) is always as puzzling as it is rewarding. Kepler established one for the motion of planets. Lacquaniti and colleagues found another one for the movement of hands. Both are characterized by an exponent whose value is exactly 2/3. In 1981 Yoshio Koide uncovered a yet-unexplained relation between the masses of three elementary particles (the three charged leptons: the electron, the muon, and the tau): their sum divided by the square of the sum of their square roots is approximately equal to 2/3. If that wasn't enough, the same relation holds for the masses of the three heaviest quarks. It is tempting to dismiss such phenomenological discoveries as mere numerology or, at best, as simple descriptions awaiting the hard-core science to take place. This is even more so in biology, where "mechanism" is king while "phenomenon" often enjoys negative connotations. Be that as it may, phenomena borrow from mechanisms the reasons by which they are explained, and restore them to mechanisms in the form of scientific questions which they have stamped with their own meaning. Or, put plainly, the depth that the answer provides very much depends on the quality of the question asked in the first place. Good science is, in a sense, like good journalism.

***
Inhibition of noradrenergic signalling in rodent orbitofrontal cortex impairs the updating of goal-directed actions

In a constantly changing environment, organisms must track the current relationship between actions and their specific consequences and use this information to guide decision-making. Such goal-directed behaviour relies on circuits involving cortical and subcortical structures. Notably, a functional heterogeneity exists within the medial prefrontal, insular, and orbitofrontal cortices (OFC) in rodents. The role of the latter in goal-directed behaviour has been debated, but recent data indicate that the ventral and lateral subregions of the OFC are needed to integrate changes in the relationships between actions and their outcomes. Neuromodulatory agents are also crucial components of prefrontal functions, and behavioural flexibility might depend upon the noradrenergic modulation of the prefrontal cortex. Therefore, we assessed whether noradrenergic innervation of the OFC plays a role in updating action-outcome relationships in male rats. We used an identity-based reversal task and found that depletion or chemogenetic silencing of noradrenergic inputs within the OFC rendered rats unable to associate new outcomes with previously acquired actions. Silencing of noradrenergic inputs in the prelimbic cortex or depletion of dopaminergic inputs in the OFC did not reproduce this deficit. Together, our results suggest that noradrenergic projections to the OFC are required to update goal-directed actions.

Introduction

Animals use their knowledge of an environment to engage in behaviours that meet their basic needs and desires. In a dynamic environment, an animal must also be able to update its understanding of the setting, particularly when the outcomes or consequences of its actions change. Numerous studies indicate that goal-directed behaviours are supported by the prefrontal cortex (PFC), and current research suggests a parcellation of functions within prefrontal regions in rodents (for reviews, see O'Doherty et al., 2017, Coutureau and ...). Specifically, the prelimbic region, or Area 32 (A32) in Paxinos and Watson, 2014, of the medial PFC is needed to initially acquire goal-directed actions and learn the relationship between distinct actions and their outcomes.

Results

Initial goal-directed learning does not require NA signalling in the OFC

We first assessed if the initial acquisition and expression of goal-directed actions requires NA signalling in the OFC using the behavioural design shown in Figure 1A. To deplete NA projections, rats were given bilateral injections of the toxin anti-DβH SAP targeting the ventral and lateral regions of the OFC (vlOFC). Animals from the control (CTL) group were injected with inactive IgG SAP. Rats in the Pre group were injected with either anti-DβH SAP (group Pre-SAP n=15) or inactive IgG SAP (group Pre-CTL n=14) before the initial instrumental training, during which responding on one action (A1) earned O1 (sucrose or grain pellets, counterbalanced), and responding on the other (A2) earned O2 (grain or sucrose pellets, counterbalanced). Rats in group Post were similarly trained, but were injected with either anti-DβH SAP (group Post-SAP n=15) or inactive IgG toxin (group Post-CTL n=13) following this initial stage. As shown in Figure 1D, all rats acquired the lever pressing response, with their rate of lever pressing increasing across days (F(1,53) = 508.30, p<0.001).
No differences were found between Pre and Post groups (F(1,53) = 0.96, p=0.33) or between CTL and SAP groups (F(1,53) = 0.287, p=0.59), and there were no significant interactions (all F(1,53) values < 3.7, p-values > 0.05). All groups also showed sensitivity to the change in outcome value during the outcome devaluation test, indicating that rats learned the A-O associations and the current value of the outcomes, that is, goal-directed behaviour was intact (Figure 1E). Indeed, we found a significant effect of devaluation (Ndev vs. Dev; F(1,53) = 79.62, p<0.001), but no effect of group (Pre vs. Post; F(1,53) = 0.21, p=0.65) or treatment (CTL vs. SAP; F(1,53) = 0.15, p=0.70), and no significant interactions between these factors (all F(1,53) values < 1.78, p-values > 0.18). In addition, when given concurrent access to both outcomes, all groups consumed more of the non-devalued outcome, thereby demonstrating the efficacy of the satiety-induced outcome devaluation procedure (Figure 1-figure supplement 3A). Thus, NA depletion in PFC regions does not appear to affect the initial learning or expression of goal-directed actions.

Figure 1-source data 1: quantification of dopamine beta hydroxylase (DβH)-positive fibres (ventral orbitofrontal cortex [VO], lateral orbitofrontal cortex [LO], and Area 32 [A32]) and behavioural data for rats injected with saporin and inactive saporin.

Rats in the CTL groups showed a significant effect of devaluation (F(1,53) = 7.35, p<0.01), while rats in the depleted groups did not (F(1,53) = 0.15, p=0.70). Importantly, all groups rejected the devalued food during the consumption test (Figure 1-figure supplement 3B). These results show that depletion of NA innervation to the OFC and other prefrontal regions renders rats unable to associate new outcomes with acquired actions. Importantly, this deficit was present in rats that received NA depletion before (group Pre) and after (group Post) learning the initial A-O associations.

All rats first underwent instrumental training and an initial outcome-specific devaluation test (see Figure 3-figure supplement 1 and Figure 3-figure supplement 2 for behavioural results from this initial phase). Following surgery and recovery, the animals were then trained with reversed instrumental associations. We found that rats with 6-OHDA infusions (n=12) responded more during reversal training than the CTL (n=8; F(1,18) = 14.74, p<0.01) (Figure 3B). There was also an overall increase in response rate (F(1,18) = 44.69, p<0.001) across training and a significant group × day interaction (F(1,18) = 7.25, p<0.05), indicating that this increase in responding was greater for the 6-OHDA group than for CTL.
We show that full CA depletion (DA + NA) in the OFC, A32, and M2 impairs performance in an outcome-identity reversal task, while depletion restricted to DA innervation leaves performance intact.

Selective expression of inhibitory DREADDs in LC:vlOFC or LC:A32 NA projections

In the previous two approaches, we used pharmacologic ablation to target NA signalling in the OFC. However, injection of anti-DβH SAP or 6-OHDA in the OFC also caused a significant reduction of fibres in other regions of the PFC, most likely because of NA fibres crossing the OFC before entering these other prefrontal regions (Chandler and Waterhouse, 2012). Most notably, in both of the previous experiments, we observed depletion of NA fibres in A32 (or prelimbic cortex), a region that has been heavily implicated in goal-directed behaviour (Corbit and Balleine, 2003; Killcross and Coutureau, 2003; Tran-Tu-Yen et al., 2009). As such, while we were able to demonstrate that NA, but not DA, signalling in the PFC is necessary to adapt to changes in outcome identity, we could not conclusively attribute our behavioural effects to NA depletion in the OFC and not in A32. Moreover, given that our approach involved permanent lesions of NA fibres, we were unable to ascertain if NA signalling was required to encode and/or recall the new A-O associations. Therefore, to address the regional and temporal specificity of the behavioural effect, we generated CAV2-PRS-hM4Di-mCherry, a canine adenoviral vector containing PRS, an NA-specific promoter, driving an HA-tagged hM4Di, an inhibitory DREADD, and an mCherry expression cassette (Figure 4A). The validation of the construct is described in the Methods section and the corresponding results are shown in Figure 4-figure supplement 1. CAV2 vectors are readily taken up at the presynapse and trafficked via retrograde transport to the soma of projecting neurons. CAV2-PRS-hM4Di-mCherry was infused in either the OFC or A32 to target either LC:vlOFC or LC:A32 NA projections. Figure 4C shows retrograde transport of the vector and mCherry in NA cells of the LC following injection of CAV2-PRS-hM4Di-mCherry in the OFC. Figure 4D shows the colocalization of mCherry and HA immunoreactivity in the LC, indicative of selective expression of HA-hM4Di. As expected, while mCherry staining is present at injection sites, reflecting local cortico-cortical connections that are not NA dependent (Figure 4B), HA-immunoreactive cell bodies were found exclusively in the LC (Figure 4D). These data are consistent with NA-specific expression of the HA-tagged hM4Di due to PRS, and nonselective expression of mCherry, which is under the control of hSyn.

Silencing of LC:vlOFC, but not LC:A32, projections impairs adaptation to changes in the A-O association

Rats received bilateral injections of CAV2-PRS-hM4Di-mCherry in either the OFC (n=25) or the A32 (n=17). Rats were then trained and tested as shown in Figure 5A. Following the initial instrumental training and outcome devaluation testing ...

Figure 3-figure supplement 1: Initial training and test for rats to be injected with 6-OHDA (n=12) and control rats (CTL; n=8); source data: behavioural data from the initial phase for rats injected with 6-hydroxydopamine (6-OHDA). Figure 3-figure supplement 2: Initial training and test for rats to be injected with 6-OHDA+Desi (n=9) and control rats (CTL; n=8).
Importantly, consumption tests performed immediately after the reversal tests revealed that all groups consumed more of the non-devalued outcome, indicating that the satiety-induced devaluation was effective and that DCZ injections did not disrupt the rats' ability to distinguish between devalued and non-devalued rewards (Figure 5-figure supplement 1D and Figure 5-figure supplement 2D for LC:vlOFC and LC:A32, respectively). Together, these results indicate that LC NA projections to the OFC, but not to A32, are required to both encode and recall changes in the identity of the expected outcome.

Discussion

Goal-directed actions are the expression of learned associations between an action and the outcome it produces. These associations are, however, flexible, being amenable to updating when the identity of the outcome changes. Our data demonstrate that NA inputs to the OFC might be an essential component of this updating process. This conclusion is based on a body of complementary evidence. First, we demonstrated that animals with a loss of NA inputs in the OFC can initially learn and express A-O contingencies, but are impaired when the identity of the outcome has been modified. Importantly, such deficits were also observed when NA depletion occurred immediately before the encoding of the new A-O contingencies. We then showed that this impairment was selective to NA inputs, because combined depletions of DA and NA, but not of DA alone, induced a profound deficit in outcome reversal. Finally, we investigated the temporal and anatomical specificity of this effect using an NA-specific retrograde virus carrying inhibitory DREADDs to selectively target either LC:vlOFC or LC:A32 pathways. We found that silencing LC:vlOFC, but not LC:A32, projections impaired the rats' ability to acquire and express the reversed instrumental contingencies.

Figure 4 caption (partial): ... adapted from Figures 8, 9 and 11 of The Rat Brain in Stereotaxic Coordinates (Paxinos and Watson, 2014). (C) Immunofluorescent staining for dopamine beta hydroxylase (DβH) and mCherry in the locus coeruleus (LC) of a representative rat injected with CAV2-PRS-hM4Di-mCherry in the orbitofrontal cortices (OFC). (D) High colocalization of immunofluorescent staining for HA (tag of the inhibitory DREADD) and mCherry in the LC of the same representative rat injected in the OFC. (E) Comparison of antero-posterior DAB staining for HA in two representative rats, one injected in the OFC, the other in A32. Scale bar panel B: 1 mm; scale bars panels C, D, E: 100 μm.

NA inputs into the OFC, but not the A32, are required for A-O updating

The use of the SAP toxin led to a dramatic decrease in NA fibre density in all analysed cortical areas (Figure 1B and Figure 1-figure supplement 2A). This may be due to diffusion of the toxin from the injection site or to the existence of collateral LC neurons and/or fibres passing through the ventral portion of the OFC but targeting other cortical areas (Cerpa et al., 2019). However, injection of 6-OHDA led to less off-site NA depletion, suggesting that a large part of the previous observation is toxin-specific. Indeed, no significant loss of NA fibres was visible in the insular cortex (Figure 2-figure supplement 2B), which has been previously implicated in goal-directed behaviour (Balleine and Dickinson, 2000; Parkes and Balleine, 2013; Parkes et al., 2015).
We did nevertheless observe an off-site depletion in more proximal prefrontal areas (prelimbic/A32 and MO), albeit a more modest depletion than what was observed using the SAP toxin. Several studies have described the projection pattern of LC cells. These studies, using various techniques, indicate that LC cells mainly target a single region, and that only a small proportion of LC neurons collateralize to minor targets (Plummer et al., 2020; Kebschull et al., 2016; Uematsu et al., 2017; Chandler et al., 2014). Therefore, even if the OFC NA innervation is presumably specific (Chandler et al., 2013), we cannot rule out a possible collateralization of some neurons toward neighbouring prefrontal areas (including A32 and MO). We have previously discussed that the posterior ventral portion of the OFC is an entry point for LC fibres en passant, which ultimately target other prefrontal areas (Cerpa et al., 2019). We then used a CAV2 vector carrying the NA-specific promoter PRS to target either the LC:vlOFC or the LC:A32 pathway (Hayat et al., 2020; Hirschberg et al., 2017). It has been shown that the CAV2 vector can infect axons of passage; however, the vector does not spread more than 200 µm from the injection site (Schwarz et al., 2015). Therefore, when targeting the OFC, we injected anteriorly to the level where the highest density of fibres of passage is expected (Cerpa et al., 2019), in order to minimize infection of such fibres and restrict inhibition to our pathway of interest.

Overall, the current behavioural results are in line with our previous work showing that the ability to associate new outcomes with previously acquired actions is impaired following chemogenetic inhibition of the VO and LO, or disconnection of the VO and LO from the submedius thalamic nucleus (Fresno et al., 2019). These results point to a role for the ventral and lateral parts of the OFC and its NA innervation in updating A-O associations. However, it is worth mentioning that different subregions of the OFC, both along the medio-lateral and antero-posterior axes of the OFC, display clear functional heterogeneities (Bradfield and Hart, 2020; Izquierdo, 2017; Panayi and Killcross, 2018; Bradfield et al., 2018; Barreiros et al., 2021). Therefore, while we have previously focused on the anatomical heterogeneity of the NA innervation in these prefrontal subregions (Cerpa et al., 2019), a thorough characterization of its functional role in each of these subregions still needs to be addressed. We must also acknowledge that only male rats were used in the current study. The LC displays some anatomical and physiological variations between male and female rats (Joshi and Chandler, 2020), therefore a thorough characterization would also need to integrate this sex factor.

Figure 5-figure supplement 1: Initial training and test for rats injected with CAV2-PRS-hM4Di-mCherry in the orbitofrontal cortex (OFC); source data: behavioural data from the initial phase for LC:vlOFC rats. Figure 5-figure supplement 2: Initial training and test for rats injected with CAV2-PRS-hM4Di-mCherry in area 32 (A32); source data: behavioural data from the initial phase for LC:A32 rats.

Our key finding is that NA inputs to the OFC are required for updating the association between an action and its outcome.
Indeed, a similar impairment was observed when NA depletion was performed either prior to initial training or prior to reversal training, which indicates that the reversal period is critically reliant on NA inputs. In addition, chemogenetic silencing of the LC: vlOFC pathway before the reversal training, or before testing also produced similar impairments, which further demonstrates that OFC NA inputs are required for both the encoding and the recall of new A-O associations. These results are consistent with recent views on the role of the OFC in goal-directed behaviour Panayi and Killcross, 2018;Cerpa et al., 2021). In contrast to NA inputs to the OFC, our results show that NA inputs to A32 (the prelimbic cortex) are not required for responding based on initial or reversed instrumental contingencies. These data add to the current literature indicating a major dissociation in the role of NA inputs to different prefrontal regions (Robbins and Arnsten, 2009). Indeed, NA inputs to the medial wall of the PFC are required for attentional regulation. Specifically, lesioning NA inputs Newman et al., 2008) or chemogenetic inhibition of NA inputs to the mPFC (Cope et al., 2019) alters attentional setshifting, while NA recapture inhibition via atomoxetine improves it (Newman et al., 2008). OFC NA depletion can also alter cue-outcome reversal, but not dimensional shift, in an attentional set-shifting task (Mokler et al., 2017). Recently, it was also shown that reversible inactivation of the medial OFC (mOFC) decreased performance accuracy on a two-armed bandit task in rats (Swanson et al., 2022). Interestingly, performance accuracy was also impaired following systemic, but not intra-mOFC, administration of an NA antagonist (Swanson et al., 2022). This result seems consistent with our finding showing that reversal learning and expression is intact when the LC: A32 pathway is silenced and may suggest a potential role for the ventral and lateral regions of OFC, rather than mOFC, in this effect. NA, but not DA, inputs to the OFC are required for A-O updating Using a strategy which allows for a differential depletion of DA and/or NA fibres, we found that NA-dependent mechanisms are required during the encoding and recall of new A-O. The role of cortical DA-dependent mechanisms in goal-directed behaviour remains poorly understood, but we have previously shown that DA signalling in the prelimbic cortex/A32 plays a critical role in the detection of contingency degradation (Naneix et al., 2009). Such detection is likely to involve the processing of non-expected rewards which induces, at the level of A32, a DA-dependent reward prediction error signal (Montague et al., 2004;Schultz and Dickinson, 2000). These results therefore raise the possibility that the coordination of goal-directed behaviour under environmental changes might depend on a DA-A32 system to adapt to causal contingencies and an NA-OFC system to adapt to changes in outcome identity (Cerpa et al., 2021). However, it is not yet clear if the NA-OFC system is also involved in detecting the causal relationship between an action and its outcome (see Cerpa et al., 2021, for a discussion). Some have reported impaired adaptation to contingency changes following inhibition of VO and LO or BDNF knockdown in these regions (Whyte et al., 2021;Zimmermann et al., 2017), while another study showed that inhibition of VO/LO leaves sensitivity to degradation intact, at least during an initial test (Zimmermann et al., 2018). 
Interestingly, a recent paper in marmosets demonstrates that inactivation of anterior OFC (Area 11) improves instrumental contingency degradation, whereas overactivation impairs degradation (Duan et al., 2021). The potential role of the rodent ventral and lateral regions of OFC, and of NA innervation to the OFC, in adapting to degradation of instrumental contingencies requires further investigation. Updating goal-directed behaviour When trained on reversed contingencies, animals encode the new A-O associations (Fresno et al., 2019; Parkes et al., 2018). Under similar experimental conditions, past research has shown that reversal learning performance is the result of updating prior existing A-O contingencies without unlearning the initial contingencies (Bradfield and Balleine, 2017). In other words, the animals build a partition between a state for the new contingencies and the initial state of old contingencies (Hart and Balleine, 2016). Current research has proposed that the OFC is critically involved in this partition of information when task states change without explicit notice (McDannald et al., 2011; Wilson et al., 2014; Sadacca et al., 2017; Wikenheiser and Schoenbaum, 2016). Consistent with this view, chemogenetic inhibition of the OFC (ventral and lateral) impairs goal-directed responding following identity reversal (Howard and Kahnt, 2021). Here, we found a similar deficit following lesion of NA inputs to the OFC. Given that the deficit in goal-directed behaviour was restricted to the reversal phase, including both reversal training and the test based on reversed contingencies, it is likely that NA-OFC is involved both in creating new states and in the 'online' use of the information included in this new state. Such a proposal is in accordance with popular LC-NA system theories suggesting that a rise in NA activity allows for behavioural flexibility when a change in contingencies is detected (Aston-Jones et al., 1997; Bouret and Sara, 2005; Sadacca et al., 2017). Conclusion Our results provide evidence for the involvement of NA inputs to the ventral and lateral OFC in the updating and use of new A-O associations. Recent research has revealed a remarkable parcellation of cortical functions in goal-directed action (Fresno et al., 2019; Turner and Parkes, 2020; Dalton et al., 2016). The current study provides a clear basis for an in-depth understanding of the cortical coordination involved in executive functions. Animals and housing A total of 136 male Long-Evans rats, aged 2-3 months, were obtained from the Centre d'Elevage Janvier (France). Rats were housed in pairs with ad libitum access to water and standard lab chow prior to behavioural experiments. Rats were handled daily for 3 days prior to the beginning of the experiments and were put on food restriction 2 days before behaviour to maintain them at approximately 90% of their ad libitum feeding weight. The facility was maintained at 21 ± 1°C on a 12 hr light/dark cycle (lights on at 8:00 am). Environmental enrichment was provided by tinted polycarbonate tubes and nesting material, in accordance with current French ( Stereotaxic surgery For all experiments, rats were anaesthetized with 5% inhalant isoflurane gas with oxygen and placed in a stereotaxic frame with atraumatic ear bars (Kopf Instruments) in a flat skull position. Anaesthesia was maintained with 1.5% isoflurane and complemented with a subcutaneous injection of ropivacaïne (a bolus of 0.1 mL at 2 mg/mL) at the incision site. 
After each injection, the injector was kept in place for an additional 10 min before being removed. Rats were given 4 weeks to recover following surgery. Injection sites were confirmed histologically after the completion of behavioural experiments. In the first experiment (n=57), we used a toxin selective for NA neurons (SAP) to target and deplete NA terminals in the VO and LO. For half of the rats ('Pre' groups, n=29), surgery was performed before the initial instrumental training and testing phase, for the other half surgery was performed after the initial training and testing ('Post' groups, n=28). Intracerebral injections were made using repeated pressure pulses delivered via a glass micropipette connected to a pressure injector (Picospritzer III, Parker). For SAP groups (Pre n=15; Post n=15), 0.1 µL of anti-DβH SAP (0.1 µg/µL) was bilaterally injected at one site targeting both VO and LO, while CTL rats (Pre n=14; Post n=13) received 0.1 µL of inactive IgG SAP (0.1 µg/µL). Injection coordinates (in mm from Bregma) were determined from the atlas of Paxinos and Watson, 2014: +3.5 antero-posterior (AP), ±2.2 medio-lateral (ML), and -5.4 dorso-ventral (DV). We then used a toxin selective for CA neurons (6-OHDA hydrochloride) and a noradrenaline uptake-blocker (Desi) to target and deplete DA neurons in the VO and LO. All rats underwent surgery after the initial instrumental training phase. Rats were then allocated to the full CA depletion condition (group 6-OHDA n=12; CTL n=8) or the specific DA depletion condition (6-OHDA+Desi n=9; CTL n=8). 6-OHDA (4 µg/µL) was dissolved in vehicle solution containing 0.9% NaCl and 0.1% ascorbic acid. A volume of 0.2 µL of 6-OHDA was bilaterally injected in the OFC at the same coordinates as for the first experiment. Animals in the CTL group received injections of the vehicle solution. Thirty minutes before the surgical procedure, animals in the 6-OHDA+Desi group received a systemic (i.p.) injection of Desi (25 mg/mL) at a volume of 1 mL/kg. Validation of the CAV2-PRS-hM4Di-mCherry construct Although previous studies have validated the use of the CAV2-PRS construct using a range of actuators, including excitatory and inhibitory opsins, as well as potassium channels (Howorth et al., 2009a;Howorth et al., 2009b;Hickey et al., 2014;Li et al., 2016), we used a separate cohort of rats to verify the functional effect of our DREADDs construct in vivo by quantifying changes in the expression of c-Fos, a recognized marker of neuronal activation, upon administration of DCZ. In order to obtain a high baseline of c-Fos activation, rats underwent a stress procedure. As shown in Figure 4-figure supplement 1B, rats were administered i.p. with either vehicle or DCZ (0.1 mg/kg) 45 min before being placed in a Plexiglas shock chamber equipped with stainless steel rods on the floor and a circuit generator connected to a scrambler and a timing unit. Rats received five shocks (0.5 s, 0.8 mA) randomly interspersed over 10 min (stress condition). As a control of c-Fos activation, we included in the experimental design animals administered with vehicle, but left untouched in their home cage (no stress condition). Rats were perfused 90 min after the procedure and coronal slices collected as described in the Histology section. For hM4Di/c-Fos colocalization analysis, sections were taken (see Figure 4-figure supplement 1-source data 1) from antero-posterior levels of the LC from -9.50 to -10.0 mm from Bregma (vehicle/no stress, n=2; vehicle-stress, n=4; DCZ-stress, n=4). 
Quantification of the percentage of LC hM4Di-positive cells (mCherry, red) that co-express c-Fos (Alexa 488, green) was performed by a trained observer blind to the experimental conditions. Behavioural apparatus For all behavioural experiments, training and testing was conducted in eight identical operant chambers (40 cm width × 30 cm depth × 35 cm height, Imetronic, Pessac, France) individually enclosed in sound and light-resistant wooden chambers (74 × 46 × 50 cm 3 ). Each chamber was equipped with two pellet dispensers that delivered grain (Rodent Grain-Based Diet, 45 mg, Bio-Serv) or sugar (LabTab Sucrose Tablet, 45 mg, TestDiet) pellets into a food port when activated. For instrumental conditioning, two retractable levers were located on each side of the food port. Each chamber had a ventilation fan producing a background noise of 55 dB. During the session, the chamber was illuminated by four LEDs in the ceiling. Experimental events were controlled and recorded by a computer located in the room and equipped with the POLY software (Imetronic). Behavioural protocol Initial training and test The training procedure was adapted from Parkes et al., 2018. On days 1-3, rats were trained to retrieve food pellets from the food port. During each daily session, 40 sugar and 40 grain pellets were delivered pseudo-randomly every 60 s, on average. Following food port training, rats received 12 daily sessions of instrumental training, during which they were required to learn initial A-O associations. During these sessions, each lever, in alternation, was presented twice for a maximum of 10 min or until 20 outcomes were earned. The inter-trial interval between lever presentations was 2.5 min (i.e., the session could last up to 50 min and the rats could obtain a maximum of 80 food pellets). The A-O associations and the order of lever presentations were counterbalanced between rats and days. During the first three sessions, lever pressing was continuously reinforced with a fixed ratio (FR) 1 schedule. Then, the probability of receiving an outcome was reduced, first with a random ratio (RR) 5 schedule (days 4-6), then with an RR10 (days 7-9,) and an RR20 schedule (days 10-12). Outcome devaluation tests were performed 1 day after the last instrumental training session. First, to induce sensory-specific satiety (Rolls et al., 1986), rats received access to one of the two outcomes (20 g) for 1 hr in a set of plastic feeding cages to which they were previously habituated. Immediately after the satiety procedure, rats were returned to the operant chambers where they were given a choice test in extinction (i.e., unrewarded) with both levers available for 10 min. The devalued (sated) food was counterbalanced between rats. Following the extinction test, animals were returned to the plastic feeding cages and given a consumption test of satiety-induced devaluation, during which they received 10 min concurrent access to both types of food pellets (10 g of each). The amount consumed of each pellet type was measured to confirm that the satiety-induced devaluation was effective and that rats were able to distinguish between the sensory features of the different food pellets. Reversal training and test Following the initial phase, rats were trained on reversed A-O associations with a procedure adapted from previous studies in our laboratory (Fresno et al., 2019;Parkes et al., 2018). 
Specifically, the identity of outcomes was switched so that rats had to update previously established A-O associations, always keeping a RR20 schedule of reinforcement. Following reversal training, outcome devaluation tests were conducted in the same manner as previously described. Chemogenetics The DREADD agonist deschloroclozapine (DCZ) was dissolved in dimethyl sulfoxide (DMSO) to a final volume of 50 mg/mL, aliquoted in small tubes (50 μL) and stored at -80°C (stock solution). For behavioural experiments, our stock solution was diluted in physiological saline to a final injectable volume of 0.1 mg/kg and administered systemically (i.p.) 40-45 min prior to testing at a volume of 10 mL/kg. Fresh injectable solutions were prepared from stock aliquots on the day of the usage. DCZ was prepared and injected under low light conditions. Histology At the end of all behavioural experiments, rats were injected with a lethal dose of sodium pentobarbital (Exagon Euthasol) and perfused transcardially with 60 mL of saline followed by 260 mL of 4% paraformaldehyde (PFA) in 0.1 M phosphate buffer (PB). Brains were removed and post-fixed in the same PFA 4% solution overnight and then transferred to a 0.1 M PB solution or to a 0.1 M PB with 30% saccharose solution (6-OHDA experiment). Subsequently, 40 µm coronal sections were cut using a VT1200S Vibratome (Leica Microsystems) or freezing microtome for the 6-OHDA experiment. Every fourth section was collected to form a series. DAB staining was performed for DβH (for the SAP and the 6-OHDA experiments), TH (6-OHDA experiment), HA, and mCherry (chemogenetic experiments). Free-floating sections were first rinsed (4×5 min) in 0.1 M phosphate buffer saline (PBS) containing 0.3% Triton X-100 (PBST) and then incubated in PBST containing 0.5% (for mCherry) or 1% (for DβH and TH) hydrogen peroxide solution (H 2 O 2 ) for 30 min in the dark. Further rinses (4×5 min) in PBST and a 1 hr incubation in blocking solution (PBST containing 3% goat serum) followed. Sections were then incubated with the primary antibody (mouse monoclonal anti-DβH, 1/1000; mouse monoclonal anti-TH, 1/2000; rabbit monoclonal anti-HA, 1/1000; rabbit polyclonal anti-RFP, 1/2000) diluted in blocking solution for 24 hr (for mCherry) or 48 hr (for DβH and TH) at 4°C. After rinses (4×5 min) in PBS (for DβH and TH) or PBST (for mCherry), sections were placed in a bath containing the secondary antibody (biotinylated goat anti-mouse, 1/1000; biotinylated goat anti-rabbit, 1/1000) diluted in PBS (for DβH and TH) or PBST containing 1% goat serum (for mCherry) for 2 hr at room temperature. Following rinses (4×5 min) in PBS (for DβH and TH) or PBST (for mCherry), they were then incubated with the avidin-biotin-peroxydase complex (1/200 in PBS for DβH and TH; 1/500 in PBST for mCherry) for 90 min in the dark at room temperature. H 2 O 2 was added to the solution before the final staining with DAB was made (10 mg tablet dissolved in 50 mL of 0.1 M Tris buffer). Stained sections were finally rinsed with 0.05 M Tris buffer (2×5 min) and 0.05 M PB (2×5 min), before being collected on gelatincoated slides using 0.05 M PB, dehydrated (with xylene for DβH and TH), mounted and cover-slipped using the Eukitt mounting medium. Fibres loss quantification To measure fibre density in the SAP and 6-OHDA experiments, we used the protocol described in Cerpa et al., 2019. We examined sections at +4.4, +3.7, and+3.0 (mm from Bregma) using a Nanozoomer slide scanner with a 20× lens (Hamamatsu Photonics). 
Digital photomicrographs of regions of interest (ROI, square windows of 300×300 µm², 1320×1320 pixels) in each hemisphere were examined under a 20× virtual lens with the NDP.view 2 freeware (Hamamatsu Photonics). Each ROI was outlined according to Paxinos and Watson, 2014. Quantification of DβH- and TH-positive fibres was performed using an automated method developed in the laboratory with the ImageJ software (Cerpa et al., 2019). Specifically, a digitized version of the photomicrograph was converted to black and white by combining the blue, red, and green channels (weights 1, -0.25, and -0.25), subjected to a median filter (radius 3 pixels) in order to improve the signal-to-noise ratio, smoothed with a Gaussian filter (radius 8), and subtracted from the previous picture to isolate high spatial frequencies. Large stains were further eliminated by detecting them in a copy of the image. The picture was then subjected to a fixed threshold (grey level 11) to extract stained elements, and the relative volume occupied by fibres was estimated by the proportion of detected pixels in the ROI. As a control for poor focus, the same images were analysed a second time while allowing lower spatial frequencies (Gaussian filter radius 20). The ratio between the proportions of pixels detected by the two methods was used as a criterion to eliminate blurry images. Experimental design and data analysis Each rat was assigned a unique identification number that was used to conduct blind testing and statistical analyses. Behavioural data and fibre volume were analysed using sets of between- and within-subject orthogonal contrasts, controlling the per-contrast error rate at alpha = 0.05 (Hays, 1963). Simple effects analyses were conducted to establish the source of significant interactions. Statistical analyses were performed using the PSY Statistical Program (Bird et al., 2022) and graphs were created using GraphPad Prism. All experiments employed a between- × within-subjects behavioural design. In the first experiment, the between-subject factors were group (Pre vs. Post) and treatment (CTL vs. SAP) and the within-subject factors were training day (acquisition data) or devaluation for the test data (responding on the lever associated with the non-devalued or devalued outcome). In the 6-OHDA experiments, the between-subject factor was group (CTL vs. 6-OHDA or CTL vs. 6-OHDA+Desi) and the within-subject factor was training day (acquisition data) or devaluation (test data). To analyse DβH and TH fibre volume, the between-subject factor was group (experiment 1: CTL vs. SAP; experiment 2: CTL, 6-OHDA+Desi, or 6-OHDA) and the within-subject factor was region (VO vs. LO) for the OFC. There was no within-subject factor for the quantification of fibres in the other regions of the PFC. In the final chemogenetics experiment, the between-subject factor was treatment during reversal acquisition (vehicle vs. DCZ) and the within-subject factors were training day (acquisition data) or treatment during the reversal test (vehicle vs. DCZ) and devaluation (test data). To analyse hM4Di/c-Fos colocalization, a Student's t-test was used to compare vehicle- and DCZ-treated animals within the stress condition.
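For readers who wish to see the fibre-density estimate described above in executable form, a minimal Python sketch (NumPy/SciPy) is given below. The function name, the mapping of the ImageJ "radius" settings onto SciPy filter parameters, the omission of the large-stain removal step, and the direction of the blur-rejection criterion are illustrative assumptions rather than the laboratory's actual macro.

```python
import numpy as np
from scipy import ndimage

def fibre_fraction(rgb_roi, median_size=7, hp_sigma=8.0, focus_sigma=20.0,
                   threshold=11.0, focus_ratio_max=3.0):
    """Estimate the fraction of a 300x300 um ROI occupied by stained fibres.

    Follows the pipeline in the text: channel mixing (blue, red, green with
    weights 1, -0.25, -0.25), median filtering, high-pass filtering by
    subtracting a Gaussian-smoothed copy, and a fixed grey-level threshold.
    The large-stain removal step is omitted for brevity.
    """
    r, g, b = (rgb_roi[..., i].astype(float) for i in range(3))
    grey = 1.0 * b - 0.25 * r - 0.25 * g
    # Median filter (size ~ 2*radius + 1) to improve the signal-to-noise ratio
    den = ndimage.median_filter(grey, size=median_size)
    # Keep high spatial frequencies: subtract a Gaussian-smoothed copy
    high = den - ndimage.gaussian_filter(den, sigma=hp_sigma)
    mask = high > threshold                 # fixed threshold (grey level 11)
    frac = float(mask.mean())               # proportion of detected pixels
    # Focus control: repeat the detection while allowing lower spatial frequencies
    low = den - ndimage.gaussian_filter(den, sigma=focus_sigma)
    frac_low = float((low > threshold).mean())
    # The text uses the ratio of the two proportions to flag blurry images;
    # the cut-off value and its direction are assumptions here.
    blurry = frac > 0 and (frac_low / max(frac, 1e-9)) > focus_ratio_max
    return frac, blurry

# Toy usage on a random image (replace with a real 1320x1320 pixel ROI crop)
roi = np.random.randint(0, 255, (1320, 1320, 3), dtype=np.uint8)
fraction, flagged = fibre_fraction(roi)
print(f"fibre fraction: {fraction:.4f}, flagged as blurry: {flagged}")
```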
2023-02-22T16:17:35.551Z
2023-02-20T00:00:00.000
{ "year": 2023, "sha1": "60912e6bd0572759e29fd31be808716b5d73cfc3", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "133906ddd989fb010aa5df7fa06b8cc32d0bda0c", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
257030629
pes2o/s2orc
v3-fos-license
The low density lipoprotein receptor-related protein 1 plays a significant role in ricin-mediated intoxication of lung cells Ricin, a highly lethal plant-derived toxin, is a potential biological threat agent due to its high availability, ease of production and the lack of approved medical countermeasures for post-exposure treatment. To date, no specific ricin receptors have been identified. Here we show, for the first time, that the low density lipoprotein receptor-related protein-1 (LRP1) is a major target molecule for binding of ricin. Pretreating HEK293 acetylcholinesterase-producer cells with either anti-LRP1 antibodies or with Receptor-Associated Protein (a natural LRP1 antagonist), or using siRNA to knock down LRP1 expression, resulted in a marked reduction in their sensitivity towards ricin. Binding assays further demonstrated that ricin bound exclusively to the cluster II binding domain of LRP1, via the ricin B subunit. Ricin binding to the cluster II binding domain of LRP1 was significantly reduced by an anti-ricin monoclonal antibody, which confers high-level protection to ricin pulmonary-exposed mice. Finally, we tested the contribution of the LRP1 receptor to ricin intoxication of lung cells derived from mice. Treating these cells with anti-LRP1 antibody prior to ricin exposure prevented their intoxication. Taken together, our findings clearly demonstrate that the LRP1 receptor plays an important role in ricin-induced pulmonary intoxications. These findings demonstrate for the first time that the transmembrane receptor LRP1 plays a central role in ricin poisoning and open new vistas for the development of novel therapeutic agents for dealing with ricin-induced intoxications. Results Interactions of ricin with membrane-bound proteins from mice lungs. To examine whether ricin binds in a differential manner to cell-surface proteins, murine lung cell membrane proteins were resolved by SDS-PAGE and transferred to absorbent membranes which were then incubated with purified preparations of either ricin or ricin-related Ricinus communis agglutinin (RCA). Labeling with polyclonal anti-ricin antibody, which interacts with both ricin and RCA, revealed that while RCA seems to bind in an indiscriminate manner to a wide range of lung cell membrane proteins, purified ricin was found to bind to a limited number of discrete protein bands (Fig. 1). Identification of ricin-bound lung cell membrane proteins. The labeled bands detected above consist of proteins which were extracted from lung cell outer membranes and then resolved by SDS-PAGE and electro-transferred to a PVDF membrane. These processes are expected to alter the conformational structures of the respective proteins in a radical manner and therefore their apparent in-vitro interaction with ricin may not faithfully reflect the binding that occurs between ricin and cell-membrane bound proteins in intact cells. To redress this issue, ricin was allowed to interact with lung cell membranes and proteins were then resolved on native gels under conditions which are expected to preserve protein/ricin complexes intact. Protein transfer was also performed under unique conditions to avoid protein complex disruption, utilizing the Blue-native polyacrylamide gel electrophoresis (BN-PAGE) methodology 9. Following labeling with polyclonal anti-ricin antibodies, 3 faint bands were detected. Figure 1 (caption): Lectin blot of membrane-bound proteins from mice lungs: Lung cell membrane proteins were resolved by SDS-PAGE, transferred to absorbent membranes, and incubated with purified preparations of RCA or ricin. 
Black frames indicate that these are non-consecutive lanes taken from two blots. Confocal microscopy analysis of ricin binding to LRP1 in cultured HEK293 cells. To characterize the binding of ricin to LRP1, the localization of these two proteins in ricin-intoxicated HEK293 cells was determined. To this end, HEK293 cells were grown to confluence and then incubated with ricin. Cells were then fixed, stained simultaneously with fluorescently-labeled anti-ricin and anti-LRP1 antibodies and visualized by immunofluorescence confocal microscopy. As shown in Fig. 2a, merging of the anti-ricin and anti-LRP1 stained cells resulted in highly co-localized staining (yellow staining, rows 3-4, right panel). Rows 1 and 2 in Fig. 2a display single-fluorophore controls for each antibody. Quantification of % co-localization showed that 90.6% of the ricin is linked to LRP1 and that 60.7% of the LRP1 is occupied by ricin (Fig. 2b), suggesting that LRP1 has a major and nearly-exclusive contribution to the binding of ricin to the outer-surface of these cells. Blocking LRP1 abrogates ricin toxicity in HEK293 cells. To assess the contribution of LRP1 to ricin-mediated cytotoxicity, we examined whether preclusion of ricin binding to LRP1 would protect cells from ricin intoxication. To this end, we utilized the genetically engineered HEK293-AChE cell line, which constitutively synthesizes and secretes large amounts of acetylcholinesterase (AChE) to the culture medium 12 . Ricin mediated protein synthesis arrest, an early event in ricin intoxication, results in diminished production and secretion of AChE with a half maximal inhibitory concentration (IC 50 ) of 0.1 ng/ml. We first examined the effect of anti-LRP1 antibodies on HEK293 ricin-induced intoxication. To this end, cells were pre-incubated with rabbit-anti-LRP1 antibody, exposed to ricin (10 ng/ml, 100 IC 50 ) for 1 hour, rinsed and incubated at 37 °C for 18 hours, after which secreted AChE levels were quantified. Cells which were not pre-incubated with anti-LRP1 antibody, served as a positive control for ricin intoxication. While exposure of the control cells to ricin led to a 70% reduction in secreted AChE levels, pretreatment with anti-LRP1 antibodies led to significantly higher levels of secreted AChE, which were only 30% lower than the levels exhibited by non-intoxicated cells. Pretreatment of the HEK293-AChE cells with a non-related antibody (anti-B. anthracis-PA antibody) did not protect cells from intoxication; the extracellular levels of AChE measured in these cells were as low as those measured in ricin-intoxicated cells that were not treated with antibodies (Fig. 3a). For specific binding-site blockage of the LRP1 receptor, we utilized Receptor-Associated Protein (RAP), a natural antagonist of LRP1 13 . We tested RAP's ability to competitively inhibit binding and entry of ricin into HEK293-AChE cells. To this end, cells were chilled to 4 °C and incubated with RAP. These conditions are compatible with ligand-receptor binding yet do not allow internalization of the newly formed ligand/receptor complexes. After 1 hour ricin was added at different concentrations for 1 hour, then cells were washed and incubated at 37 °C for 24 hours to allow internalization of cell-linked ricin. Cells exposed to ricin without preincubation with RAP served as a positive control for ricin intoxication. As shown in Fig. 
3b, cells exposed to ricin at a concentration of 2 ng/ml (20 IC50) displayed a ~80% decrease in secreted AChE activity, while cells which were treated with RAP prior to the addition of the same concentration of toxin expressed nearly normal levels of AChE (95% compared to non-intoxicated cells). Only when the cells were incubated with higher ricin concentrations was RAP-related protection compromised, in a dose-dependent manner. Thus, when the cells were exposed to a 4-fold higher dose of ricin (8 ng/ml, 80 IC50), pre-incubation with RAP resulted in 40% AChE activity compared to 7.5% activity without RAP pretreatment, while exposure to a 16-fold higher dose of ricin (32 ng/ml, 320 IC50) abolished the beneficial effect of RAP. In this latter instance, both RAP-pretreated and non-treated cells displayed no more than residual levels of secreted AChE (8.7 and 7.1%, respectively). Taken together, this set of experiments confirms that the LRP1 receptor plays an important role in ricin-induced intoxication and that functional antagonism of the LRP1 receptor leads to substantially reduced sensitivity of the cells to ricin. Figure panel (c) caption: Cells treated with control or LRP1 siRNAs were exposed to ricin (10 ng/ml) and 18 hours later secretion of AChE was quantified. Results are presented as mean ± SE of 3 measurements. Ricin binds to LRP1 binding-cluster II. The extra-cellular segment of LRP1 comprises complement-type repeats (CRs) that are organized into four distinct clusters (I-IV) to which most of the known LRP1 ligands bind 14. It was therefore of interest to determine which cluster of LRP1 serves as the binding site of ricin. To this end, biotinylated LRP1 clusters II-IV, known to bind most of the identified LRP1 ligands, were separately immobilized on Octet streptavidin biosensors and interacted with ricin toxin. Biotinylated asialofetuin (ASF), a well-known ligand of ricin, was also immobilized on Octet streptavidin biosensors and served as a positive control for ricin binding. Only LRP1 cluster II was found to bind ricin, whereas no measurable interactions between ricin molecules and LRP1 cluster III or IV could be discerned (Fig. 5a). To examine whether the binding of ricin to LRP1 cluster II indeed represents a bona fide RTB-driven interaction, we measured binding rates of ricin and its isolated subunits to biotinylated soluble cluster II on an Octet sensor. When ricin holotoxin (10 μg/ml) was added, it quickly bound to cluster II, reaching near saturation at about a 1 nm shift, and dissociated in a bi-phasic manner (Fig. 5b). Next, the cluster II biosensor was interacted with purified RTB (10 μg/ml), inducing a marked wavelength interference reaching about 0.5 nm after 300 seconds. As the wavelength shift is proportional to the protein mass, these results fit well with the fact that the molecular weight of RTB is approximately half of that of the holotoxin (33 kDa and 67 kDa, respectively). In contrast, when cluster II interacted with a purified preparation of the catalytic A subunit of ricin (RTA, 10 μg/ml), low-to-insignificant binding was observed (the residual binding probably reflects impurities of holotoxin in the RTA preparation, which are estimated to be less than 5%). The binding kinetics of ricin to cluster II were characterized using the same platform with increasing concentrations of ricin. 
As ricin has two nearly identical lectin-binding site located within its B-subunit, it was assumed that each binding site will bind the receptor independently. Accordingly, the binding sensograms were fitted using the 2:1 heterogeneous ligand model which is a combination of two 1:1 curve fits. Indeed, this model resulted in an excellent fit to the binding sensograms for the tested ricin concentrations (r = 0.99, Fig. 5c). Conversely, when the binding data was fitted using a model in which ricin binds LRP1 at only one site (1:1), a poor fit was generated. Using the 2:1 model, the overall affinities values (K D ) of the two binding sites of ricin toward LRP1 were calculated to be 81 and 47 nM (K D 1 and K D 2, respectively). These results support the assumptions that the two ricin-lectin binding sites interact with LRP1 in an independent manner, albeit, at similar affinities. The effect of ricin-neutralizing antibodies on the interaction with LRP1. Previously, phage-display libraries based on antibody-encoding genes originated from ricin-immunized non-human primates, were utilized to isolate a set of anti-ricin monoclonal antibodies which bind to either RTA or RTB 15 and their ability to neutralize ricin was demonstrated both in vitro and in vivo 16 . In view of our findings regarding the role of LRP1 in the intoxication process of ricin, it was of interest to determine whether one or more of these monoclonal neutralizing antibodies impairs the binding of ricin to LRP1. To address this issue, biotinylated cluster II was immobilized on an Octet biosensor and the maximal wavelength interference induced by ricin was evaluated in the absence or the presence of each antibody. To set up the experimental system, the binding of ricin was first tested in the presence of excess of galactose which was shown before to bind the lectin-binding moieties of the toxin. As expected, while ricin induced a wavelength shift of about 1.3 nm, galactose completely abolished the binding of the toxin to cluster II (Fig. 6). Next, ricin was pre-incubated with each antibody and each of the formed toxin/antibody complexes was interacted with the immobilized cluster II. We first tested the MH1 mAb which targets RTA and found that the ricin-MH1 complex induced a significant increase in the wavelength shift compared to that of ricin alone (Fig. 6). As the extent of the wavelength interference is dependent in part upon the antigen mass, these results fit well with the assumption that the binding of MH1 to ricin does not hamper RTB-mediated binding to the receptor and since the mass of the MH1-ricin complex is larger than that of the toxin alone, the net result is an increase in the apparent signal. The monoclonal antibodies MH73, MH75 and MH77 bind with similar affinities to non-overlapping epitopes located on the surface of RTB 15 . Formation of complexes between ricin and the MH73 or MH75 mAbs did not prevent the toxin binding to cluster II and actually, once again, increased the measured wavelength shift (Fig. 6). In contrast, antibody MH77 reduced the binding of ricin to cluster II by more than 70%. These results suggests that the MH77 anti-ricin monoclonal antibody and cluster II of LRP1 interact with the same region/epitope of ricin. It may well follow that the neutralizing effect of MH77 is due to its ability to prevent ricin binding to the LRP1 receptor. Role of the LRP1 in ricin induced cytotoxicity of lung cells. 
In the series of experiments described above, the contribution of the LRP1 receptor to ricin intoxication was examined in cultured cells (HEK293-AChE). It was therefore of interest to examine whether LRP1 plays a similarly significant role in ricin-mediated cytotoxicity in primary cells. Ricin is considered most toxic via the pulmonary route of exposure. We have previously reported that following pulmonary exposure of mice to ricin, different lung cell types bind the toxin at different rates and levels 7. Moreover, we found that the ricin-mediated rRNA depurination process occurs in different cell populations (24 hours post exposure) at distinctly different levels 8. Thus, depurination in neutrophils was negligible, while macrophages and endothelial cells displayed 10.8% and 22% depurination values, respectively. The most pronounced depurination activity was measured in epithelial cells, where depurination was found in more than 80% of these cells 24 hours post-exposure to ricin. In view of the role of LRP1 in ricin intoxications described above, we examined whether a correlation exists between the levels of depurination and LRP1 expression in these different lung cell types. To this end, single cell suspensions (SCSs) of mice lungs were subjected to flow cytometric analysis with both anti-cell-type specific and anti-LRP1 antibodies. While the overall expression of LRP1 on lung cells was 33% (33 ± 2.9% of the cells were LRP1 positive), neutrophils displayed no more than near-to-background levels of LRP1 (2.7 ± 0.53%), while 22.8 ± 4.2 and 70 ± 4.6 percent of the lung endothelial and epithelial cells, respectively, expressed this membrane-bound receptor, in excellent correlation with their measured depurination levels following ricin intoxication (Fig. 7a). In contrast, the high level of LRP1 expression in lung macrophage cells, 62 ± 8.1%, did not correlate with their measured depurination levels (~11%). We note, however, that unlike the other lung cell types examined, macrophages are eliminated very rapidly from the lungs following ricin exposure 7, so that their depurination levels, determined at 24 hours post exposure, long after removal of most of the macrophages, are a gross underestimation of the actual depurination process in this cell type. To test whether ricin toxin binds lung cells via the LRP1 receptor, lung SCSs were exposed to ricin (10 ng/ml) for 24 hours at 37 °C and then fixed and stained simultaneously with specific antibodies directed against ricin and LRP1 and visualized by immunofluorescence confocal microscopy. As shown in Fig. 7b, merging of the anti-ricin and anti-LRP1 stained cells resulted in yellow staining representing co-localized ricin and LRP1 (rows 3-4, right panel). Rows 1 and 2 in Fig. 7b display single-fluorophore controls for each antibody. Quantification of the relative amounts of colocalized ricin and LRP1 indicated that 91.2% of the cell-membrane bound ricin is linked to LRP1 (Fig. 7c), and that 43.5% of the LRP1 receptor is occupied with ricin. To determine whether blocking the LRP1 receptor in the mice lung cells would prevent their intoxication by ricin, lung cells pretreated with anti-LRP1 antibody (1 hour, 37 °C) were exposed to ricin (10 ng/ml) and 48 hours later, cell viability was assessed utilizing the XTT assay. As shown in Fig. 
7d, anti-LRP1 antibodies nearly completely protected the lung SCSs from ricin intoxication, with 96.5% of the cells that were pretreated with anti-LRP1 remaining viable, compared to 55.4% of the ricin-intoxicated cells which were not pretreated with the anti-LRP1 antibody. Pretreatment of the cells with a non-related antibody (anti-B. anthracis-PA antibody) did not provide any protection from ricin to lung SCSs. These findings confirm that LRP1 functions as an important receptor for ricin in the lungs and that functional antagonism of the LRP1 receptor protects lungs from ricin intoxication. Discussion The scientific literature dealing with ricin provides detailed insights regarding the toxin's crystallographic structure 17,18, the unique mode of ricin intra-cellular trafficking [19][20][21][22][23] as well as its catalytic activity mechanism 24,25. In contrast, the binding profile of the toxin to the cell membrane has not been thoroughly characterized and no specific receptor has yet been identified. The identification of specific receptors for ricin is of practical importance, as it may help in the development of specifically-tailored therapeutics that prevent the binding of the toxin to target cells. So far, there has been a wide consensus as to the non-selective nature of ricin binding to all cell-surface glycoproteins or glycolipids containing a terminal galactose link [26][27][28]. Our observation that purified ricin was found to bind to a relatively low number of cell membrane proteins raised the possibility that, contrary to common belief, ricin interacts with specific transmembrane receptors. Ricinus communis agglutinin (RCA), which displays high-level homology to ricin, was found to have a wide binding profile. Indeed, the non-selective binding of RCA to cells is well documented and this protein is in fact used as an analytical tool for marking cell membranes and for identifying oligosaccharide structures on cell surfaces 29,30. Baenziger and Fiete 31 have shown that RCA can bind glycopeptides with terminal N-acetylgalactosamine residues, while ricin cannot. This greater affinity of RCA was reported previously 32, and may be related to the fact that RCA has double the number of binding sites, because of its tetrameric structure, as opposed to ricin, which is a dimeric molecule 33. Mass spectrometry analysis revealed that ricin/cell-membrane-protein complexes contain either the mannose receptor or the prolow-density lipoprotein receptor-related protein 1. Both the ricin A and B chains carry mannose-rich oligosaccharides and as such, ricin is known to bind to the mannose receptor on specific cells, i.e., macrophages or non-parenchymal liver cells 34,35. However, most cell types do not express the mannose receptor and consequently, ricin internalization into most cells is mediated exclusively by the B chain-related lectin activity 19,36. In cells expressing the mannose receptor, competitive prevention of ricin binding can be completely achieved only in the presence of both mannose and galactose. In this case, the ricin molecule functions simultaneously as a lectin that recognizes and binds galactose moieties and as a mannose substrate that is recognized by target cell membranal mannose receptors. 
Quantification of the binding of ricin to Kupffer cells in the presence of mannose or galactose revealed that the binding of the toxin to mannose receptor constitute only 7% of the total binding 35 . Moreover, exposure of macrophages to transgenic ricin that is defective in both RTB binding sites was found to be non-toxic even when the binding was mediated by the mannose receptor 37 . Based on this finding, it seems that binding through RTB is essential for the toxicity of ricin, even in the presence of an alternative receptor. However, a recent study 38 indicated an important role for mannose-receptor uptake of ricin by demonstrating that Kupffer cells could be equally protected from ricin intoxication by monoclonal antibodies directed against either the ricin A (non-lectin) or B (lectin) subunits. Thus, the mannose-receptor may play a greater role in ricin uptake than hitherto suggested, yet in parenchymal cells, which are mostly devoid of this receptor, an alternative point of entry is mandatory for ricin-induced cytotoxicity. LRP1, a cell surface receptor belonging to the LDL receptor gene family 39 , which interacts with a variety of ligands (e.g. apolipoprotein E, α2-macroglobulin, amyloid precursor protein and several proteases and protease inhibitors [40][41][42][43], plays a role in cell communication and signal transduction 40,44 and functions also as the cell entry receptor for the Pseudomonas exotoxin A and the minor-group common cold virus 45,46 . This 600 kDa receptor is a type I single-pass transmembrane protein 47 which contains a 515 kDa N-terminal extracellular heavy chain comprising 4 ligand-binding clusters, that is non-covalently attached to a 85 kDa membrane-integrated intracellular light chain. With the exception of receptor-associated protein (RAP), which serves as a molecular chaperone that interacts with all LRP1 clusters, most ligands bind exclusively to cluster II and/or IV 40 . The ability of LRP1 to bind with high affinity to numerous structurally distinct ligands, results from the presence of 31 ligand binding repeats in the molecule which form a unique contour surface and charge distribution, allowing multiple combinations of interactions between the ligand and receptor 48 . In this study, functional blockage of LRP1 by three distinct and unrelated methods, indicated that this membrane-bound protein acts as the main host cell receptor for the ricin toxin. First, treatment with anti-LRP1 antibody prior to ricin intoxication reduced toxicity in a substantial manner. Second, addition of RAP in excess, reduced or even prevented ricin-induced intoxication of HEK293 cells. RAP is found primarily in the endoplasmic reticulum where it functions as a molecular chaperone 49 that prevents association of newly synthesized LRP1 molecules with endogenous ligands. Due to its ability to antagonize ligand binding to this receptor, exogenously added RAP constitutes a powerful tool to study LRP1-mediated receptor/ligand interactions 50,51 . Finally, partial silencing of LRP1 expression in HEK293 cells by targeting siRNA, reduced ricin toxicity by 50%. From these results, we conclude that LRP1 is the primary endocytic receptor for ricin. We note in this context that the experimental procedures employed for functional blocking of LRP1 cannot refute the possibility that other components such as glycosylated ligands of LRP1, may play a role as intermediate molecules in ricin/LRP1 interactions. 
However, since we provide ample evidence for the fact that functional blocking of the LRP1 receptor reduces ricin cytotoxicity, we firmly believe that the attribution of a major ricin target receptor role for LRP1 is fully valid. The central role of LRP1 in ricin-induced HEK293 cell toxicity is in line with the fact that CHO cells were found to be resistant to Pseudomonas exotoxin (PE) as well as ricin 52,53. LRP1 is known to be the membrane receptor of PE 44 and the resistance of CHO cells to this bacterial toxin is not due to a paucity of LRP1 expression, but rather to the fact that the LRP1 receptor on this cell type is blocked by the RAP molecule 54. The finding that LRP1 serves as a major receptor for ricin provides an explanation for the resistance of CHO cells to the toxin and we assume that, as in the case of PE, CHO refractivity towards ricin stems from the blockage of the LRP1 binding site by RAP. It should be noted that LRP1 was reported to be differentially glycosylated in a tissue-specific manner, these differences affecting the receptor's stability 55. These variations in LRP1 glycosylation would probably affect the efficiency of LRP1-mediated ricin binding and uptake in different cells. LRP1 contains cysteine-rich complement-type repeats (CRs), epidermal growth factor (EGF) repeats, β-propeller domains, a transmembrane domain and a cytoplasmic domain. The CR modules are organized into four highly conserved clusters (clusters I-IV) 56. In this study we identified cluster II as the sole LRP1 binding site for ricin, and demonstrated that this binding is mediated by subunit B of the toxin. Clusters II and IV, which are responsible for the majority of ligand binding to the LRP1 receptor 40,57-59, are highly similar in their binding properties, displaying only minor differences regarding their kinetics of interactions 60. Huang et al. 61 suggested that the CR modules within these clusters present different charge densities and hydrophobic patches, which in turn lead to varying receptor-ligand interactions responsible for the different ligand specificity of each cluster. Certain ligands, such as ricin, recognize different combinations of the CRs located within a single binding cluster (cluster II), whereas others, such as alpha-2-macroglobulin, were found to bind to CRs located on different clusters (clusters II and IV). Identification of the CR regions on LRP1 that are important for binding ricin will allow the development of a specific inhibitor capable of preventing this interaction. The binding kinetics of ricin to cluster II were found to fit a 2:1 heterogeneous ligand model, in line with the fact that ricin has two lectin-binding domains within the B-subunit. The overall affinity values (K D ) of the two binding sites of ricin, 81 and 47 nM (K D 1 and K D 2, respectively), support the assumption that the two ricin-lectin binding sites interact with LRP1 in an independent manner. These K D values, which correspond to affinities two orders of magnitude higher than those commonly measured between lectins and oligosaccharides (normally in the millimolar range 62), can explain the predilection of ricin binding to the LRP1 receptor, rather than to other galactose residues on the cell surface. 
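To illustrate what the 2:1 heterogeneous-ligand description amounts to, the short Python sketch below treats the interaction as the sum of two independent 1:1 binding isotherms and refits synthetic equilibrium responses generated with the K D values quoted above. This is only an illustration of the model's form; the study itself fitted time-resolved Octet sensograms, and the concentrations, amplitudes and noise level used here are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_site_isotherm(c_nM, rmax1, kd1, rmax2, kd2):
    """Sum of two independent 1:1 binding isotherms (2:1 heterogeneous ligand)."""
    return rmax1 * c_nM / (kd1 + c_nM) + rmax2 * c_nM / (kd2 + c_nM)

# Synthetic equilibrium wavelength shifts built from the KD values quoted above
conc = np.array([10, 25, 50, 100, 200, 400, 800], dtype=float)   # ricin, nM
rng = np.random.default_rng(0)
shift = two_site_isotherm(conc, 0.6, 81.0, 0.5, 47.0) + rng.normal(scale=0.005, size=conc.size)

popt, _ = curve_fit(two_site_isotherm, conc, shift,
                    p0=[0.5, 100.0, 0.5, 50.0], bounds=(0, [5, 1000, 5, 1000]))
print(f"fitted KD1 ~ {popt[1]:.0f} nM, KD2 ~ {popt[3]:.0f} nM")
# Note: with two similar KDs the components are hard to separate from
# equilibrium data alone, which is why kinetic (association/dissociation)
# fits such as those used in the study are preferred.
```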
Ricin can bind to free galactose, as attested by the fact that toxin binding to LRP1 cluster II was abolished in the presence of excess of galactose. Nevertheless, the marked proclivity of ricin towards LRP1, as opposed to other cell-surface glycoproteins, clearly indicates that the toxin's interaction with glycans is strongly influenced by vicinal non-sugar structural elements. The interactions of other lectin toxins, were also found to be restricted to specific receptors, i.e. Shiga and Cholera toxins to the Gb3 and GM1 receptors, respectively 63,64 . Finding that LRP1 plays a major role in the intoxication process of ricin, it was of interest to determine whether our monoclonal neutralizing antibodies 15,16 impair the binding of ricin to LRP1. We found that anti-ricin MH77 monoclonal antibody reduces binding of ricin to cluster II by more than 70%, whereas other anti-ricin monoclonal antibodies, MH1, MH73 and MH75 did not prevent binding of the toxin to this cluster, even though two of them, MH73 and MH75, bind to non-overlapping epitopes located on the surface of RTB 15 . Though these 4 antibodies conferred similar survival rates to ricin-intoxicated mice when administered six hours post exposure, when mice were treated 24 hours post exposure, antibody MH77 provided significantly higher protection than the other 3 antibodies 16 . The fact that the most effective anti-ricin monoclonal antibody is the one that prevents the binding of ricin to LRP1, underscores the pivotal role of the LRP1 receptor in ricin intoxications. The toxicity of ricin depends on the route of exposure, pulmonary exposure being considered most dangerous 65 . Pathological studies of pulmonary ricin intoxications demonstrated that injury is confined to the lungs as manifested by perivascular, interstitial and alveolar edema, influx of neutrophils to the lungs and the mounting of an acute inflammatory response. Flooding of the lungs leads to respiratory insufficiency and death 66 . To probe the possible role of the LRP1 receptor in ricin-mediated pulmonary poisoning, we tested the contribution of LRP1 to ricin intoxication in a single cell suspension produced from mice lungs. Quantitation of LRP1 expression in different cell populations of the lungs, revealed a strong correlation between LRP1 expression levels in the different cell subtypes and ribosomal damage levels, i.e. 28 S rRNA depurination, in these cells following pulmonary exposure to ricin. This finding strongly suggests that the expression of LRP1 by lung cells dictates their sensitivity towards ricin. Moreover, confocal microscopy imaging of intoxicated lung SCSs revealed that over 80% of the ricin is linked to LRP1. Most importantly, treating those SCS with anti-LRP1 antibody prior to ricin exposure prevented their intoxication. These results demonstrates beyond doubt the importance of this receptor in pulmonary ricin intoxications. Studies carried out in our laboratory on a swine model for pulmonary ricinosis, demonstrated that the pathological state ensuing pulmonary exposure to ricin is that of acute respiratory distress syndrome (ARDS) 67 . Cumulative evidence suggests that the ectodomain of LRP1 is proteolytically cleaved from cell surfaces in different clinical pathologies including ARDS, releasing a soluble form of this receptor (sLRP1) [68][69][70][71] . sLRP1 maintains the ligand binding characteristics of cell-bound LRP1 and may therefore act as a competitive inhibitor of ligand binding and clearance by cell surface-associated LRP1. 
We now intend to assess whether LRP1 shedding occurs in lungs of animals exposed to ricin, and if so, whether this shedding of the main receptor for ricin has a beneficial or harmful contribution to pulmonary ricinosis. The present study has demonstrated for the first time that a plant toxin can act by binding to a receptor known to mediate binding and uptake of physiological ligands. Ricin toxin is capable of killing cells of many different animal species and of various tissues. In order to be broadly effective as a virulence factor it must make use of widely distributed and highly conserved molecules, such as the LRP1 receptor. We conclude that ricin is one of several ligands which use LRP1 to enter cells and that cells displaying this receptor on their surface are likely to be targets for its toxic effects. Materials and Methods Ricin and ricinus communis agglutinin (RCA) preparations. Crude ricin was prepared from seeds of endemic Ricinus communis, essentially as described before 72 . Briefly, seeds were homogenized in a Waring blender in 5% acetic acid/phosphate buffer (Na 2 HPO4, pH 7.4) the homogenate was centrifuged and the clarified supernatant containing the toxin was subjected to ammonium sulfate precipitation (60% saturation). The precipitate was dissolved in PBS and dialyzed extensively against the same buffer. For the preparation of purified ricin and Ricinus communis agglutinin (RCA), crude ricin was loaded consecutively onto 2 columns, the first column contains activated Sepharose which binds and thereby depletes the RCA. RCA bound to the Sepharose column was thereby purified following elution with 0.5 M Galactose in PBS. The flow-through of the activated Sepharose column (crude ricin) was loaded onto the second column, containing α-lactose (lactamyl) agarose (Sigma-Aldrich, Rehovot, Israel), and the column was washed in order to discard non-related impurities. Purified ricin was eluted from the lactamyl agarose column with 0.5 M Galactose in PBS. Animals. Animal experiments were performed in accordance with the Israeli law and approved by the Ethics Committee for Animal Experiments at the Israel Institute for Biological Research (Project Identification Codes M-12-15, M-9-19). Treatment of animals was in accordance with regulations outlined in the U.S. Department of Animal Welfare Act and the conditions specified in the National Institute of Health's Guide for Care and Use of Laboratory Animals. All animals in this study were female CD-1 mice (Charles River, Margate, UK) weighing 27-32 g. Mice were housed in filter-top cages in an environmentally controlled room, maintained at 21 ± 2 °C and 55 ± 10% humidity and had access to food and water ad libitum. Lighting was set to mimic a 12-hours:12-hours dawn-dusk cycle. Extraction of lung cell membranes. Lungs were harvested from terminally anesthetized mice (ketamine, 1.9 mg/mouse and xylazine, 0.19 mg/mouse) which were subjected to PBS perfusion via the heart. Lungs were then homogenized in TE buffer (100 mM Tris and 10 mM EDTA PH 7.5) containing 15% sucrose solution and centrifuged at 10,000 g for 30 minutes. The supernatant was ultracentrifuged at 100,000 g for 120 minutes at 4 °C to pellet the total membranes. The membrane pellet was dissolved in 3 ml of TE 40% sucrose solution. Continuous sucrose gradients were prepared by layering sucrose solutions 20-50% (prepared in TE buffer) into 14 ×89 mm ultracentrifuge tubes (Beckman, Indianapolis, IN, USA) including the membrane pellet dissolved in 40% sucrose. 
The membrane-pellet-containing sucrose gradients were ultracentrifuged at 100,000 g over night at 4 °C. The crude cell-membrane fraction located in the middle of the tubes, was collected with a syringe and washed three times with PBS. Blue Native gel electrophoresis. Membranes of mice lung cells were incubated with purified ricin for 2 hours, rinsed with PBS and centrifuged at 5000 RPM for 10 minutes. The pellet was solubilized with 2% dodecyl maltoside and resolved by Blue-Native PAGE 75 on 4-16% gradient gels (NativePage, Novex, ThermoFischer, Waltham, MA, USA) together with native high molecular weight markers (Amersham Biosciences, Bath, UK) on a BioRad protean II minigel system. Gels were run at 35 V for 30 min and at 350 V for 3 hours. After electrophoresis, half of the gels were transferred onto a PVDF membrane (Invitrogen). Membranes were blocked with 5% nonfat blotting grade blocker (170-6404; BioRad) in tris-buffered saline/Tween 20 for 1 hour at room temperature. For immunoblotting of lung cells-membranes, the membranes were incubated with rabbit-anti-ricin polyclonal antibody for 2 hours and then with anti-rabbit IgG horseradish peroxidase-linked antibody (Sigma-Aldrich) for 1 hour, developed with Clarity Western ECL Substrate (BioRad) and visualized by a chemiluminescence detection system (Fujifilm, LAS3000). The remaining membranes were stained with colloidal Coomassie stain (SimplyBlue, Invitrogen). From the stained gels, 3 positive bands for ricin (as visualized on the PVDF membranes) were excised with a scalpel and destained for further mass spectrometric analysis. Proteins in each band were reduced with 5 mM dithiothreitol and alkylated with 10 mM iodoacetamide (Sigma-Aldrich) in the dark for 30 min at 21 °C. Proteins were digested by rehydrating the gel pieces with 12.5 ng/µl trypsin (Promega, Madison, WI, USA) in 25 mM NH 4 HCO 3 at 4 °C for 10 min following by overnight incubation at 37 °C. Peptides were then extracted by addition of 50% (vol/vol) acetonitrile 5% (vol/vol) formic acid, vortex, sonication, centrifugation, and collection of the supernatant were performed. Samples were dried and stored at −80 °C until further analysis. Liquid chromatography and mass spectrometry. Liquid chromatography and mass spectrometry were performed by the de Botton Institute for Protein Profiling at The Nancy and Stephen Grand Israel National Center for Personalized Medicine (Weizmann Institute of Science, Rehovot, Israel) as previously described 76 . For identification purposes, raw data was first processed using Proteome Discoverer v1.41. MS/MS spectra were searched using Mascot v2.4 (Matrix Sciences, Chicago, IL, USA) and Sequest HT. Data were searched against Mus musculus protein database as downloaded from UniprotKB (http://wwww.uniprot.org/). Immunolocalization of ricin and LRP1 by confocal microscopy. Anti-ricin Ab was conjugated (1 mg) with the Alexa Fluor 594 protein labeling kit (Molecular Probes, Thermo Fisher Scientific, Waltham, MA, USA) according to the manufacturer's instructions. For immunolocalization experiments, HEK293 cells or a single cell suspension (SCS) from mice lungs were seeded on #1 glass cover slips in 24-well dishes and exposed to ricin (100 ng/ml). Cells were fixed with 4% paraformaldehyde (PFA, Gadot, Israel) for 10 min at 4 °C, washed three times with PBS, and placed for 1 hour in a blocking solution (10% normal goat serum (NGS)) in PBS containing 0.05% Tween-20 (P5927, Sigma-Aldrich).
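The co-localization percentages reported in the Results (e.g., ~90% of membrane-bound ricin overlapping LRP1) are typically derived from two-channel confocal images using Manders-style co-occurrence coefficients. Since the excerpt above does not state which coefficient or thresholding scheme was used, the following Python sketch is only one plausible way to obtain such numbers; the thresholds and the toy image are invented.

```python
import numpy as np

def manders_fractions(ricin, lrp1, t_ricin=0.1, t_lrp1=0.1):
    """Manders-style co-occurrence from two background-subtracted channels.

    Returns (% of ricin signal overlapping LRP1, % of LRP1 signal overlapping ricin).
    Thresholds are illustrative; real analyses usually derive them per image.
    """
    m_r, m_l = ricin > t_ricin, lrp1 > t_lrp1
    pct_ricin_on_lrp1 = 100 * ricin[m_r & m_l].sum() / max(ricin[m_r].sum(), 1e-12)
    pct_lrp1_on_ricin = 100 * lrp1[m_r & m_l].sum() / max(lrp1[m_l].sum(), 1e-12)
    return pct_ricin_on_lrp1, pct_lrp1_on_ricin

# Toy two-channel field, partially co-localized by construction
rng = np.random.default_rng(1)
lrp1 = rng.random((512, 512))
ricin = 0.7 * lrp1 + 0.3 * rng.random((512, 512))
print(manders_fractions(ricin, lrp1, t_ricin=0.5, t_lrp1=0.5))
```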
2023-02-20T14:55:50.604Z
2020-06-02T00:00:00.000
{ "year": 2020, "sha1": "13fc1ec1ec3790e372c848c67915493a344af483", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-65982-2.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "13fc1ec1ec3790e372c848c67915493a344af483", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
201848944
pes2o/s2orc
v3-fos-license
Gestational diabetes risk in a multi-ethnic population Aims To compare gestational diabetes mellitus (GDM) risk among two ethnic minority groups, with high type-2 diabetes (T2DM) prevalence, as compared to the Jewish population majority group. Methods A historical cohort study was conducted using clinical data collected between January 1, 2007, and December 31, 2011. The study sample included 20–45-year-old women; 2938 Ethiopian, 5849 Arab and 5156 non-Ethiopian Jewish women. GDM was defined according to the two-step strategy: step 1: glucose ≥ 140 mg/dl and step 2: using Coustan and Carpenter’s diagnostic criteria. GDM risk was tested in a multivariable model, adjusted for age, parity and pre-gestational values of the metabolic syndrome components. Results Mean body mass index (BMI) values and morbid obesity rates were lowest among Ethiopian women and highest among Arab women. The prevalence of pre-gestational diabetes was significantly higher among Ethiopian (2.7%) and Arab (4.1%) women than among non-Ethiopian Jewish women (1.6%), and GDM screening rates were relatively high (85.5%, 87.2% and 83%, respectively). The proportion of pregnancies complicated with GDM was higher among Ethiopian women (4.3%) but not significantly different between Arab (2.9%) and non-Ethiopian Jewish (2.2%) women. In multivariable analysis, GDM was associated with Ethiopian ancestry (OR, 2.55; 95% CI, 1.60–4.08), adjusted for age, BMI, plasma triglyceride level and parity. Arab ethnicity was not significantly associated with GDM risk in multivariable analysis. Conclusions Both Ethiopian and Arab minority ethnicities have a higher risk of T2DM in comparison with other Israeli women, but only Ethiopian origin is an independent risk factor for GDM while Arab ethnicity is not. Introduction Gestational diabetes mellitus (GDM) defined as glucose intolerance first diagnosed during pregnancy is associated with a higher risk of adverse obstetric and perinatal outcomes [1,2]. Moreover, women with GDM have sevenfold greater risk of developing type-2 diabetes (T2DM) 5-10 years after delivery [3], and offspring of mothers with GDM have higher obesity and diabetes mellitus (DM) rates later in life [4,5]. 3 The prevalence of GDM and T2DM differs among ethnic minority groups [6,7]. In the USA, Asian and Filipino women have higher prevalence of GDM and T2DM compared to non-Hispanic White women, while African-American women have higher prevalence of T2DM but not of GDM [8,9]. Arabs are the largest ethnic minority group in Israel, accounting for 21% of the population [10]. Arab women have high prevalence of obesity and central obesity [11] and higher risk of T2DM compared to the Jewish female majority population [12]. Ethiopian Jews have immigrated to Israel since 1984 and account for 1.7% of the population [13]. On arrival to Israel, the prevalence of DM among Ethiopian Jews was less than 1% [14] and increased rapidly thereafter. Recent studies reported higher prevalence of DM among Ethiopian Jewish women at reproductive age and lower prevalence of obesity compared to the majority group of non-Ethiopian Jewish women in Israel [15,16]. Data are lacking on the prevalence and risk factors for GDM among Arab and Ethiopian Jewish women. These data are pertinent for pre-pregnancy prevention and early detection of GDM and for timely treatment, and are thus the focus of the current study. 
Methods A historical cohort study was conducted using clinical data collected between January 1, 2007, and December 31, 2011, in the electronic medical records of Clalit Health Services (CHS) database. The study sample included women who were 20-45 years old on January 1, 2008, residents of the mostly urban Sharon and Hadera districts in central Israel, and insured by CHS. The sample was stratified by ethnicity and included women of Ethiopian ancestry, Arab women and non-Ethiopian Jewish women. CHS is the largest health plan in Israel and insures more than 86% of Ethiopian Jews, 76% of Arabs and 46% of non-Ethiopian Jews in the two districts. Data collected included demographics, laboratory test results, chronic medical therapy, hospital admissions and chronic diagnoses. We included information on live births between January 1, 2008, and December 31, 2011 (the study period). For most women (98%), GDM diagnosis was based on a 2-step screening protocol of 50 g oral glucose challenge test (GCT) and 100 g 3-h oral glucose tolerance test (OGTT). GDM was defined by 1-h post-GCT plasma glucose (PG) ≥ 200 mg/dl, or 1-h post-GCT PG ≥ 140 mg/dl and < 200 mg/dl and at least two plasma glucose values equal or greater than the plasma glucose thresholds set by Carpenter and Coustan glucose thresholds in 3-h OGTT: fasting-95 mg/dl; 1 h-180 mg/dl; 2 h-155 mg/dl, or 3 h-140 mg/dl [17]. Pre-gestational diabetes was defined in non-pregnant women by physician's diagnosis of DM, purchases of three or more hypoglycemic drug prescriptions, or at least two values of HbA1c, fasting or post-75 g oral glucose load plasma glucose within the DM range [18]. We included all births during the study period to calculate the proportion of pregnancies complicated with GDM, while only first births complicated with GDM were used for GDM risk analysis. Births among women with pre-gestational diabetes were excluded from the current analysis. Statistical analysis Comparisons of baseline characteristics between non-Ethiopian Jewish women (the reference group) and women of the two ethnic minority groups were carried out, using appropriate contrasts in a mixed linear model for continuous variables, and the Chi-square test for discrete variables with Bonferroni correction for multiple comparisons. The total number of pregnancies in the three ethnic groups was compared using Poisson regression. Proportions of pregnancies screened for GDM were compared using repeated measured logistic regression. The association between ethnicity and GDM risk was tested in multiple logistic regression analysis, adjusted for age, parity (number of children ≤ 18 year on 01/01/2008), whether it was a single or multiple pregnancy, and pre-gestational levels (for women who gave birth) or first values recorded during follow-up (for other women) of the metabolic syndrome components other than plasma glucose (i.e., fasting plasma triglycerides and HDL cholesterol, systolic blood pressure, and body mass index (BMI). Missing values were treated by multiple imputation approach. The CHS institutional ethics committee approved the study protocol. In accordance with the Israeli Ministry of Health regulations, informed consent was not required because all identifying information had been removed from the study dataset. Compared to non-Ethiopian Jewish women, the number of births per woman during the study period was lower among Ethiopian women (0.36 vs. 0.48; (p < 0.001) and higher among Arab women (0.53, p = 0.001) ( Table 1). 
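The two-step diagnostic rule described above can be summarised in a few lines of code. The sketch below is only an illustration of that logic (the function name and data layout are ours, not the study's analysis code); the thresholds are the Carpenter-Coustan values quoted in the text, in mg/dl.

# Minimal sketch of the two-step GDM screening logic described above.
# Illustrative only: function name and data layout are assumptions; the
# thresholds are the Carpenter-Coustan values quoted in the text (mg/dl).
CC_THRESHOLDS = {"fasting": 95, "1h": 180, "2h": 155, "3h": 140}

def gdm_two_step(gct_1h, ogtt=None):
    """gct_1h: 1-h plasma glucose after the 50 g GCT (mg/dl);
    ogtt: dict with 'fasting', '1h', '2h', '3h' values from the 100 g 3-h OGTT, or None."""
    if gct_1h >= 200:                      # step 1 alone is diagnostic
        return True
    if 140 <= gct_1h < 200 and ogtt is not None:
        exceeded = sum(ogtt[k] >= CC_THRESHOLDS[k] for k in CC_THRESHOLDS)
        return exceeded >= 2               # at least two values at or above the thresholds
    return False

print(gdm_two_step(152, {"fasting": 97, "1h": 185, "2h": 140, "3h": 120}))  # True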
Arab women had the highest performance rate of GDM screening (87.2%) of the three ethnic groups, while the screening rates did not significantly differ between Ethiopian and non-Ethiopian Jewish women (85.5% and 83.0%, respectively) ( Table 1). Compared to non-Ethiopian Jewish women, the proportion of pregnancies complicated with GDM was higher among Ethiopian women (4.3% vs. 2.2%; p < 0.001) but did not differ significantly among Arab women (2.9%; p = 0.12). Characteristics of pregnant women by ethnicity In all ethnic groups, women with GDM were older and had higher values of BMI, plasma triglycerides and systolic blood pressure (Table 2). Ethiopian Jewish woman with GDM were, on average, 3.5 years older than reference women with GDM and had significantly lower mean (± SD) BMI 25.0 ± 4 versus 30.6 ± 5 kg/m 2 , respectively. GDM risk factors Adjusted for age, BMI, parity and pre-pregnancy values of systolic blood pressure, plasma triglycerides and HDL cholesterol, Ethiopian ancestry was associated with higher likelihood for GDM [odds ratio (OR) 2.55; 95% confidence interval (CI) 1.6-4.1], while Arab ethnicity was not (OR 1.43; 95% CI 0.95-2.15 p = 0.087). Other factors associated with greater risk of GDM included older age, higher BMI and higher plasma triglycerides levels, while parity was associated with lower risk (Table 3). Systolic blood pressure and HDL cholesterol were not significantly associated with GDM. There were 82 live twin births. Further adjustment for multiple versus single pregnancy did not materially change the point estimates in the multivariable model (data not shown). Discussion We found that Ethiopian women had a 2.5-fold greater risk of GDM compared to non-Ethiopian Jewish women, independent of maternal age, body weight, blood pressure and dyslipidemia. In fact, Ethiopian women with GDM had significantly lower body weight compared to reference and Arab women with GDM. The higher risk of GDM among Ethiopian women is in line with recent studies showing high risk of adult-onset DM among Ethiopian Jews younger than 50 years of age, particularly women [15,16,19]. In our study, Arab ethnicity was not found to be significantly associated with a greater risk of GDM, although Arab women had higher prevalence of pre-gestational diabetes compared to non-Ethiopian Jewish women. Previous studies have shown that Arab men and women are at greater risk of T2DM [12,20]. Possibly, this study was underpowered to show a smaller ethnic difference in GDM risk among Arab and non-Ethiopian Jewish women. Other studies examining ethnic minorities in the USA found that African-American women also have higher prevalence of T2DM and similar prevalence of GDM compared to non-Hispanic White women [9]. Lawrence et al. [21] suggested that the differences in the effect of ethnicity on GDM versus T2DM risk might be due to a higher proportion of ethnic minority women with pre-gestational diabetes, leaving a smaller fraction of the population at risk of GDM. Ethnic disparities in GDM risk have been reported in other populations. Filipino and Asian women have a significantly higher prevalence of GDM and T2DM compared to non-Hispanic White Americans, even in normal and low BMI categories [9]. The mechanism for the different effect of BMI and ethnicity on GDM and T2DM risk is unclear. 
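Because the adjusted estimates come from a logistic model, each reported odds ratio and confidence interval maps back to a log-odds coefficient and an approximate standard error. The short check below is our illustration, not the authors' code, applied to the Ethiopian-ancestry estimate.

# Back-of-the-envelope consistency check: an OR and 95% CI from a logistic model
# imply beta = ln(OR) and SE ~ (ln(upper) - ln(lower)) / (2 * 1.96).
import math

or_point, ci_low, ci_high = 2.55, 1.60, 4.08   # adjusted OR for Ethiopian ancestry
beta = math.log(or_point)
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
print(f"beta = {beta:.2f}, SE = {se:.2f}, z = {beta / se:.1f}")
# beta ~ 0.94, SE ~ 0.24, z ~ 3.9 -- consistent with the significant association reported.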
Differences in ethnic-related body composition and fat distribution have been suggested in the Multicultural Community Health Assessment Trial (M-CHAT) [22], Multi-Ethnic Study of Atherosclerosis (MESA) [23] and Mediators of Atherosclerosis in South Asians Living in America (MASALA) studies [24]. Other genetic factors that modulate insulin resistance and β cell function may also play a role. South Asians have higher values of insulin resistance and lower values of β-cell function than other ethnic groups (Chinese, African-American, Latino and non-Hispanic Whites) even after adjusting for age and adiposity [25]. Similar to other studies, we also found that age, BMI and plasma triglyceride levels were significantly associated with a higher risk of GDM [26][27][28]. In contrast to previous studies [29], we found no association between systolic blood pressure and GDM risk. This may be explained by imperfect standardization of blood pressure measurements performed and recorded in a clinical setting. We found that parity was associated with lower risk of GDM, after controlling for the effect of age and BMI. Recently, Sweeting et al. [30] reported that parous women without a history of GDM had a lower risk of GDM in subsequent pregnancies. Seventy-two percent of the pregnant women in our study were multiparous. It is conceivable that women with a history of GDM before the study period were more likely to develop T2DM and were therefore excluded from this study. The proportion of pregnancies complicated with GDM in our study (2.2% among non-Ethiopian Jewish women) was somewhat lower than previously reported in Israel. The proportion of pregnancies complicated with GDM reported by Sella et al. was 4.3% in a mostly Jewish population of women insured by the second largest health plan in Israel [28]. The higher GDM rates reported by Sella et al. compared to the current study may be partially explained by a higher proportion of women diagnosed with a one-step screening test, using 3-h 100 g OGTT (9% vs. 2%), women without a positive OGTT who initiated insulin treatment after GDM screening (9.8% vs. 0%) and a slightly older mean age (31.4 vs. 30.9 years, respectively) [31]. In the current study, GDM diagnosis was based on the two-step strategy, using the Carpenter and Coustan diagnostic criteria, which was the most common practice in Israel during the study period. These criteria are less sensitive compared to those of International Association of Diabetes in Pregnancy Study Groups/World Health Organization (IADPSG/WHO) [18]. The HAPO study showed that the association between maternal hyperglycemia and adverse maternal and fetal outcomes is continuous, without a clearcut threshold [32,33]. Thus, plasma glucose levels that are lower than the glucose cutoffs recommended by Carpenter-Coustan are still associated with significant risk of adverse maternal and fetal outcomes. Differences in GDM prevalence and in the relative diagnostic importance of fasting, 1-h and 2-h plasma glucose were observed across the ethnically diverse centers of the HAPO study [34]. Adopting the IADPSG/WHO diagnostic criteria for GDM is expected to significantly increase the number pregnancies diagnosed with GDM [35]. This study has few limitations: The analysis was based on data collected for clinical and administrative purposes. 
However, there is a high utilization rate of prenatal care services in Israel (i.e., free access to family physicians, obstetric care and laboratory testing), so underestimation of GDM prevalence is unlikely. The statistical analyses were based on live births only and did not include stillbirths. Our database did not include information on GDM in previous pregnancies, family history of DM, socioeconomic status or lifestyle habits (diet and physical activity), all of which are significant risk factors for GDM. Nevertheless, this is the first population-based study that provides epidemiological data on the proportion of pregnancies complicated with GDM and the risk determinants for GDM in two ethnic minority groups living in the same region, Ethiopian and Arab women, including comparisons with the majority non-Ethiopian Jewish population. Conclusions We have shown that Ethiopian women are at greater risk of GDM despite having a lower mean BMI. Special efforts should be directed toward prevention and early diagnosis of GDM among Ethiopian women to reduce the maternal and fetal adverse outcomes associated with impaired glucose metabolism in pregnancy. Indeed, as already suggested in the article "Diabetes in pregnancy" [36], the goal is to improve pregnancy outcomes in women with gestational diabetes through sustainable policies of screening and treatment. Further research is needed to understand the higher susceptibility to GDM among Ethiopian women despite their lower BMI.
2019-09-07T14:39:14.655Z
2019-09-07T00:00:00.000
{ "year": 2019, "sha1": "80bb863b8600aea0a386e401ec654ba5e8f3069f", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s00592-019-01404-8.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "80bb863b8600aea0a386e401ec654ba5e8f3069f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257300883
pes2o/s2orc
v3-fos-license
Exploration of the phase diagram within a transport approach . We study equilibrium as well as out-of-equilibrium properties of the strongly interacting QGP medium under extreme conditions of high temperature T and high baryon densities or baryon chemical potentials µ B within a kinetic approach. We present the thermodynamic and transport properties of the QGP close to equilibrium in the framework of e ff ective models with N f = 3 active quark flavours such as the Polyakov extended Nambu-Jona Lasinio (PNJL) and dynamical quasiparticle model with the CEP (DQPM-CP). Considering the transport coe ffi cients and the EoS of the QGP phase, we compare our results with various results from the literature. Furthermore, out-of equilibrium properties of the QGP medium and in particular, the e ff ect of a µ B - dependence of thermodynamic and transport properties of the QGP are studied within the Parton-Hadron-String-Dynamics (PHSD) transport approach, which covers the full evolution of the system during HICs. We find that bulk observables and flow coe ffi cients for strange hadrons as well as for antiprotons are more sensitive to the properties of the QGP, in particular to the µ B - dependence of the QGP interactions. Introduction It is known that the evolution of the deconfined QCD phase in ultra-relativistic heavy-ion collisions has been successfully described within hydrodynamic simulations and hybrid methods [1][2][3][4].However, only a microscopic treatment can provide a proper non-equilibrium description of the entire dynamics through possibly different phases up to the final asymptotic hadronic states.Here we report on a recent progress made within the PHSD transport approach [5][6][7][8][9], which is an off-shell transport approach based on the Kadanoff-Baym equations in first-order gradient expansion.This approach sequentially describes the full evolution of relativistic heavy-ion collisions from the initial hard collisions and formation of strings, the deconfinement with a dynamic phase transition to a strongly interacting QGP, to hadronization and subsequent interactions in the expanding hadronic phase.While the hadronic part is essentially equivalent to the conventional HSD approach [10], the microscopic properties of the QGP phase are described by the DQPM, which is based on the lQCD data and allows to interpret the equations of state (EoS) in terms of dynamical degrees of freedom and furthermore allows to evaluate the cross sections of the corresponding in/elastic reactions.The PHSD transport approach well describes observables from p+A and A+A collisions from SPS to LHC energies including electromagnetic probes [9].In order to tackle the new challenge -i.e. the evolution of the partonic systems at finite µ Bthe PHSD approach has been extended to incorporate partonic quasiparticles and their differential cross sections that depend not only on T as in the previous PHSD studies, but also on µ B explicitly [11].Within this extended approach, the 'bulk' observables in HICs for different energies -from AGS to RHIC -for symmetric and asymmetric Au+Au/Pb+Pb collisions have been studied.Only a small influence of the µ B modification of the parton properties (masses and widths) and their interaction cross sections has been found in bulk observables. Furthermore, in Ref. 
[12] we extended our study to more sensitive observables, such as collective flow coefficients and the manifestation of the µ_B dependence of the partonic cross sections in those flow coefficients. In addition, we explore the relations between the in- and out-of-equilibrium QGP by means of transport coefficients and collective flows. Transport properties of the QGP at finite µ_B We present transport coefficients of the QGP medium at finite µ_B, where the phase transition possibly changes from a crossover to a first-order one. Due to the notorious difficulty of estimating transport coefficients at finite µ_B in lattice QCD, it is necessary to resort to effective models which describe the chiral phase transition. It is important to note that while most of the models have a similar EoS, which agrees well with available lattice data, predictions for transport coefficients of the QGP can vary significantly already at µ_B = 0 [11,13-16]. In Refs. [15,16] we have evaluated transport coefficients of the QGP medium for a wide range of µ_B for two models with a similar phase structure: the extended N_f = 3 PNJL model and the dynamical quasiparticle model with a hypothetical CEP (DQPM-CP) located at µ_B = 0.96 GeV. The shear and bulk viscosities for quasiparticles with medium-dependent masses m_i(T, µ_q) can be derived from the Boltzmann equation in the relaxation-time approximation (RTA) through the relaxation time; here q = (u, d, s), d_q = 6 and d_g = 16 are the spin and color degeneracy factors for quarks and gluons, respectively, τ_i are the corresponding relaxation times, and c_s is the speed of sound at fixed µ_B. The relaxation times are evaluated from the interaction rates by calculating the partonic differential cross sections as a function of T and µ_B for the leading tree-level diagrams [11]; the relative velocity in the c.m. frame and the degeneracy factor d_j for spin and color enter these rates. The specific shear viscosity of the QGP matter is shown in Fig. 1 as a function of the scaled temperature T/T_c at µ_B = 0 (left) and at finite µ_B (right). At µ_B = 0 we show results from the DQPM [16] (solid red line), in comparison with the lQCD results for pure SU(3) gauge theory [17-19], model-averaged results from a Bayesian analysis of the experimental heavy-ion data [21] (grey area), and the η/s employed in hydrodynamic simulations in [22] (dashed blue line). For finite µ_B ≥ 0 we compare the results from the PNJL and DQPM-CP models obtained with the RTA approach using the interaction rate. The estimates from both models show an increase of the specific shear viscosity η/s and the electric conductivity σ_QQ/T with µ_B. While the results for η/s agree for moderate µ_B in the vicinity of the phase transition, there is a clear difference in σ_QQ/T, essentially due to the different description of the partonic degrees of freedom [16]. Fig. 1. (Left) lQCD results [17] (green triangles and magenta circles) [18], (cyan stars) [19]; the red line corresponds to the DQPM results [16], while the dashed blue line displays the η/s parametrisation used in hydrodynamic simulations within MUSIC in [22]; the dash-dotted gray line shows the Kovtun-Son-Starinets bound (η/s)_KSS = 1/(4π) [20]; the grey area represents the model-averaged results from a Bayesian analysis of experimental heavy-ion data [21]. (Right) η/s as a function of T/T_c(µ_B) at finite µ_B: DQPM-CP results [16] (solid lines) are compared to the estimates from the N_f = 3 PNJL model (dashed lines) [15].
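Because the RTA expressions themselves did not survive the text extraction, the sketch below writes out the standard RTA shear-viscosity integral in a deliberately simplified form: Boltzmann statistics, one effective species, a fixed mass and a constant relaxation time. The full DQPM/PNJL evaluations use T- and µ_B-dependent masses, widths and interaction rates, so this is only an illustration of the structure of the estimate, not a reproduction of Fig. 1.

# Simplified RTA shear viscosity, eta = d/(15 T) * Int d^3p/(2 pi)^3 (p^4/E^2) tau f(E),
# with a Boltzmann distribution f = exp(-E/T) and a constant relaxation time tau.
# All inputs below (tau, the single effective mass, the degeneracy) are toy assumptions.
import numpy as np

def eta_rta(T, mass, tau, degeneracy):
    p = np.linspace(1e-4, 50.0 * T, 20000)            # momentum grid (GeV)
    dp = p[1] - p[0]
    E = np.sqrt(p**2 + mass**2)
    integrand = (p**2 / (2.0 * np.pi**2)) * (p**4 / E**2) * np.exp(-E / T)
    return degeneracy * tau / (15.0 * T) * integrand.sum() * dp

T, tau = 0.3, 1.0                                      # GeV and 1/GeV (assumed values)
g = 16 + 36                                            # gluons + 3 flavours of quarks/antiquarks
eta = eta_rta(T, mass=0.0, tau=tau, degeneracy=g)
n = g * T**3 / np.pi**2                                # massless Boltzmann number density
print(eta, 0.8 * n * T * tau)                          # massless analytic limit: eta = (4/5) n T tau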
Evolution of the QGP in the PHSD transport approach To investigate the sensitivity of 'bulk' observables as well as the flow coefficients of different hadrons produced in HICs to the modification of the partonic interactions and their transport properties at non-zero baryon density, we have considered the following two settings for the transport simulations: • PHSD5.0 - µ_B = 0 (dashed blue lines): the pole masses and widths of quarks and gluons depend only on T; however, the differential and total partonic cross sections are obtained from calculations of the leading-order Feynman diagrams employing the effective propagators and couplings g^2(T/T_c) from the DQPM at µ_B = 0 [11]. Thus, the cross sections depend explicitly on the invariant energy of the colliding partons √s and on T. This is realized in PHSD5.x by keeping µ_B = 0. • PHSD5.0 - µ_B (solid red lines): the pole masses and widths of quarks and gluons depend on T and µ_B explicitly; the differential and total partonic cross sections are obtained from calculations of the leading-order Feynman diagrams from the DQPM and explicitly depend on √s, T and µ_B. This is realized in the full version of PHSD5.x [11]. Figure 2 (right) compares the resulting transverse momentum distributions, obtained with partonic cross sections and parton masses calculated for µ_B = 0 (dashed blue lines) and with cross sections and parton masses evaluated at the actual chemical potential µ_B in each individual space-time cell (solid red lines), to the experimental data from the STAR collaboration [23]. Summary We find that the HIC results from the extended PHSD transport approach, in which the QGP transport coefficients show a noticeable T and µ_B dependence, are in agreement with the BES STAR data for bulk observables [11] and the elliptic flow of charged particles [12], and agree reasonably with the results from the hybrid approach [22]. It is important to note that the η/s used for the hydrodynamic evolutions is close to the DQPM estimates, as shown in Fig. 1 (left). However, the results from the PHSD transport approach show a smaller influence of the µ_B dependence of the QGP interactions on the elliptic flow than the hybrid simulations do. This small sensitivity of the final observables to the influence of baryon density on the QGP dynamics can be explained by the fact that at high energies, where the matter is dominated by the QGP phase, one probes the QGP at very small µ_B, whereas at lower energies, where µ_B becomes larger, the fraction of the QGP drops rapidly (see Fig. 2 (left)). Therefore, the final observables at lower energies of the order of 1-10 GeV are dominated by hadrons which participated in hadronic rescattering, and thus the information about their QGP origin is washed out or lost. Figure 2. (Left) The QGP energy fraction from the PHSD as a function of time in central (impact parameter b = 2.2 fm) Au+Au collisions for different √s_NN = 200 − 3 GeV at midrapidity (|y| < 1). (Right) The transverse momentum distributions for 0-5% central Au+Au collisions at √s_NN = 27 GeV and midrapidity (|y| < 1) for PHSD5.2 with partonic cross sections and parton masses calculated for µ_B = 0 (dashed blue lines) and with cross sections and parton masses evaluated at the actual µ_B in each individual space-time cell (solid red lines), in comparison to the experimental data from the STAR collaboration [23].
2022-09-22T18:05:10.713Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "0f500117280b5802317250cd9e51f3dccb81e2f4", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2023/02/epjconf_sqm2022_01025.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "07c077deb6a9b99649863e07e229090e18d490c7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
14497508
pes2o/s2orc
v3-fos-license
A photometric catalogue of galaxies in the cluster Abell 85 We present two catalogues of galaxies in the direction of the rich cluster \a85. The first one includes 4,232 galaxies located in a region $\pm 1^\circ$ from the cluster centre. It has been obtained from a list of more than 25,000 galaxy candidates detected by scanning a Schmidt photographic plate taken in the \bj band. Positions are very accurate in this catalogue but magnitudes are not. This led us to perform CCD imaging observations in the V and R bands to calibrate these photographic magnitudes. A second catalogue (805 galaxies) gives a list of galaxies with CCD magnitudes in the V and R bands for a much smaller region in the centre of the cluster. These two catalogues will be combined with a redshift catalogue of 509 galaxies (Durret et al. 1997; astro-ph/9709298) to investigate the cluster properties at optical wavelengths (Durret et al. in preparation), as a complement to our previous X-ray studies (Pislar et al. 1997, Lima-Neto et al. 1997). Introduction ABCG 85 is a very rich cluster located at a redshift z=0.0555. We performed a detailed analysis of this cluster from the X-ray point of view, based on Einstein IPC data Send offprint requests to: E. Slezak, slezak@obs-nice.fr ⋆ Based on plates scanned with the MAMA microdensitometer at CAI, Paris and on observations collected at the European Southern Observatory, La Silla, Chile ⋆⋆ Tables 1 and 2 are only available in electronic form at the CDS via anonymopus ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/Abstract.html (Gerbal et al. 1992 and references therein). In the optical, no photometric data were available at that time, except for an incomplete photometric catalogue by Murphy (1984), and about 150 redshifts were published in the literature only after we completed our first X-ray analysis (Beers et al. 1991, Malumuth et al. 1992. We therefore undertook a more complete analysis of this cluster, with the aim of obtaining both photometric and redshift data at optical wavelengths and better X-ray data from the ROSAT data bank . We present here our photometric data. The redshift catalogue is published in a companion paper (Durret et al. 1997a) and the analysis of all these combined optical data will be presented in Paper III (Durret et al. in preparation). Method for obtaining the catalogue We decided to obtain a photometric catalogue of the galaxies in the direction of the Abell 85 cluster of galaxies by first processing the field 681 in the SRC-J Schmidt atlas. This blue glass copy plate (IIIaJ+GG385) was investigated with the MAMA (Machineà Mesurer pour l'Astronomie) facility located at the Centre d'Analyse des Images at the Observatoire de Paris and operated by CNRS/INSU (Institut National des Sciences de l'Univers). In order to also get information on the neighbouring galaxy distribution, the central 5 • × 5 • area has been searched for objects using the on-line mode with the 10 µm step size available at that time. The involved algorithmic steps are well-known. They can be summarized as follows : first a local background estimate and its variance are computed from pixel values inside a 256 × 256 window, then pixels with a number of counts higher than the background value plus three times the variance are flagged, which leads to define an object as a set of con-nected flagged pixels; an overlapping zone of 512 pixels is used in both directions for each individual scan. 
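The detection scheme summarised above (a local background and variance per 256 × 256 window, a threshold at background plus three times the variance, and objects defined as connected sets of flagged pixels) can be sketched as follows. This is an illustrative reconstruction, not the MAMA pipeline: the per-tile median/std background and the minimum-area cut are simplifying assumptions.

# Illustrative sketch of the plate-scan detection logic described above.
import numpy as np
from scipy import ndimage

def count_detections(image, box=256, nsigma=3.0, min_area=5):
    flagged = np.zeros(image.shape, dtype=bool)
    ny, nx = image.shape
    for y0 in range(0, ny, box):
        for x0 in range(0, nx, box):
            tile = image[y0:y0 + box, x0:x0 + box]
            bkg, sigma = np.median(tile), np.std(tile)          # crude local background estimate
            flagged[y0:y0 + box, x0:x0 + box] = tile > bkg + nsigma * sigma
    labels, nobj = ndimage.label(flagged)                        # connected flagged pixels
    areas = np.bincount(labels.ravel())[1:]                      # pixels per labelled object
    return int(np.sum(areas >= min_area))                        # keep objects larger than a few pixels

rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, (1024, 1024))
img[500:510, 500:510] += 60.0                                    # one artificial source
print(count_detections(img))                                     # expected: 1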
Although this method may appear rather crude, its efficiency is nevertheless quite high for properly detecting and measuring simple and isolated objects smaller than the background scale. The region where ABCG 85 is located is not crowded by stellar images (b II ≃ −72 • ), so that most of the objects larger than a few pixels can indeed be detected this way. The result was a list of more than 10 5 objects distributed over the ∼ 25 square degrees of the field bounded by 0 h 31 mn 30.4 s < α < 0 h 53 mn 10.6 s and −12 • 18'19.43" < δ < −7 • 05'13.88" (equinox 2000.0, as hereafter), with their coordinates, their shape parameters (area, elliptical modelling) and two flux descriptors (peak density, sum of background-subtracted pixel values). The astrometric reduction of the whole catalogue was performed with respect to 91 stars of the PPM star catalogue (Roeser & Bastian 1991) spread over the field, using a 3 rd -order polynomial fitting. The residuals of the fit yielding the instrumental constants were smaller than 0.25 arcsecond and the astrometry of our catalogue indeed appears to be very good, as confirmed by our multi-object fibre spectroscopy where the galaxies were always found to be very close (< 2.0 arcseconds, i.e. 3 pixels) to the expected positions. Since the required CCD observations were not available at that time, a preliminary photometric calibration of these photographic data has been done using galaxies with known total blue magnitude. The magnitude of stars is certainly much easier to define, but such high-surface brightness objects suffer from severe saturation effects on Schmidt plates when they are bright enough to be included in available photometric catalogues. So, 83 galaxies were selected from the Lyon Extragalactic Database (LEDA) in order to compare their magnitude to their measured blue flux. A small region around each of these objects was scanned and this image has been used: i) to identify the object among its neighbours within the coordinate list and ii) to assess the quality of the flux value stored in the on-line catalogue with respect to close, overlapping or merged objects. The 74 remaining undisturbed objects identified with no ambiguity came from eight different catalogues in the literature. Whatever the intrinsic uncertainties about the integrated MAMA fluxes are, systematic effects were found with respect to the parent catalogue in a flux versus magnitude plot, as well as discrepancies for some objects between the LEDA and the Centre de Données Astronomiques de Strasbourg (CDS) databases. Consequently, three catalogues including 12 objects were removed and the LEDA magnitude of 5 objects was replaced by a CDS value which seems in better agreement with their aspect and with the overall trend when compared to similar objects. Later, 7 objects far from the overall trend were discarded. These successive rejections resulted in a set of 55 objects distributed over a six magnitude range. The magnitude zero-point for our photographic catalogue was obtained by plotting the flux of these objects against their expected magnitude. A rms scatter of 0.34 mag was computed around the linear fit. Classification of the objects Most of the diffuse objects included in our main catalogue were automatically selected according to their lower surface brightness when compared to stars. 
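The zero-point step described above reduces to a linear fit of catalogue magnitude against the logarithm of the integrated plate flux, with the rms of the residuals quantifying the calibration scatter (0.34 mag in this case). The snippet below is a generic illustration with invented fluxes, not the actual calibration data.

# Generic zero-point fit: mag ~ slope * (-2.5 log10 flux) + ZP, plus rms of the residuals.
import numpy as np

def fit_zero_point(flux, mag):
    x = -2.5 * np.log10(flux)
    slope, zp = np.polyfit(x, mag, 1)
    resid = mag - (slope * x + zp)
    return slope, zp, float(np.sqrt(np.mean(resid**2)))

rng = np.random.default_rng(1)
true_mag = rng.uniform(14.0, 20.0, 55)                      # 55 calibrators, as in the text
flux = 10 ** (-0.4 * (true_mag - 25.0)) * rng.lognormal(0.0, 0.3, 55)
print(fit_zero_point(flux, true_mag))                       # slope ~ 1, ZP ~ 25, rms ~ 0.3 mag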
As usual for glass copies of survey plates, the discrimination power of this brightness criterion drops sharply for objects fainter than approximately 19th magnitude, and so does the completeness of the resulting catalogue if no contamination is allowed for. The number of galaxy candidates brighter than this limit within the investigated area appeared, however, to be already large enough to give a much better view of the bright galaxy distribution than the deeper but very incomplete catalogue published by Murphy (1984). Moreover, including the faintest objects was not necessary for the redshift survey of the Abell 85 cluster of galaxies we were planning (see Durret et al. 1997). Hence, no attempt was made to reach a fainter completeness limit. Nonetheless, in order to select galaxies, the decision curve computed in the Flux vs. Area parameter space was fitted to the data so that some objects identified by Murphy from CCD frames as faint galaxies were also classified as galaxies by us. Next, a further test based on the elongation was performed in order to reject linear plate flaws or artefacts, as well as to recover bright elongated galaxies first classified as stars due to strong saturation effects. Finally, spurious detections occurring around very bright stars (area greater than 10^3 pixels), due to a wrong estimate of the local background, were tentatively removed by checking their location with respect to these bright objects. In this way, a list of more than 25,000 galaxy candidates over the 25 square degrees of our SRC-J 681 blue field was obtained. The distribution of these galaxies is displayed in Fig. 1 for objects brighter than B J = 19.75. The Abell 85 cluster is clearly visible, as well as several other density enhancements which are mostly located along the direction defined by the cluster ellipticity. Fig. 1. Spatial distribution of the 11,862 galaxies brighter than BJ = 19.75 in the SRC-J 681 field. The large overdensities are indicated by superimposed isopleths from a density map computed by the method introduced by Dressler (1980) with N = 50; eleven isopleths are drawn from 850 to 2,850 galaxies/square degree. Completeness and accuracy of the classification The differential luminosity distribution of the galaxy candidates indicates that the sample appears quite complete down to the B J = 19.75 magnitude (see Fig. 2). To go further, we first tested the completeness of this overall list by cross-identifying it with three catalogues from the literature (Murphy 1984, Beers et al. 1991, Malumuth et al. 1992) with the help of images obtained from the mapping mode of the MAMA machine. It appeared that: i) all but one galaxy of the Malumuth et al. (1992) catalogue of 165 objects are actually classified as galaxies, with a mean offset between individual positions equal to 1.10 ± 0.06 arcsecond; ii) 94% of the 35 galaxies listed by Beers et al. (1991) inside the area are included in our catalogue, only 2 bright objects which suffer from severe saturation being misclassified. Note that such an effect also caused 5 of the 83 galaxies chosen as photometric standards to be misclassified, which gives the same percentage as for the sample by Beers et al.
The comparison with the faint CCD catalogue built by Murphy (1984) in the so-called r F band (quite similar to that obtained using a photographic IIIaF emulsion with a R filter) was performed only for objects which were visible on the photographic plate with secure identification (only uncertain X and Y coordinates are provided in the paper) and classified without any doubt as galaxies from our visual examination. There remained 107 objects out of 170, among which 88 are brighter than B J ∼ 19.75 (r F ∼ 18.5). Down to this flux limit, 82 objects (∼ 93%) are in agreement, thereby validating the choice of our decision curve in the Flux vs. Area parameter space. These cross-identifications therefore indicate that the completeness limit of our catalogue is about 95% for such objects, as expected from similar studies at high galactic latitude. In order to confirm this statement and to study the homogeneity of our galaxy catalogue, we then decided to verify carefully its reliability inside the region of the Abell 85 cluster of galaxies itself. The centre of ABCG 85 was assumed to be located at the equatorial coordinates given in the literature, α = 0 h 41 mn 49.8 s and δ = −9 • 17'33.", and a square region of ±1 • around this position was defined; such an angular distance corresponds to ∼ 2.7 Mpc h −1 100 at the redshift of the cluster (z = 0.0555). However, let us remark that the position of the central D galaxy is slightly different, α = 0 h 41 mn 50.5 s and δ = −9 • 18'11.", and so is the centre we found from our X-ray analysis of the diffuse component of this cluster, i.e.: α = 0 h 41 mn 51.9 s and δ = −9 • 18'17." . For all our future studies, we then chose to define the cluster centre as that of this X-ray component. The distribution of the ∼ 4,100 candidates within the area has been first of all visually inspected to remove remaining conspicuous false detections around some stars as well as some defects mainly due to a satellite track crossing the field. This cleaned catalogue contains a little more than 4,000 galaxy-like objects, half of which brighter than B J = 19.75. The intrinsic quality of this list has then been checked against a visual classification of all the recorded objects within a ± 11'25" area covering the region already observed by Murphy (1984) around the location α = 0 h 41 mn 57.0 s and δ = −9 • 23'05". The inspection of the corresponding MAMA frame of 2048 × 2048 pixels enabled us to give a morphological code to each object, as well as to flag superimposed objects and to deblend manually 10 galaxies (new positions and flux estimates for each galaxy member). Of course, the discrimination power of this visual examination decreases for star-like objects fainter than B J = 18.5 (r F ∼ 17.3) due to the sampling involved (pixel size of 0.67"), and an exact classification of such objects appeared to be hopeless above the a priori completeness limit of our automated galaxy list guessed to be B J = 19.75. Down to this limit, our results can be summarized as follows : i) ∼94% of the selected galaxies are true galaxies (including 7 multiple galaxies and 2 mergers with stars), while 4% may be galaxies ; ii) 7 genuine galaxies are missed (4%). Since these contamination and incompleteness levels of 5-6% were satisfactory, we decided to set the completeness limit for our automated galaxy catalogue at this magnitude B J = 19.75. 
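Cross-identifications of the kind used above amount to a nearest-neighbour match on angular separation with a tolerance of a couple of arcseconds. The helper below is a generic sketch of such a positional match, not the authors' code.

# Generic positional cross-match: accept the nearest counterpart within tol_arcsec.
import numpy as np

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation; inputs in degrees, output in arcseconds."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0))) * 3600.0

def cross_match(ra_a, dec_a, ra_b, dec_b, tol_arcsec=2.0):
    matches = []
    for i, (ra, dec) in enumerate(zip(ra_a, dec_a)):
        sep = angular_sep_arcsec(ra, dec, np.asarray(ra_b), np.asarray(dec_b))
        j = int(np.argmin(sep))
        if sep[j] <= tol_arcsec:
            matches.append((i, j, float(sep[j])))
    return matches

print(cross_match([10.4575], [-9.2926], [10.4576, 10.50], [-9.2925, -9.30]))  # one ~0.5 arcsec match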
The photographic plate catalogue For objects fainter than our completeness limit, the visual check of the inner (± 11'25") part of our object list has enabled us to confirm the galaxy identification of 135 galaxy candidates as well as to select 214 misclassified faint galaxies. The total number of galaxies included in the visual sample down to the detection limit is 541, whereas the initial list only contains 338 candidates within the same area. Keeping in mind that both catalogues are almost identical for objects brighter than B J = 19.75, we decided to replace the automated list by the visual one inside this ± 11'25" central area. Note that about 150 objects remained unclassified, including 26 galaxies from the CCD list by Murphy. We added these 26 galaxies to the final catalogue whose galaxies are plotted in Fig. 3. Table 1 lists the merged catalogue of 4,232 galaxies obtained from the SRC-J 681 plate in the ±1 • field of ABCG 85, with V and R magnitudes computed using the transformation laws obtained from our CCD data (see §3.3). This Table includes the following information : running number ; equatorial coordinates (equinox 2000.0) ; ellipticity ; position angle of the major axis ; B J , V, and R magnitudes ; X and Y positions in arcsecond relative to the centre defined as that of the diffuse X-ray emission of the cluster (see above) ; cross-identifications with the lists by Malumuth et al. (1992), Beers et al. (1991) and Murphy (1984). Description of the observations The observations were performed with the Danish 1.5m telescope at ESO La Silla during 2 nights on November 2 and 3, 1994 (the third night was cloudy, and this accounts for the missing fields in Fig. 4). A sketch of the observed fields is displayed in Fig. 4. Field 1 was centered on the coordinates : 00 h 41 mn 46.00 s , −9 • 20'10.0" (2000.0). There was almost no overlap between the various fields (only a few arcseconds). The Johnson V and R filters were used. Exposure times were 10 mn for all fields; 1 mn exposures were also taken for a number of fields with bright objects in order to avoid saturation. The detector was CCD #28 with 1024 2 pixels of 24 µm, giving a sampling on the sky of 0.377"/pixel, and a size of 6.4×6.4 arcmin 2 for each field. The seeing was poor the first night : 1.5-2" for fields 1 and 2, 2-3" for field 3 (in which consequently the number of galaxies detected is much smaller), and good the second night : 0.75-1.1". On the other hand, the photometric quality of the first night was better than that of the second one. However, the observation of many standard stars per night made a correct photometric calibration possible even for the second night as indicated by a comparison with an external magnitude list : the photometric catalogues from the six fields have the same behaviour for both nights (see e.g. Fig. 8). Data reduction Corrections for bias and flat-field were performed in the usual way with the IRAF software. Only flat fields obtained on the sky at twilight and dawn were used; dome flat fields were discarded because they showed too much structure. Each field was reduced separately. The photometric calibration took into account the exposure time, the time at which the exposure had been made, the color index (V-R), the airmass, and a second order term including both the color index and airmass. The photometric characteristics of both nights were estimated separately. Objects were automatically detected using the task DAOPHOT/DAOFIND. 
This task performs a convolution with a gaussian having previously chosen characteristics, taking into account the seeing in each frame (FWHM of the star-like profiles in the image) as well as the CCD readout noise and gain. Objects are identified as the peaks of the convolved image which are higher than a given threshold above the local sky background (chosen as approximately equal to 4 σ of the image mean sky level). A list of detected objects is thus produced and interactively corrected on the displayed image so as to discard spurious objects, add undetected ones (usually close to the CCD edges) and dispose of false detections caused by the events flagged in the previous section. Since exposure times were the same in V and R, the number of objects detected in the R band is of course much larger. We used the package developed by O. Le Fèvre (Le Fèvre et al. 1986) to obtain for each field a catalogue with the (x,y) galaxy positions, isophotal radii, ellipticities, major axis, position angles, and V and R magnitudes within the 26.5 isophote. Star-galaxy separation was performed based on a compactness parameter q determined by Le Fèvre et al. (1986, see also Slezak et al. 1988, as described in detail e.g. by Lobo et al. (1997). We chose q=1.45 as the best separation limit between galaxies and stars; very bright stars were classified as galaxies with this criterion and had to be eliminated manually. After eliminating repeated detections of a few objects, we obtained a total number of 805 galaxies detected in R, among which 381 are detected in V. The errors on these CCD magnitudes are in all cases smaller than 0.2 magnitude, and their rms accuracy is about 0.1 magnitude; these rather large values are due to the bad seeing during the first night and to pretty poor photometric conditions during the second night. Positions of the galaxies detected in the R band relative to the centre defined above are displayed in Fig. 5. Notice the smaller number of galaxies detected in field 3 due to a sudden worsening of the seeing during the exposure on this field. The astrometry of this CCD catalogue is accurate to 1.5-2.0 arcseconds as verified from the average mutual angular distance between CCD and MAMA equatorial coordinates for 174 galaxies included in both catalogues. The histogram of the R magnitudes in the CCD catalogue is displayed in Fig. 6. It will be discussed in detail in Paper III (Durret et al. in preparation). The turnover value of this histogram is located between R=22 and R=23, suggesting that our catalogue is roughly complete up to R=22. The (V-R) colours are plotted as a function of R for the 381 galaxies detected in the V band in our CCD catalogue (Fig. 7). Unfortunately, since the observed CCD field is small, there are only 50 of these galaxies with measured redshifts, and therefore it is not possible to derive a colourmagnitude relation from which to establish a membership criterion for the cluster. Transformation laws between the photometric systems 576 stars were also measured on the CCD images and used to calculate calibration relations between our photographic plate B J magnitudes and our V and R CCD magnitudes. The observed R band CCD magnitude R CCD as a function of the R magnitude calculated from the photographic B J magnitude is plotted in Fig. 8 for galaxies, showing the quality of the correlation for the six different CCD fields, especially for objects brighter than R=19. All the CCD fields appear to behave identically. 
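The CCD calibration outlined in the data-reduction steps above (zero-point, airmass, colour term and a colour × airmass cross term) corresponds to a transformation of the general form sketched below. The functional form follows the description in the text, but every coefficient value is a placeholder, since the fitted constants are not listed; treat this as an illustration only.

# Sketch of a CCD photometric calibration with colour and airmass terms.
# All coefficient values are placeholders, not the constants fitted from the standard stars.
import numpy as np

def calibrate_R(counts, exptime_s, airmass, v_minus_r,
                zp=24.0, k=0.10, c_col=0.03, c_cross=0.01):
    m_inst = -2.5 * np.log10(counts / exptime_s)     # instrumental magnitude
    return (m_inst + zp
            - k * airmass                            # first-order extinction
            + c_col * v_minus_r                      # colour term
            + c_cross * v_minus_r * airmass)         # second-order colour x airmass term

print(calibrate_R(counts=5.0e4, exptime_s=600.0, airmass=1.3, v_minus_r=0.6))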
The CCD catalogue The CCD photometric data for the galaxies in the field of ABCG 85 are given in Table 2. This Table includes for each object the following information: running number; equatorial coordinates (equinox 2000.0); isophotal radius; ellipticity; position angle of the major axis; V and R magnitudes; X and Y positions in arcseconds relative to the centre assumed to have coordinates α = 0h 41mn 51.90s and δ = −9° 18'17.0" (equinox 2000.0) (this centre was chosen to coincide with that of the diffuse X-ray gas component as defined by Pislar et al. (1997)). Conclusions Our redshift catalogue is submitted jointly in a companion paper (Durret et al. 1997a). Together with the catalogues presented here, it is used to give an interpretation of the optical properties of ABCG 85 (Durret et al. in preparation, Paper III), in relation with the X-ray properties of this cluster (Pislar et al. 1997, Lima-Neto et al. 1997, Papers I and II). Fig. 7. (V-R) colour as a function of R for the 381 galaxies detected in the V band in our CCD catalogue. The 50 galaxies indicated with a square are those with redshifts in the interval 13,350-20,000 km s^-1 assumed to belong to ABCG 85. Fig. 8. Observed R band CCD magnitude R_CCD as a function of the R magnitude calculated from the photographic BJ magnitude. The six different symbols correspond to the six CCD fields described above.
2017-09-16T00:38:27.362Z
1997-10-10T00:00:00.000
{ "year": 1997, "sha1": "041ed91c33c242a765a0a0befc34813d141e8ca1", "oa_license": null, "oa_url": "https://aas.aanda.org/articles/aas/pdf/1998/04/ds1413.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "677b6dbb6753ea23c72b707799d6fcf7032aef08", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
261372563
pes2o/s2orc
v3-fos-license
Phytochemical and antibacterial studies of methanolic extract and fractions of Guiera senegalensis leaves The aim of the research study was to carry out phytochemical and antibacterial studies of methanolic leaves extract and its fractions of G uiera senegalensis . The plant material was collected, identified, shed dried and pulverized to fine powder using pestle and mortar. The powdered plant material was subjected to maceration using methanol to obtained crude methanol leaf extract which was then partitioned using n -hexane, chloroform ethyl acetate and n-butanol. The extract and the fractions obtained were subjected to phytochemical screening using standard procedure, to detect the presence of secondary metabolites. The antibacterial assay of the extract and its fractions against S. aureus , B. subtilus , P.mirabilis and S. typhyrium were investigated using agar well diffusion method at different concentration (100 – 12.5 mg/mL). The phytochemical screening revealed the presence of various secondary metabolites which varies in the extract and fractions. The extract and its fractions showed significant ( p< 0.05 ) antibacterial activities against all the test isolates with the methanol extract having the highest mean zone of inhibition ranging from 2.00±1.00d-24.00±1.00* mm then followed by ethyl acetate fraction with the mean zone of inhibitions from 4.00±1.00d-22.00±1.00* mm the n-butanol fraction and the chloroform fraction had mean zone from 4.00±1.00d-18.50±1.00* mm and 4.00±1.00d-17.60±1.00d mm respectively while n-hexane fraction recorded lower mean zone of inhibition from 4.00±1.00a-15.00±1.00* The standard drug Ciprofloxacin had mean zone of inhibition from 16.20±0.00-39.0±0.00 mm. The most sensitive organism was S. aureus , while the least sensitive organism was S. typhyrium . The study has validated the ethnomedicinal claim for the use of this plant in treatment of antibacterial infections. Introduction The prevalence and clinical pattern of skin disorders are known to vary with climatic factors and cultural habits. There are reports highlighting the prevalence and pattern of skin disorders in various geographical locations (Patil et al., 2012). Although hospital based figures may not give a true representation of the prevalence, they may suggest the burden of the illness necessitating measures to combat them in the community. (Okoro and Emeka, 2013). Skin diseases are common and cause considerable morbidity worldwide. Lack of awareness of symptoms among the majority of lay people and lack of knowledge about skin diseases among first-and second-line health care providers have contributed to underestimations of prevalence. Household surveys (including people not seeking treatment) before the year2000 report point prevalence rates of 27%-53%, while it was 62%-87% after 2000. (Taal et al.,2015). In Nigeria, most studies on the pattern of skin diseases are hospital based and there is paucity of data from rural communities. The few studies carried out in rural communities in Nigeria were carried out mainly among school children. Reports from studies in rural communities in Cameroun and India have shown that infections are among the major skin diseases documented (Akinkugbe et al., 2016). Medicinal plants are of great importance to the health of individuals and communities. It is estimated that there are about 700,000 species of tropical flowering plants that have medicinal properties. 
Their actions include: antibacterial, antifungal, antiviral, antihelminthic and anticarcinogenic among others. These medicinal values lie in some chemical substances they contain (Bako et al., 2014). Several traditional medicinal plants, including Guiera senegalensis (plate 1), a shrub that grows well in sub-Saharan Africa have been candidates for research because of their perceived medicinal properties. Guiera senegalensis has been used in Northern part of Nigeria and elsewhere in traditional medicine as a cure for infections and wounds (Mohammed et al., 2016). The importance of Guiera senegalensis in traditional medicine became more apparent with the recent increase in fungal infections in Africa, and elsewhere. Extracts of Leaves, shoots and galls of Guiera senegalensis were found to be useful against bacteria and fungi infections (Al Shafei et al., 2016). Guiera senegalensis belongs to combretaceae family which consists of trees or shrubs, sometime climbing plants, comprising about 20 genera and 500 species (Siddig Hamad et al., 2017). Guiera senegalensis has numerous traditional medicinal applications, for instance, its leaves are employed for various internal diseases, prevention of leprosy, dermatoses, as tonic, infusions as diuretic, for stomach ache, cough and so on (Siddig Hamad et al., 2017). Guiera senegalensis leaves are widely administered for pulmonary and respiratory complaints, for coughs, as a febrifuge, colic and diarrhea, syphilis, beriberi, leprosy, impotence, rheumatism, diuresis and expurgation. In Northern Nigeria powdered leaves are mixed with food as a general tonic and blood restorative and also to women as a galactagogue. In Ghana and other West African Countries, leaves are used to treat dysentery and fever due to malaria¬ (Jigam et al., 2011). Phytochemical screening for Guiera senegalensis showed significant number of secondary metabolites namely anthraquinones, terpenoids, saponins, alkaloids, coumarins, mucilages, flavonoids, tannins and cardiotonic. Its cyanogenic heterosides were assayed in different organs of the plant, such as leaves, fruits, roots, and stem bark (Siddig Hamad et al., 2017). This research was aimed at investigating the phytochemical and antibacterial efficacy of methanolic leaves extract of Guiera senegalensis, Plate 1: A close view of Guiera senegalensis (Siddig et al., 2017). Sample collection and Identification The leaves of Guiera senegalensis was collected from Ruggar-Lima, Kware Local Government area of Sokoto State, Nigeria. The sample was identified at the Herbarium Unit, Department of Biological Sciences, Usmanu Danfodiyo University, Sokoto and was given a specimen number (UDUH/ANS/0145), packed in a polythene bags and then transported immediately to the Biology Laboratory for further treatments. 2.2 Sample preparation and extraction Fresh leaves of Guiera senegalensis was rinsed with tap water and shade dried in an open air, and then grounded into powdered form. One thousand grams (1000 g) of the powdered using pestle and mortar sample was macerated with 8 L of methanol with occasional agitation for 72hours, the extract was filtered and the solvent evaporated with rotary evaporator at 40 o C to obtain crude methanol leaf extract of Guiera senegalensis. (250 g) of the extract was suspended in 800 mL of distilled water which was then filtered and partitioned with solvent of increasing polarity to obtain n-hexane (HF), chloroform (CF), ethylacetate (EF) and n-butanol (BF) fractions. 
2.3 Phytochemical screening Various chemical tests was conducted on the methanol extract and it's fractions to identify the presence of secondary metabolite such as alkaloids, flavonoids, tannins, saponins, terpenoids, Cardiac glycoside phenols and steroid according to the method described by Evans (2002). Preparation of nutrient agar plates The nutrient agar plates were prepared by suspending 28 g of nutrient agar powder in 1000 mL of distilled water. The mixture was then heated while stirring to fully dissolve all components. The dissolved mixture was autoclaved at 121 for 15 minutes and allowed to cool. The nutrient agars poured into each plate and leave the plates on the sterile surface until the agar becomes solidified. The lid of each Petri dish was replaced and the plates were stored in a refrigerator (Proom et al., 1950). Test strains Authentic pure cultures of pathogenic bacteria of Gram-positive (Staphylococcus aureus and Bacillus subtilus) and Gram-negative (Proteus mirabilis and Salmonella typhyrium) bacterial strains were used in the study. The organisms was sub-cultured on Mueller Hinton Agar medium, incubated at 37°C for 24 h and stored at 4°C in the refrigeration to maintain stock culture. The Gram-positive and Gram-negative bacteria were pre-cultured in nutrient broth overnight in a rotary shaker at 37°C and centrifuged at 10,000 rpm for 5 mins, pellet was suspended in double distilled water and the cell density were standardized by UV spectrophotometer (Soniya, 2009). Preparation of control solution Stock solutions of Ciprofloxacin (5 mg/ mL) was prepared by dissolving 50 mg of the powder in 10 mL of distilled water from which 0.05 mg/mL (50 μg/ mL) working solution was prepared. Preparation of crude extract and fractions of G. senegalensis Stock concentrations of 100 mg/mL was prepared with 10 % dimethyl sulfoxide (DMSO) by dissolving 0.5 g (500 mg) each of the crude extract and fractions (n-Hexane, chloroform ethylacetate, and n-butanol) in 5 mL of 10 % DMSO two-fold serial dilution was carried out to obtain three solutions of concentrations of 50, 25 and 12.5 mg/ mL. Antibacterial Assay Antibacterial activity of the extracts and its fractions were determined by agar diffusion method as adapted by (Tari et al., 2015). The standardized organisms were uniformly streak unto freshly prepared Mueller Hinton Agar with the aid of a sterile swab stick (cotton swabs). Four wells were punched on the inoculated agar plates using a cork borer. The wells were properly labeled according to the different concentrations of the extract and the fractions prepared. The punched wells were then filled with the extract. The plates were allowed to stay on the bench for 1hour for the extract to diffuse into the agar after which they were incubated at 37 0 C for 18-24hours. At the end of the incubation period, the plates were observed for any evidence of inhibition, which appeared as clear zones that were completely devoid of growth around the wells. The diameter of the clear zones was measured with a transparent ruler calibrated in millimeter (mm). Statistical Analysis The results obtained was subjected to the analysis of variance (ANOVA) using SPSS software followed by post hoc test, values were considered significant at p<0.05 and the data expressed as mean ± standard deviation Extraction and Fractionation The extraction of 1000 g of Guiera senegalensis afforded a yield of 250 g of the crude extract and the percent yields from the partitioned fractions are presented in Table 1. 
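The stock and working concentrations used above follow directly from the dissolved mass and the two-fold dilution scheme; the short sketch below (our illustration) reproduces that arithmetic.

# 500 mg of extract in 5 mL of 10% DMSO gives a 100 mg/mL stock; successive
# two-fold dilutions give the working concentrations used in the assay.
def two_fold_series(stock_mg_per_ml, n_dilutions):
    series = [stock_mg_per_ml]
    for _ in range(n_dilutions):
        series.append(series[-1] / 2.0)
    return series

stock = 500.0 / 5.0                        # mg dissolved per mL of 10% DMSO
print(two_fold_series(stock, 3))           # [100.0, 50.0, 25.0, 12.5] mg/mL
print(50.0 / 10.0)                         # ciprofloxacin stock: 50 mg in 10 mL = 5 mg/mL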
Antibacterial assay The results of the antibacterial assay of the methanol extract and its n-hexane, chloroform, ethyl acetate, and n-butanol fractions against the selected bacterial isolates are presented in Tables 3, 4, 5, 6 and 7, respectively. Percentage yield The extraction of 1000 g of the powdered sample of G. senegalensis using methanol as the extracting solvent yielded 250 g of methanol extract. The fractionation of the water-soluble portion of the methanol leaf extract showed that the n-butanol fraction had the highest percentage yield, followed by ethyl acetate, n-hexane, and finally chloroform (Table 1). This result suggests that n-butanol recovered the largest share of extractable constituents, while chloroform recovered the least. Phytochemical screening of methanol leaf extract and its fractions Preliminary phytochemical screening of the methanol leaf extract and its fractions (n-hexane, chloroform, ethyl acetate, and n-butanol) revealed the presence of saponins, tannins, alkaloids, cardiac glycosides, steroids, triterpenoids, phenols and flavonoids; the ethyl acetate and n-butanol fractions indicated the presence of similar constituents, including flavonoids, alkaloids, tannins, saponins, phenols, steroids and cardiac glycosides, while n-hexane contained only steroids and triterpenoids, as presented in Table 2. The presence of these phytochemical constituents in other plants has been reported (Mukhtar et al., 2019). These secondary metabolites are thought to be responsible for the pharmacological activities of the plant (Emaikwu et al., 2019). Abubakar et al. (2020) reported the presence of saponins, tannins, alkaloids, cardiac glycosides, steroids, triterpenoids, phenols and flavonoids in the ethyl acetate and n-butanol fractions of another plant. Antibacterial studies The methanol extract and its fractions (n-hexane, chloroform, ethyl acetate and n-butanol) exhibited varying antibacterial activity against the test organisms, and the activity was concentration dependent. The methanol leaf extract and its fractions exhibited significant (p < 0.05) antibacterial activity at the graded concentrations (100-12.5 mg/mL), with mean zones of inhibition ranging from 4.00±1.00d to 24.00±1.00* mm against the test organisms (S. aureus, B. subtilus, P. mirabilis and S. typhyrium). The methanol leaf extract showed the highest mean zone of inhibition against S. aureus, while the n-hexane fraction exhibited the smallest mean zone against B. subtilus. The standard drug ciprofloxacin showed mean zones of inhibition in the range 16.20±0.00-39.0±0.00 mm against all the test organisms; the drug showed the highest mean zone of inhibition against P. mirabilis and the lowest activity against S. typhyrium, as presented in Table 3. The methanol leaf extract showed significant (p < 0.05) antibacterial activity against S. aureus at 100 mg/mL, which was higher than that of ciprofloxacin at 0.05 mg/mL (Table 3). The n-hexane fraction indicated a higher antibacterial activity against S. aureus and S. typhyrium at 100 mg/mL (Table 4); similarly, the standard drug ciprofloxacin exhibited a higher effect against P. mirabilis when compared with the n-butanol fraction at 100 mg/mL, and the effect was statistically significant (Table 7). The chloroform fraction exhibited significant antibacterial activity against S. aureus, while the ethyl acetate fraction exhibited the highest antibacterial activity against P. mirabilis (Table 5).
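Zone-of-inhibition comparisons of the kind reported in Tables 3-7 are typically analysed with a one-way ANOVA followed by a post hoc test, as stated in the statistical-analysis section. The snippet below is a generic illustration with invented replicate values, not the study's measurements (the study used SPSS).

# Generic one-way ANOVA on zone-of-inhibition data (invented replicate values).
from scipy import stats

zones_mm = {
    "methanol 100 mg/mL":      [23.0, 24.0, 25.0],
    "ethyl acetate 100 mg/mL": [21.0, 22.0, 23.0],
    "n-hexane 100 mg/mL":      [14.0, 15.0, 16.0],
}
f_stat, p_value = stats.f_oneway(*zones_mm.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> at least one treatment mean differs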
The highest activity, observed with the methanol leaf extract, might be due to the concentration of moderately polar compounds such as flavonoids and their derivatives, which have been reported to possess antibacterial activity (Alemu et al., 2017). Methanol leaf extracts have also been reported as good antifungal agents against Stemphylium solani, Aspergillus flavus, Trichoderma viride, Penicillium spp., Fusarium verticillatum, Cladosporium cladosporioides and Fusarium solani (Patil et al., 2012). The n-butanol fraction showed activity against P. mirabilis when compared to ciprofloxacin; against S. typhimurium it recorded a mean zone of 12.00 ± 1.00 mm, which was lower than that of ciprofloxacin (24.30 ± 0.06 mm), but the difference was not statistically significant (Table 7). Of all the bacterial isolates used, S. aureus was the most sensitive to the methanol leaf extract, and it is the most dangerous of the many common bacterial isolates.

Conclusion
Preliminary phytochemical screening of the methanol leaf extract and its fractions (n-hexane, chloroform, ethyl acetate and n-butanol) revealed the presence of saponins, tannins, alkaloids, cardiac glycosides, steroids, triterpenoids, phenols and flavonoids. G. senegalensis demonstrated significant antibacterial activity, supporting the ethnomedicinal claim for the use of the plant in the treatment of bacterial infections.

Conflict of Interest
The author declares that there is no conflict of interest.
Expression of heat shock protein 70 (HSP70) in the liver and kidney of silver rasbora (Rasbora argyrotaenia) exposed to sublethal organophosphate pesticides

Heat Shock Protein 70 (HSP70) is a stress protein that can appear in all cell types. The HSP70 assay is useful as a biomarker (a marker of biological response) that can be used as a stress indicator in fish. Liver and kidney tissue are among the organs that play an important role in metabolic processes in fish, making it possible for these tissues to respond with HSP70. This research aims to determine the effect of sublethal exposure to organophosphate pesticides on the expression of HSP70 in the liver and kidney of silver rasbora, and to compare HSP70 expression between the liver and kidney of silver rasbora exposed to sublethal organophosphate pesticides. The research was conducted from November 2020 to February 2021 using a completely randomized design (CRD) consisting of 5 treatments and 4 replications. The treatments consisted of concentrations of 0.001 ppm, 0.005 ppm, 0.01 ppm and 0.05 ppm, plus a control of 0 ppm. Silver rasbora were acclimatized and maintained for 7 days and subjected to pesticide exposure for the last 96 hours. The main parameters in this research were the expression of HSP70 in the liver and kidney of silver rasbora; the supporting parameters observed were survival rate, temperature, pH, ammonia and dissolved oxygen (DO). The main parameters were analyzed using a one-factor ANOVA test followed by Duncan's post hoc test, while the difference between liver and kidney means was assessed with an independent-samples t-test. Supporting parameters were analyzed descriptively using tables and graphs. The results showed that exposure to sublethal organophosphate pesticides affected the expression of HSP70 in the liver and kidney of silver rasbora. The HSP70 value in both organs increased with increasing pesticide concentration, and the liver had a higher average HSP70 value than the kidney.

Corresponding Author*: lailatullutfiyah@fpk.unair.ac.id
Introduction
Only about 90% of the pesticide used in agriculture performs its intended function; the rest pollutes the surrounding environment, particularly affecting fish [1]. Organophosphates are active neurotoxins; they do not even require metabolic conversion to inhibit the enzyme acetylcholinesterase [2]. Organophosphates inhibit the action of acetylcholinesterase (AChE), which disrupts the role of acetylcholine in carrying impulses from the pre-synapse to the post-synapse (as a neurotransmitter), so that muscle function becomes impaired. This uncoordinated muscle activity produces symptoms of poisoning that affect the entire body [3]. HSP70 is a group of HSPs found in abundance in organisms from bacteria to mammals, and its expression is markedly induced in response to environmental stressors such as heat shock, UV and other radiation, and chemical exposure [4]. The liver is the centre of metabolism [5], while the kidney acts as the main hematopoietic organ and also functions in osmoregulation [5]. Based on the above, it is necessary to study sublethal exposure to organophosphate pesticides in silver rasbora, so that the expression of HSP70 in the liver and kidney exposed to sublethal organophosphate pesticides can be used as a reference for detecting stress levels in fish.

Research methods and Design
The research method used was experimental. The study used a completely randomized design (CRD) with 5 treatments and 4 replications. All treatments were exposed to organophosphate pesticides except the control.

Preliminary Test
A preliminary test was carried out to determine the doses for the main test. The test container was a 5-litre jar filled with 1.5 litres of water and equipped with aeration. The doses used in the preliminary test were 0, 0.0001, 0.001, 0.01, 0.1, 0.25, 0.50, 0.75 and 1 mg/L (Kienle et al., 2009). Larval deaths were recorded every 24 hours until the yolk of the larvae was exhausted.

Sublethal Test
This test was carried out using doses of 0 ppm, 0.5 ppm, 1 ppm, 1.5 ppm and 2 ppm with 4 replications. The research container was a 5-litre jar equipped with aeration and filled with 1.5 litres of water. Each jar was dosed with organophosphate pesticide according to the prescribed concentration.

Main Parameters and Supporting Parameters
The main parameter in this study was the expression of Heat Shock Protein 70 (HSP70) in the liver and kidney of silver rasbora (Rasbora argyrotaenia) exposed to sublethal organophosphate pesticides. The supporting parameters observed were survival rate (SR) and water quality data.
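As described in the abstract, the dose effect on HSP70 was assessed with a one-factor ANOVA (followed by Duncan's post hoc test) and the liver-kidney comparison with an independent-samples t-test. The following Python fragment is a minimal sketch of that workflow using entirely invented HSP70 values for illustration; it is not the study's data or code, and Duncan's test (not available in scipy) is omitted.

```python
# Minimal sketch of the statistical workflow described above, with made-up
# HSP70 readings (ng/mL), four replicates per treatment (P0..P4).
from scipy import stats

liver = {
    "P0": [290, 300, 296, 295],
    "P1": [350, 360, 355, 348],
    "P2": [420, 430, 415, 425],
    "P3": [560, 575, 565, 570],
    "P4": [725, 735, 730, 736],
}
kidney = {
    "P0": [200, 205, 201, 202],
    "P1": [240, 245, 238, 242],
    "P2": [280, 285, 282, 279],
    "P3": [330, 340, 336, 334],
    "P4": [395, 400, 398, 401],
}

# Dose effect within each organ: one-way ANOVA across the five treatments.
f_liver, p_liver = stats.f_oneway(*liver.values())
f_kidney, p_kidney = stats.f_oneway(*kidney.values())
print(f"liver:  F={f_liver:.2f}, p={p_liver:.4f}")
print(f"kidney: F={f_kidney:.2f}, p={p_kidney:.4f}")

# Liver vs kidney comparison, pooling all treatments (independent-samples t-test).
all_liver = [x for reps in liver.values() for x in reps]
all_kidney = [x for reps in kidney.values() for x in reps]
t, p = stats.ttest_ind(all_liver, all_kidney)
print(f"liver vs kidney: t={t:.2f}, p={p:.4f}")
```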
The results showed that the sublethal concentrations of organophosphate pesticides affected the HSP70 values of the liver and kidney of silver rasbora. The highest average HSP70 value in the liver was in treatment P4, with an average of 731.59 ng/mL, and the lowest was in treatment P0, with 295.28 ng/mL. In the kidney, the highest average HSP70 value was in treatment P4, with an average of 398.35 ng/mL, and the lowest was in treatment P0, with 202.09 ng/mL.

Figure 1. Survival rate of silver rasbora fish.

The survival rate of silver rasbora observed 96 hours after exposure to sublethal doses of organophosphate pesticides was high, at 100% in all treatments. The expression of HSP70 in fish is a sensitive indicator of the cellular response to stressor exposure in the aquatic environment. HSP70 acts as a cellular stabilizer when the internal and external conditions of the fish's body are unstable or under stress from a stressor [6]. The toxic action of organophosphate pesticides is to inhibit acetylcholinesterase (AChE). Organophosphates entering the fish body penetrate the cells, where the phosphorylated group of the organophosphate binds covalently to the ester site of acetylcholinesterase, forming an organophosphate-acetylcholinesterase complex and causing acetylcholine to accumulate [1]. Accumulation of acetylcholine at the receptor can disrupt the diffusion of oxygen into the blood capillaries, resulting in nervous imbalance and disturbing all cellular activities. Cells that receive a stress signal immediately express HSP70, but if the stress signal persists, HSP70 continues to be produced, and the resulting burden can lead to cell damage such as edema, hyperplasia, fusion, cell swelling and necrosis [7].

Conclusion
The HSP70 value in both organs increased with increasing pesticide concentration. The liver had a higher average HSP70 value than the kidney.
Novel BRCA1 and BRCA2 pathogenic mutations in Slovene hereditary breast and ovarian cancer families The estimated proportion of hereditary breast and ovarian cancers among all breast and ovarian cancer cases is 5–10%. According to the literature, inherited mutations in the BRCA1 and BRCA2 tumour-suppressor genes, account for the majority of hereditary breast and ovarian cancer cases. The aim of this report is to present novel mutations that have not yet been described in the literature and pathogenic BRCA1 and BRCA2 mutations which have been detected in HBOC families for the first time in the last three years. In the period between January 2009 and December 2011, 559 individuals from 379 families affected with breast and/or ovarian cancer were screened for mutations in the BRCA1 and BRCA2 genes. Three novel mutations were detected: one in BRCA1 - c.1193C>A (p.Ser398*) and two in BRCA2 - c.5101C>T (p.Gln1701*) and c.5433_5436delGGAA (p.Glu1811Aspfs*3). These novel mutations are located in the exons 11 of BRCA1 or BRCA2 and encode truncated proteins. Two of them are nonsense while one is a frameshift mutation. Also, 11 previously known pathogenic mutations were detected for the first time in the HBOC families studied here (three in BRCA1 and eight in BRCA2). All, except one cause premature formation of stop codons leading to truncation of the respective BRCA1 or BRCA2 proteins. Introduction Most breast and ovarian cancers are sporadic and only about 5-10% of breast and 10% of ovarian cancers are thought to be hereditary, causing the hereditary breast and ovarian cancer (HBOC) syndrome (1,2). Majority of HBOC cases have underlying cause in germline mutations in the BRCA1 and BRCA2 susceptibility genes (3,4). Carriers of known deleterious mutations in the BRCA genes have a lifetime risk of approximately 60 to 80% for development of breast cancer (BC) and a 15 to 40% lifetime risk for ovarian cancer (OC) and are also at a heightened risk for some other cancer types (4)(5)(6). So far, genome-wide association studies have not identified other highly penetrant susceptibility genes linked with HBOC, as reviewed in Mavaddat et al (7). Genetic screening of BRCA1 and BRCA2 therefore remains the only verified strategy for identification of individuals at high risk for hereditary BC and/or OC. To reduce cancer risk, healthy carriers of deleterious BRCA mutations are presented with various preventive options, such as regular intensive screenings, prophylactic mastectomy with breast reconstruction and/or oophorectomy or chemoprevention in the setting of a clinical trial (8,9). Additionally, genetic counseling and BRCA screening can be offered to first degree relatives of the carrier. The present report continues the previous report of our group from 2011 where pathogenic mutations in the BRCA1 and BRCA2 genes in the Slovene population were described (10). We describe novel pathogenic mutations that have not yet been described in the literature or BRCA mutational databases, such as Breast Cancer Information Core Database (BIC), Human Gene Mutation Database (HGMD-Professional), Universal Mutation Database (UMD) and Leiden Open Variation Database (LOVD). We also report pathogenic mutations for which records already exist but were detected for the first time in the Slovene HBOC families tested between January 2009 and December 2011. The possible effects of novel and pathogenic BRCA1 and BRCA2 mutations which have been detected in Slovene HBOC families for the first time are discussed. 
Patients and methods
Tested individuals. In the period from January 2009 to December 2011, 559 new individuals from 379 Slovene HBOC families underwent mutational screening of the BRCA1 and/or BRCA2 genes at the Institute of Oncology Ljubljana, which is the only public institution performing BRCA screening in Slovenia. Probands were chosen after genetic counseling according to the ASCO guidelines for genetic and genomic testing for cancer susceptibility (11). The family history data were verified in the Slovenian state cancer registry, established in 1950. All tested individuals provided written informed consent and attended genetic counseling sessions before and after testing.

Mutation screening. In 362 probands admitted for complete screening of all BRCA1/2 exons, variant searching consisted of multiplex ligation-dependent probe amplification analysis (MLPA; MRC Holland, Amsterdam, Netherlands) for the detection of large genomic deletions and insertions, and screening for small mutations in all BRCA1 and BRCA2 exons with high-resolution melting (HRM), denaturing gradient gel electrophoresis (DGGE) and direct sequencing (10). Probands (197) from cancer-affected families with an already confirmed pathogenic BRCA mutation were tested only for the familial pathogenic mutation. The nomenclature of this study follows the Nomenclature for the Description of Genetic Variations approved by the Human Genome Variation Society (HGVS).

Results
Since screening for BRCA mutations began in Slovenia in 1999, altogether 45 distinct pathogenic BRCA mutations have been detected in the tested Slovene families: 22 in BRCA1 and 23 in BRCA2 (Table I). The overall mutation detection rates for the periods January 1999 to December 2008 and January 2009 to December 2011 were 29.8 and 21.2%, respectively (Table II). The majority of the detected pathogenic mutations were nonsense mutations creating premature stop codons, or missense mutations and small deletions and/or insertions that cause frameshifts and also lead to premature termination of translation. Of all detected BRCA1 mutations, four were large deletions, all spanning more than one exon. No large deletions or insertions have been detected in the BRCA2 gene so far. In the last three years (January 2009 to December 2011), 559 probands were tested either for the known familial mutation or by complete screening of all BRCA exons (Table II). Of the tested probands, 115 were positive for a BRCA1 pathogenic mutation and 41 for a BRCA2 pathogenic mutation. In the stated period, three novel mutations were found which have not yet been described, one in the BRCA1 and two in the BRCA2 gene (Table III). The novel BRCA1 pathogenic mutation was detected in a healthy female from an HBOC family (Table III, Fig. 1). All novel BRCA2 mutations were detected in female BC patients (Table III, Fig. 1). Besides the three novel mutations, eleven known pathogenic BRCA mutations were discovered for the first time in the Slovene HBOC families, three in BRCA1 and eight in BRCA2 (Tables IV and V). All of these newly detected pathogenic mutations were found in female BC and/or OC patients (Tables IV and V). All novel and newly detected pathogenic mutations in Slovenia were small mutations dictating premature stop codon formation and subsequent truncation of the BRCA1 or BRCA2 proteins.

Discussion
Several recent studies have associated specific BRCA mutations with specific cancer risks and phenotypes (12,13).
Many HBOC studies have therefore focused on predicting the effects of specific BRCA mutations and revealing possible underlying molecular mechanisms (7). In this context, we discuss here the predicted effects of the individual novel and newly detected Slovene BRCA1 and BRCA2 pathogenic mutations.

Novel mutations. All three novel mutations described here, c.1193C>A (p.Ser398*) in BRCA1 and c.5101C>T (p.Gln1701*) and c.5433_5436delGGAA (p.Glu1811Aspfs*3) in BRCA2, are located in exon 11 of BRCA1 or BRCA2, which is the largest exon in both genes and also carries the majority of the pathogenic mutations described so far. As BRCA mutations causing truncation of the BRCA proteins are regarded as pathogenic, with some exceptions for truncating mutations in the last (27th) exon of BRCA2, we predict that all three novel mutations have deleterious effects (14,15). More detailed descriptions are given below.

BRCA1. Mutation c.1193C>A (p.Ser398*) in exon 11 causes stop codon formation at codon 398. In the BIC database a similar mutation discovered in an Asian population, which also leads to formation of a stop codon at 398 (c.1193C>G), is described as a clinically significant variant, but no references are given. Codon 398 lies in one of five conserved regions located at the 5' end of exon 11 (codons 282-554), which include putative interaction sites for several proteins thought to be involved in transcription (16). Codon 398 also forms part of the interaction site (codons 341-748) for the DNA repair protein RAD50, which participates in DNA repair by homologous recombination and by non-homologous end joining (16,17). Accordingly, we predict the c.1193C>A mutation to severely impair BRCA1-mediated DNA repair.

BRCA2. Mutation c.5101C>T (p.Gln1701*) is a nonsense mutation causing formation of a stop codon at position 1701, which is located in exon 11 in the ovarian cancer cluster region (OCCR) spanning nucleotides 3035 to 6629. Several studies have shown that truncating mutations in the OCCR confer a higher ratio of ovarian cancer relative to breast cancer (18)(19)(20). A higher risk of prostate cancer was also recently detected in males with mutations in the BRCA2 OCCR (21). Consistent with these studies, one of the two Slovene BRCA2 c.5101C>T families exhibits a high incidence of OC besides BC (Table III). Frameshift mutation c.5433_5436delGGAA (p.Glu1811Aspfs*3) results in translation termination at amino acid position 1813, which lies within the BRC repeat region in exon 11, between BRC5 (amino acids 1649-1735) and BRC6 (amino acids 1822-1914). Jara et al described a similar mutation, c.5439delT (p.Leu1813fs*1), that might be disease-causing (22). That mutation dictates formation of a stop codon at amino acid position 1814 in the BRC repeat region (22). The BRC repeat region consists of eight highly conserved internal BRC repeats separated by conserved nucleotide stretches (23,24). The eight BRC repeats bind the RAD51 recombinase and control its activity in homologous DNA recombination (23,24). Truncating mutations within the BRCA2 BRC repeat domain, such as the novel c.5433_5436delGGAA, are therefore predicted to seriously impair the cell's ability to repair DNA double-strand breaks (23)(24)(25).
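As a brief aside on the HGVS notation used above, the Python fragment below sketches why a substitution at c.1193, the second base of codon 398, can convert a serine codon into a stop codon, as reported for p.Ser398*. The reference codon (TCA) is an assumption chosen purely for illustration and has not been checked against the BRCA1 reference sequence; the helper names are ours.

```python
# Illustrative sketch: a single-nucleotide change at the second position of an
# assumed serine codon (TCA) produces a premature stop codon (TAA), mirroring
# the logic behind BRCA1 c.1193C>A (p.Ser398*). Not a clinical annotation tool.

CODON_TABLE = {"TCA": "Ser", "TCG": "Ser", "TAA": "Stop", "TAG": "Stop"}

def mutate_codon(codon: str, pos_in_codon: int, new_base: str) -> str:
    """Return the codon with the base at pos_in_codon (1-based) replaced."""
    bases = list(codon)
    bases[pos_in_codon - 1] = new_base
    return "".join(bases)

cdna_position = 1193
codon_number = (cdna_position - 1) // 3 + 1                  # -> 398
position_in_codon = cdna_position - 3 * (codon_number - 1)   # -> 2 (2nd base)

reference_codon = "TCA"   # assumed serine codon, for illustration only
mutant_codon = mutate_codon(reference_codon, position_in_codon, "A")

print(codon_number, position_in_codon)                        # 398 2
print(CODON_TABLE[reference_codon], "->", CODON_TABLE[mutant_codon])  # Ser -> Stop
```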
Known BRCA pathogenic mutations that have been detected for the first time in Slovene HBOC families

BRCA1. From January 2009 to December 2011, three known pathogenic mutations in the BRCA1 gene and eight in the BRCA2 gene were detected for the first time in the Slovene HBOC families. Except for one, all cause premature formation of stop codons leading to truncation of the respective BRCA1 or BRCA2 proteins. The mutation c.66_67delAG (p.Glu23Valfs*17) is the most common BRCA1 mutation worldwide and occurs at a frequency of 1.1% in Ashkenazi Jews (26). Despite being so widespread, this is the first record of c.66_67delAG in Slovenia, which, of note, has only a very small Jewish population (estimated at 500-1,000 people). The c.66_67delAG mutation dictates the formation of a stop codon in BRCA1 exon 2, thus producing a truncated BRCA1 protein, BRAt, which lacks all known BRCA1 functional domains (26). Studies have shown that, besides being non-functional, the truncated BRCA proteins can also impair the function of wild-type BRCA proteins (26,27). It was further suggested that the BRAt mutant protein increases transcription of the protein maspin (mammary serine protease inhibitor), which has been implicated in the inhibition of growth, invasion and metastatic potential of cancer cells (26,28). Jiang et al also demonstrated that maspin sensitizes BRCA-deficient breast carcinoma cells to staurosporine-induced apoptosis, thus leading to increased chemosensitivity (29). The other two newly detected BRCA1 mutations are located in exon 11. Mutation c.3718C>T (p.Gln1240*) is reported a few times in the BRCA mutational databases but has been published only once, by Kwong et al, who detected it in an endometrial cancer patient of European origin (30). We detected c.3718C>T in two Slovene families which are, interestingly, both affected by various cancer types (Table IV). As this mutation was first detected in endometrial cancer, this could imply that c.3718C>T predisposes to other cancer types besides BC/OC. Further studies are needed to corroborate this observation and uncover possible underlying molecular mechanisms. Mutation c.3436_3439delTGTT (p.Cys1146Leufs*8) in the 11th exon of BRCA1 had previously been found only once, in Slovenia's neighbouring country Austria (31). It is predicted to cause termination of protein translation at codon 1153. It can be compared to a similar mutation, c.3481_3491del11 (p.Glu1161Phefs*3), which creates a stop codon at 1163 (32). The c.3481_3491del11 is a widespread French founder mutation that is frequently detected in hereditary OC (33,34). Comparably, the Slovene c.3436_3439delTGTT family is characterized by a higher incidence of OC relative to BC. Future studies are needed to determine whether an increased incidence of OC is associated with specific exon 11 truncating BRCA1 mutations.

BRCA2. The eight newly detected BRCA2 mutations are all rather rare, with only a few existing records or publications. Mutation c.262_263delCT (p.Leu88Alafs*12) is located in exon 3. It was first described in one Polish HBOC family (described as 488_489delCT) and was recently detected in a Spanish BC patient (35,36). Salgado et al suggested that abrogation of the amino-terminal exon 3 transcription activation domain in the BRCA2 protein affects BRCA2's role in transcriptional regulation and in DNA repair processes through replication protein A (RPA) (36). They further suggested that abrogation of most (3320 amino acids) of the 3418 BRCA2 amino acids has more severe biological consequences beyond disrupted transcriptional regulation (36). Mutation c.658_659delGT (p.Val220Ilefs*4) is located in exon 8 and is predicted to truncate the protein before the eight BRC repeats (37).
Interestingly, the c.658_659delGT is one of a few BRCA2 mutations found in BRCA2 biallelic cases. These biallelic BRCA2 mutations are known to cause the D1 subgroup of Fanconi anemia (FA-D1), a rare autosomal recessive disorder characterized, among other defects, by predisposition to several childhood cancers (38,39). Studies have shown that FA-D1 patients are especially at a high risk of developing brain tumors, in particular medulloblastomas, compared to other subgroups which are caused by mutations in other DNA-repair genes (40). To note, BC and OC risk in biallelic BRCA2 patients is difficult to determine as FA patients usually die at a young age, before BC or OC would generally develop. Nevertheless, it could be useful to follow whether carriers of monoallelic c.658_659delGT are also burdened by an increased risk for medulloblastomas or other brain tumors. The Slovene family which has monoallelic BRCA2 c.658_659delGT does not, however, exhibit any brain tumors and is affected mostly by quite late onset of OC. No biallelic BRCA2 mutations were detected in Slovenia so far. Mutation c.1773_1776delTTAT (p.Ile591Metfs * 22) causes formation of a stop codon 612 in the exon 10 of BRCA2. It has been described for Western European and Chinese population (41)(42)(43). A similar truncating deletion c.1787_1799del13 forming stop codon near at 609 was recently discovered in a prostate cancer patient with family history of stomach cancer but no BC or OC (44). No functional characterizations have yet been published for c.1773_1776delTTAT, however, the Slovene family having c.1773_1776delTTAT is to date affected only by BC (Table V). Three BRCA2 mutations were detected in the exon 11, c.5213_5216delCTTA, c.6641insC and c.6814delA. Mutation c.5213_5216delCTTA (p.Thr1738Ilefs * 2) in exon 11 has been already found in several HBOC families, mainly in the USA, the Netherlands and in Belgium (45)(46)(47)(48)(49). It causes formation of termination signal at codon 1739 located between BRC5 and BRC6 in the BRC repeat region, similarly to the novel mutation c.5433_5436delGGAA discussed above. According to the literature no other cancers besides BC and OC are associated with this mutation. This also applies to the Slovene c.5213_5216delCTTA family. Mutation c.6641insC (p.Thr2214Asnfs * 10) in exon 11 is a frameshift mutation reported only once in BIC database. Mutation is predicted to form a stop codon at position 2223 located at the 3' end of exon 11. The mutation is causing the truncated BRCA2 protein for the subsequent exons 12 to 27. Mutation c.6641insC was identified in Slovene BC patient diagnosed at age 47, with a history of two BC cases in her family, diagnosed at ages 36 and 44. Interestingly, one was male BC (Table V). Similar mutation c.6641dupC (p.Lys2215Tyrfs * 10) was detected in nearby Croatia in two unrelated families (50). Mutation c.6814delA (p.Arg2272Glufs * 8) in exon 11 was detected in Slovene BC patient diagnosed at 32 years of age, whose mother had bilateral BC. It is described only once in the UMD database, without references, and is predicted to form stop codon at position 2279 near the 3' end of exon 11, therefore abrogating exons 12 to 27. Mutation c.8175G>A (p.Trp2725 * ) was first reported just recently by Levanat et al (50). Mutation c.8175G>A was identified in two unaffected siblings (with a family history of two BC cases) from Croatia (50). 
Mutation c.8175G>A lies in the frequently mutated exon 18 of BRCA2 leading to the truncation of the BRCA2 oligonucleotide binding domain (OB1) in the DNA-binding domain (DBD) (32). The BRCA2 DBD region is needed for binding of single-stranded DNA (ssDNA) that results from DNA damage or replication errors (51). Through this binding of ssDNA the BRCA2 protein mediates delivery of RAD51 to the sites of exposed single-stranded DNA thus enabling the RAD51 to catalyze homologous pairing and DNA strand exchange (51). Through affecting this recruitment of RAD51 to the ssDNA, mutations in the BRCA2 DBD are predicted to affect the homologous recombination needed for maintaining the integrity of the genome. Besides binding ssDNA, OB1 also binds the 70-amino acid DSS1 which is needed for BRCA2 stability and is also crucial for the BRCA2 functioning in one of the homologous recombination pathways (52,53). Mutation c.9117G>A (p.Pro3039Pro) is located in exon 23 of BRCA2. This splicing mutation was shown to be truncating (54). By this mutation the OB2 functional domain of BRCA2 protein is affected most probably causing impaired repair of double-strand DNA breaks (51,55). Mutation c.9117G>A was identified in three tested members from one Slovene family. Proband (mother) was diagnosed with BC at the age of 49. Her two daughters were both identified as carriers; one diagnosed with OC at the age 24 and one still unaffected. Mutation c.9117G>A has been already found in several HBOC families of Western/Central/East European origin (56). The present report describes three novel BRCA pathogenic mutations that have been detected in Slovene HBOC families thereby contributing to the ever-expanding spectrum of the world-wide pathogenic BRCA mutations. Eleven previously known pathogenic mutations that have been discovered for the first time in Slovenia are also presented. For the probands bearing novel or pathogenic BRCA1 and BRCA2 mutations which have been detected in Slovene population for the first time, relevant clinical data and family history are given. Recent literature is reviewed to provide new data, which should help to create specific plans for preventive and/or therapeutic strategies for individual carriers according to their specific mutation.
The Navier-Stokes equation and solution generating symmetries from holography The fluid-gravity correspondence provides us with explicit spacetime metrics that are holographically dual to (non-)relativistic nonlinear hydrodynamics. The vacuum Einstein equations, in the presence of a Killing vector, possess solution-generating symmetries known as spacetime Ehlers transformations. These form a subgroup of the larger generalized Ehlers group acting on spacetimes with arbitrary matter content. We apply this generalized Ehlers group, in the presence of Killing isometries, to vacuum metrics with hydrodynamic duals to develop a formalism for solution-generating transformations of Navier-Stokes fluids. Using this we provide examples of a linear energy scaling from RG flow under vanishing vorticity, and a set of Z_2 symmetries for fixed viscosity. Introduction In 1974, Damour [1], and later in 1986, Thorne et al. [2], considered an observer outside a black hole, interacting with (perturbing) the event horizon. Surprisingly, they found that the observer will experience the perturbations of the "stretched" horizon as modes of a viscous fluid possessing electric charge and conductivity. This inspired a host of works over the years but the connection between gravitational physics and fluids became sharper with the advent of the AdS/CFT correspondence when Policastro et. al [5] related the shear viscosity of N = 4 super Yang-Mills theory to the absorption of energy by a black brane. This was the start of using the holographic principle, that is the correspondence between gravitational theories on (d + 1)-dimensional manifolds and d-dimensional quantum field theories, as a tool for calculating hydrodynamic properties. More recently, there has been a set of works that directly relate solutions of Einstein's equations of a particular type to solutions to the Navier-Stokes equations in one dimension less [6,7,8,9,10,11,12,13,14]. Later we will review the details of how this correspondence is derived but the essential flavour is as follows. One writes down a very particular ansatz for the metric in d + 1 dimensions which has undetermined functions, v i (x, t), P (x, t) and parameter, ν with the index i = 1, .., d − 1 i.e. over a (d − 1) subset of the d spacetime dimensions. Solving the Einstein equations then constrains the functions v i (x, t), P (x, t) to give a set of second order nonlinear differential equations for v i (x, t), P (x, t). This set of equations are the Navier-Stokes equations describing a fluid in d dimensions with pressure, P (x, t), fluid velocity field v i (x, t) and viscosity ν. Thus particular solutions to the Navier-Stokes equations provide particular solutions to the Einstein equations. It has been known since Buchdahl [17] that for manifolds with isometries Einstein's equations have solution generating symmetries. That is there are "hidden" symmetries of the equations that map from one solution to the other. In fact there is a vast set of these as described by Ehler [18] and Geroch [19]. The question we wish to pose in this paper is whether the solution generating symmetries of Einstein's equations can lead to solution generating symmetries in the Navier-Stokes equations? The procedure to determine this will be as follows: • impose a Killing symmetry in a spacetime that admits the metric ansatz corresponding to the Navier-Stokes equations; • carry out generalised Ehler's transformations that preserve the ansatz; • determine the induced transformation of the Navier-Stokes data i.e. 
v i , P, ν. The procedure could immediately fail if there were no generalized Ehler's transformations that preserve the metric ansatz required for the fluid/gravity correspondence. We will find that there are a finite set and we will be able to explore the transformations on the Navier-Stokes fields for different choices of Killing directions in spacetime. Along the way we will show that they are not part of the usual spacetime Ehler's transformations and yet they do produce solution generating transformations for the Navier-Stokes fields. The paper will try to be as self contained as possible and so we begin with a review of the necessary ideas in fluids; the Navier-Stokes equation; the fluid gravity correspondence; and solution generating symmetries in general relativity. We will then carry out the procedure described above for spatial, timelike and null Killing vectors to see what solution generating symmetries they correspond to in the Navier-Stokes equation. We end with some comments and ideas for future work. A reader familiar with the formalism of hydrodynamics and the Navier-Stokes equation may wish to skip directly to section 3 where we carry out the solution generating transformation in the gravity dual to see the resulting induced transformations on the solutions of the Navier-Stokes equation. Hydrodynamics The study of hydrodynamics is fundamental to vast areas of physics and engineering, owing to its origin as the long-wavelength limit of any interacting field theory at finite temperature. Such a limit needs a consistent definition. Consider a quantum field theory where quanta interact with a characteristic length scale corr , the correlation length. The long-wavelength limit simply requires that fluctuations of the thermodynamic quantities of the system vary with a length scale L much greater than corr , parameterized by the dimensionless Knudsen number For a fluid description to be useful in non-equilibrium states, we naturally require that L remain small compared to the size of the system. This is usually satisfied trivially by considering systems of infinite size. The long-wavelength limit allows the definition of a particle as an element of the macroscopic fluid, infinitesimal with respect to the size of the system, yet containing a sufficiently large number of microscopic quanta. One mole contains an Avogadro's number of molecules, for example. Each particle defines a local patch of the fluid in thermal equilibrium, that is, thermodynamic quantities do not vary within the particle. Away from global equilibrium quantities vary between particles as function of time τ and spatial coordinates x, combined as x a = (τ, x). The evolution of particles in the fluid is parameterized by a relativistic velocity u b (x a ), which refers to the velocity of the fluid at x a . It is well known [29] that the thermodynamic quantities, such as the temperature T (x a ) and the density ρ(x a ), are determined by the value of any two of them, along with the equation of state. The evolution of the system is then specified by the equations of hydrodynamics in terms of a set of transport coefficients, whose values depend on the fluid in question. Fluid flow is in general relativistic in that the systems it describes are constrained by local Lorentz invariance, and velocities may take any physical values below the speed of light. 
Applications at relativistic velocities are multitudinous: the dust clouds in galaxy and star formation; the flow of plasmas and gases in stars supporting fusion; the superfluid cores of neutron stars; the horizons of black holes are all described by hydrodynamics. Modelling black holes (and black branes in M/string theory) with hydrodynamics has now developed into a fundamental correspondence of central importance to our present study, as discussed in §1. Quarkgluon plasmas behave as nearly ideal fluids and are expected to have formed after the inflationary epoch of the big bang, are reproduced in collisions at the RHIC and LHC. Non-relativistic fluids are equally ubiquitous, somewhat more familiar, and constitute an endless list of phenomena from the atmosphere to the oceans. The fluid equations We begin with a discussion, adapted from [30], of the relativistic fluid described by the stress energy tensor T ab and a set of conserved currents J a I where I indexes the corresponding conserved charge. The dynamical equations of the d-spacetime dimensional fluid are For an ideal fluid, with no dissipation, the energy-momentum tensor and currents may be expressed in a local rest frame in the form where p is the pressure, q I are the conserved charges and g ab is the metric of the space on which the fluid propagates. The velocity is normalised to u a u a = −1. The entropy current is given by (3b) with the charge q being given by the local entropy density. The conservation of the entropy current illustrates the non-dissipative nature intrinsic to zero entropy production. In a dissipative fluid, there are corrections to (3). We must first take into account the interrelation between mass and energy to define the velocity field more rigorously. This is achieved by using the Landau gauge, which requires that the velocity be an eigenvector of the stress-energy tensor with eigenvalue the local energy density of the fluid (this is satisfied by the velocity normalisation for the ideal fluid). If the stress energy tensor gains a dissipative term Π ab , and the current a term Υ a I , this reads Π ab u a = 0 Υ a I u a = 0. Dissipative corrections to the stress tensor are constructed in a derivative expansion of the velocity field and thermodynamic variables, where derivatives implicitly scale with the infinitesimal Knudsen number (1). Recalling that the equations of motion for the ideal fluid are composed of relations between these gradients, we may express Π ab purely in terms of the derivative of the velocity (when charges are present this is only true to to first order). This can be iterated to all orders in the expansion. Now, the derivative of the velocity may be decomposed using the acceleration A a , divergence θ, a symmetric traceless shear σ ab , and the antisymmetric vorticity ω ab into the form and P ab = g ab + u a u b is a projection operator onto spatial directions. In the Landau frame, only the divergence and shear can contribute to first-order stressenergy tensor. A similar analysis for the charge current retains the acceleration, and if one includes the parity-violating pseudo-vector contribution the leading order dissipative equations of motion for a relativistic fluid are (2) with where η and ζ are the shear 1 and bulk viscosities respectively, χ IJ is the matrix of charge diffusion coefficients, γ I indicates the contribution of the temperature gradients and Θ I the pseudo-vector transport coefficients. 
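Since the displayed constitutive relations referred to above did not come through cleanly, the standard textbook forms they correspond to are sketched below. This is a reconstruction using common conventions (Landau frame, mostly-plus signature, u^a u_a = -1) and may differ in detail from the paper's own equations (3) and (4).

```latex
% Ideal fluid stress tensor and charge currents:
T^{ab} = \rho\, u^a u^b + p\left(g^{ab} + u^a u^b\right), \qquad J^a_I = q_I\, u^a .

% Decomposition of the velocity gradient in d spacetime dimensions,
% with spatial projector P_{ab} = g_{ab} + u_a u_b :
\nabla_a u_b = -\,u_a A_b + \sigma_{ab} + \omega_{ab} + \frac{\theta}{d-1}\, P_{ab},
\qquad A_b = u^c \nabla_c u_b , \quad \theta = \nabla_c u^c .

% Leading-order dissipative correction for an uncharged fluid (Landau frame):
\Pi^{ab} = -\,2\eta\, \sigma^{ab} - \zeta\, \theta\, P^{ab} .
```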
The transport coefficients have been calculated in the weakly coupled QFT in perturbation theory, whereas in the strongly coupled theory, a dual holographic description may be employed, see e.g. [3]. The incompressible Navier-Stokes equations In the non-relativistic limit defined by long distances, long times and low velocity and pressure amplitudes (see e.g [12]), the fluid equations (2) with (4) become the incompressible non-relativistic Navier-Stokes equations. In flat space and in the presence of an external electromagnetic field a i , these are where f ij = ∂ i a j − ∂ j a i is the field strength of a i . Ideal fluids are described by Euler's equations, where the kinematical viscosity ν (related to the shear viscosity) vanishes. We will mostly be concerned with fluid flow in the absence of external forces, where a i is zero. The Navier-Stokes fluid on a Rindler boundary A metric dual to the non-relativistic incompressible Navier-Stokes equations was first developed in [8] on the Rindler wedge, up to third order in the non-relativistic, small amplitude expansion detailed later in this section. An algorithm for generalising this metric to all orders was subsequently developed in [9], though terms calculated beyond third order are not universal. They receive corrections from quadratic curvature in Gauss-Bonnet gravity [11]. We summarise the construction in [9] here. Consider the surface Σ c with induced metric where the parameter √ r c is an arbitrary constant. One metric embedding this surface is which describes flat space ( fig. 1) in ingoing Rindler coordinates The hypersurface Σ c is defined by r = r c where r is the coordinate into the bulk. Allowing for a family of equilibrium configurations, consider diffeomorphisms satisfying the three conditions i) The induced metric on the hypersurface Σ c takes the form (6) ii) The stress tensor on Σ c describes a perfect fluid iii) Diffeomorphisms return metrics stationary and homogeneous in (τ, x i ). The allowed set is reduced to the following boost, shift and rescaling of x µ . First, a constant boost β i , Second, a shift in r and a rescaling of τ , These yield the flat space metric in rather complicated coordinates, The Brown-York stress tensor on Σ c (in units where 16πG = 1) is given by where are the extrinsic curvature and its mean, and n µ is the spacelike unit normal to the hypersurface. By imposing that the Brown-York stress tensor on Σ c gives that of the stressenergy tensor of a fluid we can identify the parameters of the metric (11) with the density, ρ, pressure, P and four-velocity u a of a fluid, as follows: The Hamiltonian constraint R µν n µ n ν = 0 on Σ c yields a constraint on the Brown-York stress tensor When this constraint is applied to the equilibrium configurations described above, one finds the equation of state is ρ = 0 (as above), or ρ = −2d(d − 1)p which occurs for a fluid on the Taub geometry [14]. Promoting v i and p to slowly varying functions of the coordinates x a , and regarding v i (τ, x j ) and p = r about equilibrium yields the metric Corrections appear in powers of 2 , so this is the complete metric to second order. The metric may now be built up order by order in the hydrodynamic scaling. Assume one has the metric at order n−1 , where the first non-vanishing component R (n) µν of the Ricci tensor appears at order n. By adding a correction term g (n) µν to the metric at order n, resulting in a shift in the Ricci tensor δR (n) µν , and requirinĝ the vanishing of the Ricci tensor is guaranteed to order n. 
Recalling that, in the hydrodynamic scaling, derivatives scale thus, one sees that corrections δR (n) µν at order n will appear only as r derivatives of g (n) µν . It is shown in [9] that, using the Bianchi identity and the Gauss-Codacci relations, integrability of the set of differential equations (16) defining δR (n) µν in terms of g (n) µν is given by imposing the momentum constraint, equivalent to the conservation of the stress tensor on Σ c , which is precisely the fluid equations of motion, to order n. The perturbation scheme contains several degrees of freedom. The gauge freedom of the infinitesimal perturbations for some arbitrary vector ϕ µ(n) (τ, x, r) at order n , which may be fixed by demanding that g rµ is that of the seed metric to all orders in . The x a -dependent functions of integration from equation (16) may be fixed by imposing the boundary form (6) of the metric on Σ c , and also requiring regularity of the metric at r = 0, which in this construction translates to the absence of logarithmic terms in r. Corrections to the bulk metric under these conditions then become where the F τ (τ, x) is related to redefinitions of the pressure and is fixed by defining the isotropic part of T ij to be to all orders. Applying the perturbation scheme to the seed metric yields to third order, which satisfies the vacuum Einstein equations if which are the Navier-Stokes equations with kinematical viscosity The corresponding corrections to the Navier-Stokes equations follow from conservation of the stress tensor on Σ c . Vector and scalar quantities are odd and even orders respectively in the scaling . Accordingly, corrections to the scalar incompressibility equation appear at even orders, and to the vector Navier-Stokes equations at odd orders. Duality in the context of holography The defining equations in general relativity are the Einstein field equations, and in the non-relativistic limit of hydrodynamics, the Navier-Stokes equations (5). Each is a set of non-linear partial differential equations whose solutions exhibit fantastically varied phenomenology. When approaching any complex physical system with a view to finding solutions, it is often advantageous to consider the symmetries, intensively studied in both of these systems since their conceptions. Beyond diffeomorphisms, the search in gravity has in general been somewhat limited [15,36], however in the presence of a spacetime isometry, the symmetry group becomes remarkably large [35], particularly for vacuum spacetimes. For symmetries of the Navier-Stokes equations see [31], and with regards to the conformal group [12,32]. In the light of the fluid/gravity correspondence, one may ask whether the symmetries of these systems are linked. In [37,38,39], they apply known symmetries of the Einstein equations to spacetimes with perfect fluid sources, constructing new spacetimes with the same equation of state, though not within a holographic framework. By drawing on the tools provided by these works and those in holography, we hope to develop a more general approach to the problem. The bulk provides an additional valuable degree of freedom, where the boundary sets the scene for the fluid evolution on the induced geometry. Moreover, we are now free to exploit the symmetries of the more extensive yet simpler vacuum geometries. It is these symmetries which we intend to holographically project to the fluid. 
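Before turning to the symmetry analysis, it is convenient to record the non-relativistic system referred to above. The following is a sketch of the standard form of the unforced incompressible Navier-Stokes equations (the paper's eq. (5) with a_i = 0) together with the hydrodynamic scaling used in the fluid/gravity literature; for the Rindler construction the text identifies the kinematical viscosity with the hypersurface position, ν = r_c.

```latex
% Unforced incompressible Navier-Stokes equations on the cutoff surface:
\partial_\tau v_i + v^j \partial_j v_i = -\,\partial_i P + \nu\, \partial^2 v_i ,
\qquad \partial_i v^i = 0 , \qquad \nu = r_c .

% Non-relativistic (small-amplitude, long-wavelength) scaling:
v_i \sim \epsilon , \qquad P \sim \epsilon^2 , \qquad
\partial_i \sim \epsilon , \qquad \partial_\tau \sim \epsilon^2 .
```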
In particular, we are interested in transformations between solutions to the Navier-Stokes equations arising from transformations between solutions to the vacuum Einstein equations: transformed metrics yield transformed fluid configurations. In this section, we discuss the work leading up to the spacetime Ehlers symmetry group of the vacuum Einstein equations with zero cosmological constant, itself contained within the generalized Ehlers group. We continue in §4 to apply the latter, in the presence of a Killing isometry, to fluids on the boundary of the Rindler space, thus deriving solution-generating transformations of the fluid velocity, pressure and viscosity (the latter defining the RG flow). We offer in §4.1.1 a selection of example transformations, including RG flow for zero-vorticity fluids (where one may relax this constraint), and Z_2 transformations for fixed viscosity, which we show in §4.3 in fact lie outside the spacetime Ehlers group.

Symmetry groups of the Einstein equations
Understanding the properties of the Einstein field equations has long been a subject of great theoretical interest, a sensible starting point being the inherent symmetries involved. To this end, Buchdahl [17] derived a form of duality in vacuum spacetime metrics, whereby an n-dimensional vacuum metric static with respect to a coordinate x^s (that is, g_{μν,s} = 0 and g_{as} = 0 for μ, ν ∈ {0, . . . , n}) generates a dual vacuum metric. It is this solution-generating property of spacetime isometries we wish to apply to solutions of the Einstein equations and holographically map to hydrodynamics. We have, however, a considerably larger symmetry group at our disposal. The authors of [16,18,19] developed the concept, culminating in the generalized Ehlers symmetry group of the Einstein equations, defined also for non-vacuum spacetimes. An extension exists [33,34] to dualities between vacuum spacetimes and those with electromagnetic backgrounds described by the Einstein-Maxwell equations, of relevance for magnetohydrodynamics.

The Ehlers group

The generalized Ehlers group
Define a vector field ξ = ξ^μ ∂_μ and a one-form W = W_μ dx^μ on a manifold with metric g = (g_{μν}). The generalized Ehlers group is defined in [16] by the transformation (24), where Ω² ≡ ξ^α W_α + 1 ≥ 1 and the inequality holds over the whole geometry. This group does not send vacuum metrics to vacuum metrics in general, but such transformations may be found in the spacetime Ehlers subgroup.

The spacetime Ehlers group
Let us restrict g to be some (3 + 1)-dimensional Lorentzian metric satisfying the vacuum Einstein equations and exhibiting some Killing isometry, and let us restrict ξ to define this Killing isometry, which is equivalent to the condition that the Lie derivative of the metric along ξ vanishes. The twist potential and Killing vector norm give the Ernst one-form, which is exact, equal to dς for some scalar ς (exactness is guaranteed by the vanishing Ricci tensor, see [40] p.164). One also defines a self-dual two-form using the Hodge dual operator *. The spacetime Ehlers group is then defined for (3 + 1)-dimensional Lorentzian metrics by (24), for W satisfying a condition in which a bar denotes complex conjugation and γ and δ are non-simultaneously-vanishing real constants which, as a pair, fix the gauge of W. The transformation defines an SL(2, R) group action on the Ernst scalar via the Möbius map (31).

4 Solution-generating transformations on the Navier-Stokes fluid
Consider those transformed metrics h(ξ, W, g) which preserve the functional form of g (it is clear that this is not the case in general).
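Before specialising to the Rindler fluid metric, it may help to recall the standard ingredients behind the Ernst scalar and its Möbius transformation mentioned at the end of the previous section. The expressions below are a sketch in common textbook conventions; signs and normalisations vary between references and may differ from the paper's eq. (31).

```latex
% Norm and twist of the Killing vector \xi in (3+1) dimensions:
\lambda = \xi^\mu \xi_\mu , \qquad
\omega_\mu = \epsilon_{\mu\nu\rho\sigma}\, \xi^\nu \nabla^\rho \xi^\sigma .

% In vacuum the twist one-form is closed, \omega_\mu = \nabla_\mu \omega,
% and one may form the complex Ernst potential
\varsigma = -\lambda + i\,\omega ,

% on which the Ehlers SL(2,R) acts by Mobius transformations
\varsigma \;\to\; \frac{a\,\varsigma + b}{c\,\varsigma + d} , \qquad
a d - b c = 1 , \quad a,b,c,d \in \mathbb{R} .
```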
In the case of the Rindler metric dual to the incompressible Navier-Stokes fluid, we define the parameters of g by the fluid velocity v i , pressure P , and boundary position r c within the bulk. In the transformed metric h, we define the transformed parameters byṽ i ,P andr c , denoted by '∼'. On satisfying the vacuum Einstein equations onΣ c , now at r =r c in the transformed geometry, the transformed metric will yield the incompressible Navier-Stokes equations in the transformed parameters Vitally, if (v i , P ) satisfy the Navier-Stokes equations with viscosity ν = r c , then the transformed velocity and pressure (ṽ i ,P ) represent a new set of solutions for viscosity ν =r c . That is, we look for a subset of the generalized Ehlers transformation acting on the fluid metric (21), obeying some Killing isometry, which corresponds to solution-generating transformations of the velocity and pressure, and RG flow parametrised by r c , of an incompressible Navier-Stokes fluid. The Rindler metric is just one fluid metric supporting flat background geometries on the boundary. We therefore only wish instead to retain the common features of such metrics; the metric gauge g µr , and the flat boundary metric of the form (6). The equation we wish to solve is thus whereg τ r = 1 +ṽ g ab |r c =γ ab , whereγ τ τ = −r c ,γ ai = γ ai . (34b) Transforming the fluid We are provided in (34) with sufficient information to derive the possible fluid transformations via the form of the one-form W . Preserving the vanishing of theg rr = g rr = 0 component of the metric we find, directly from (24), the two possibilities One may obtain an expression for W a by contraction of (24) with the boundary indices (a, b, . . .) of the Killing vector: (Note, here and in what follows ξ µ = g µν ξ ν , ie. it is lowered with the metric g µν and never withg µν ). This expression is uniquely defined only at the dual boundary Σ c , where we have defined the form ofg ab and W a becomes independent of the dual fluid velocity and pressure. These expressions diverge for null Killing vectors, where λ = 0. We cover this case shortly. One can see that the parameters of the fluid is determined, to all orders in , by g ar = g ar | rc . Consequently, the transformation in the fluid will be given by the transformation of these components. Evaluation atr c is necessary in order to circumvent the ambiguity in the dual metric, and also provides explicit fluid transformations. We begin with Killing vectors null at the dual boundary, λ|r c = 0, where one finds from contraction of (24) with the Killing vector, which yields the following transformation, accompanied by the preservation of a null Killing vector, ξ µ ξ νg µν = 0. For non-null Killing vectors we employ the relations derived by comparing (35) and (37), and found from contraction of (24) twice with the Killing vector. Inserting W r (35) and W a (36) into the Ehlers transformation (24) and employing (39) and (40), one findsg Energy scaling invariance from a bulk isometry We begin with an example of a (null) Killing vector into the bulk, ξ = ξ r (x µ )∂ r . The Killing equation components (L ξ g) ai = 0 yield Integrability of these equations requires firstly where we have used the Navier-Stokes equations to express the constraint in this form. Additionally, integrability requires vanishing vorticity to first order, which with incompressibility implies (44). Transformation (38) which is exact to all orders. 
It is trivial to show that the pair (ṽ_i, P̃) satisfies the incompressible Navier-Stokes equations (with viscosity r̃_c) if (v_i, P) does so (with viscosity r_c) for velocities satisfying (44) alone; vanishing vorticity imposes an unnecessary constraint and removes the dissipative term from the fluid equations. It is interesting to consider the problems of existence, uniqueness and regularity of the Navier-Stokes equations in this case. The divergence of (44) yields a vanishing mean square vorticity, which ensures that the class of solutions (ṽ_i, P̃) generated by (45) is regular. With respect to existence, the kinetic energy scales by a factor r̃_c/r_c and is thus bounded if there exists any solution satisfying (44) whose energy is finite.

The timelike Killing vector
One might expect, in the presence of a timelike Killing vector ξ = ∂_τ (it is sufficient for this discussion to consider stationary solutions), a transformation enacting time-reversal of the fluid, but this is not the case. This is explained by noting that time-reversal is enacted by redefining the viscosity by ν = ±r_c [8] rather than by changing r_c itself, because sending r_c → −r_c brings the fluid outside the causal region of the spacetime.

Fixed viscosity transformations
We turn to fixed-boundary (fixed-viscosity) transformations, where r̃_c = r_c. For Killing vectors null at the dual boundary, α = 0 and one recovers the identity. For non-null Killing vectors with α = 1, one finds the transformation (47), which defines a Z_2 group.

Spacelike Killing vectors
Consider a generic spacelike Killing vector ξ = ξ^k ∂_k. Under (47), the pressure is preserved, while the velocity transforms by a reflection in the hyperplane normal to the Killing vector and containing the point at which the velocity is defined.

Translational isometry
Consider ξ = c^k ∂_k, where the constants c^k are normalised to Σ_k (c^k)² = 1, and the corresponding isometries of the velocity and pressure follow from (47). The incompressibility condition and the Navier-Stokes equations in the transformed parameters are then satisfied by virtue of the incompressible Navier-Stokes equations in the original fluid parameters together with these isometries. This is valid for fluids of arbitrary dimension.

Rotational isometry
Consider a Killing vector ξ = −x² ∂_1 + x¹ ∂_2, corresponding to a rotational isometry in a d-dimensional fluid. In polar coordinates, where primed indices run from 3 to (d − 1) and the Killing vector becomes ξ = ∂_θ, the isometries are those of (53). Solutions satisfy (54); that is, the transformation sends η → −η (equivalently θ → −θ). The incompressible Navier-Stokes equations for the original fluid may be expressed in a form whose parity in η makes it clear that if there exists a fluid solution defined in terms of a pair (μ, η) by (54), then there also exists a solution parameterized by the pair (μ, −η). That is, the transformed fluid satisfies the incompressible Navier-Stokes equations. Again, this is valid for fluids of arbitrary dimension. We provide an example with a three-dimensional fluid solution satisfying the isometries (53), in which A, B and τ_0 are arbitrary non-vanishing constants and q(τ) is an arbitrary function of time. The duality is equivalent to sending τ_0 → τ_0 + iπ/2A.

Generalized versus spacetime Ehlers
In this section we discuss whether the fluid transformations of §4 belong to the spacetime Ehlers subgroup or only to the larger generalized Ehlers group.
If conjugation of the Ernst scalar is to belong to this map, the transformation parameters must satisfy a further condition.

Discussion

We have demonstrated how solution-generating transformations of the Einstein equations in the presence of a Killing vector may be applied to spacetimes holographically dual to hydrodynamics. Our focus has been on the incompressible Navier-Stokes fluid dual to vacuum Rindler spacetime, where we have uncovered a selection of fluid transformations: a linear energy scaling symmetry for solutions with vanishing vorticity (this constraint may be relaxed to (44)), deriving from RG flow of the fluid hypersurface through the bulk, and a Z_2 group of transformations for fixed viscosity (boundary), with explicit examples of reflection-like symmetry in translational and rotational fluid isometries. These transformations may not be remarkable from the perspective of hydrodynamics, but they show how part of the generalized Ehlers transformations can survive holography and give rise to transformations in the fluid dual. These fluid transformations, when applied to fluid metrics, will produce solution-generating transformations in the vacuum Einstein equations. However, the transformed metrics produced directly by our method are not necessarily vacuum. This apparent contradiction may be explained as follows. It is discussed in [42] how the electromagnetic field strength contribution to the Navier-Stokes equations (5) is determined by the projection of the electromagnetic field strength of the bulk spacetime along the unit normal to the hypersurface. If in the transformed spacetime this projected field strength vanishes, one will still recover the Navier-Stokes equations (32) (without forcing terms) in the dual parameters. In this way, it is not strictly necessary that the fluid metrics be vacuum to recover solution-generating transformations of the unforced incompressible Navier-Stokes equations. One can then be inspired to try the Harrison transformation, which is a solution-generating transformation in Einstein-Maxwell theory, to obtain new transformations in magnetohydrodynamics. A holographic relation of the sort described here has been constructed for magnetohydrodynamics in [41,42,43]. The Harrison transformation in the bulk may then lead to nontrivial transformations in magnetohydrodynamics, transforming between fluid velocity and magnetic potentials; this is the subject of current work. One can also study the dimensional dependence of these solution-generating transformations. In recent work [44], the difference in scaling for turbulence between three and four dimensions was studied holographically, with a large difference in qualitative behaviour. One could also examine backgrounds associated with a nontrivial chemical potential in the dual, for example rotating or charged black hole backgrounds. Essentially, in this paper we wish to open up the use of gravitational solution-generating symmetries in holography. It is encouraging that this did not immediately fail and that the fluid metric ansatz could be preserved with some residual transformations surviving, yet it is intriguing that these transformations did not give anything particularly new. The results for magnetohydrodynamics may prove more significant.
International Journal of Behavioral Nutrition and Physical Activity the Children's Eating Behaviour Questionnaire: Factorial Validity and Association with Body Mass Index in Dutch Children Aged 6–7 Background: The Children's Eating Behaviour Questionnaire (CEBQ) is a parent-report measure designed to assess variation in eating style among children. In the present study we translated the CEBQ and examined its factor structure in a sample of parents of 6-and 7-year-old children in the Netherlands. Additionally, associations between the mean scale scores of the instrument and children's body mass index (BMI) were assessed. Background Especially during the last few decades the prevalence rates of childhood overweight and obesity have reached epidemic proportions worldwide [1], and also in the Netherlands [2]. Obese children face difficulties in their social life and run a substantially increased risk of becoming our future generation of obese, chronically diseased adolescents and adults [3,4]. Despite widely held beliefs regarding the importance of factors promoting excessive weight gain in children, it still remains a challenge to discover the underlying child behaviours that might contribute to differences in weight status across children [5][6][7]. Unravelling these factors will inform the development of evidencebased intervention programs to prevent overweight and obesity in children. In the past, a number of psychometric instruments have been developed to assess eating behaviour in children, including the Children's Eating Behaviour Questionnaire (CEBQ) [7], the Dutch Eating Behaviour Questionnaire (DEBQ) [8,9], the Children's Eating Behavior Inventory (CEBI) [10] and the BATMAN (Bob and Tom's Method of Assessing Nutrition) [11]. The CEBQ is generally regarded as one of the most comprehensive instruments in assessing children's eating behaviour. The instrument was developed and validated in the United Kingdom, and recently the instrument has been validated in a Portuguese sample [6]. To our knowledge, no other validation studies have been performed on the CEBQ, but the instrument has been used for different research purposes, e.g., to examine associations with child body mass index (BMI) [6,12,13]; to compare appetite preferences in children of lean and obese parents [12,14]; to discover continuity and stability in children's eating behaviours across time [15]; and to examine eating behaviours of children with idiopathic short stature [16]. The CEBQ consists of the following eight scales. The scales food responsiveness (FR) and enjoyment of food (EF) reflect eating in response to environmental food cues. In response to these cues appetitive responses and eating rate have been found to strongly increase in overweight or obese children [5,7,13]. The scale desire to drink (DD) reflects the desire of children to have drinks to carry around with them, usually sugar-sweetened drinks [7]. Several studies found that BMI was positively associated with frequent consumption of sugar-sweetened drinks [17,18] and a decline in soft drink consumption would result in a reduction of overweight and obese children [19]. Satiety responsiveness (SR) represents the ability of a child to reduce food intake after eating to regulate its energy intake. Infants tend to be highly responsive to internal hunger and satiety cues, whereas this level of responsiveness decreases with advancing age [5,13,20]. 
Thus, during childhood, children will gradually lose the ability to effectively self-regulate energy intake, thereby promoting episodes of over-consumption and subsequently excessive weight gain. High scores on the scale slowness in eating (SE) is characterised by a reduction in eating rate as a consequence of lack of enjoyment and interest in food. Compared to their leaner counterparts, obese children have an increased consumption and have less reduction of their eating rate during the end of a meal [21]. Food fussiness (FF) is usually defined as rejection of a substantial amount of familiar foods as well as 'new' foods, thereby leading to the consumption of an inadequate variety of foods [22]. This type of eating style is characterised by a lack of interest in food [23], and slowness in eating [24]. Conflicting findings regarding the relationship between fussy eating and BMI in children have been found [23,[25][26][27]. The scales emotional overeating (EOE) and emotional undereating (EUE) can be characterised by either an increase or a decrease in eating in response to a range of negative emotions, such as anger and anxiety. Emotional overeating has been found to be positively related to child BMI, whereas emotional undereating was negatively related to child BMI [6,28]. The original CEBQ scale has been shown to have good internal consistency (Cronbach's alphas ranging from 0.72 to 0.91) [7], adequate two-week test-retest reliability (correlation coefficients ranging from 0.52 to 0.87) [7] and construct validity [5]. Principal Components Analyses showed that each scale had a single factor, which explained 50-84% of the variance, and an overall factor analysis resulted in a verification of the hypothesised (theoretical) scales [7]. The present study aimed to examine the factorial nature of the CEBQ in a Dutch sample of 6-and 7-year-old children. Specific objectives were to translate the CEBQ into the Dutch language, to assess its psychometric properties and to compare them with the original CEBQ, and to demonstrate its application in overweight-related studies by examining its association with the child's BMI. We hypothesised that overweight and obese children would have higher scores on 'food approach' subscales (i.e. FR, EF, EOE) and lower scores on 'food avoidant' subscales (i.e. SR, SE, EUE, FF) of the CEBQ. Measures The CEBQ was translated into Dutch by a team of four experts on eating behaviour at Maastricht University (the Netherlands) who are Dutch native speakers and fluent speakers of the English language (the two authors of this manuscript ES and SK, and two colleagues of the Department of Health Education and Promotion). Translations were cross-checked by this team and in case of inconsistencies between the translations, team meetings were held to discuss the particular item; for some issues, we contacted the developer of the instrument (Prof. Wardle) [7]. All translators approved the final translation. The CEBQ consists of 35 items comprising eight subscales, each containing 3 to 6 items. Parents are asked to rate their child's eating behaviour on a five-point Likert scale (never, rarely, sometimes, often, always; 1-5). Sample scale items include for example 'Given the choice, my child would eat most of the time', and 'My child leaves food on his/her plate at the end of a meal'. In table 1, all items of the CEBQ are displayed. Body Mass Index Parents were asked to report their children's height and weight to calculate BMI. 
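As described in the next paragraph, raw BMI values are converted into age- and gender-standardised z-scores against national reference data. Growth references of this kind are commonly distributed as LMS parameters, and the sketch below shows the generic LMS conversion; the function name and the reference values are hypothetical placeholders, not the actual Fourth Dutch National Growth Study (1997) tables.

```python
def bmi_z_score(bmi, L, M, S):
    """Generic LMS transformation: z = ((BMI/M)**L - 1) / (L*S), or ln(BMI/M)/S when L == 0."""
    from math import log
    if L == 0:
        return log(bmi / M) / S
    return ((bmi / M) ** L - 1.0) / (L * S)

# Hypothetical reference values for one age/sex stratum (illustrative only).
L_ref, M_ref, S_ref = -1.6, 15.4, 0.08
print(round(bmi_z_score(17.2, L_ref, M_ref, S_ref), 2))
```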
Specific age and gender BMI cutoff points were used to define underweight [29] and overweight/obesity [30]. Additionally, a child's BMI was converted to a standardised z-score, adjusting for age and gender, based on reference data of the Fourth Dutch National Growth Study (1997) [31]. Parental reported weight and height of their children was available for 115 (85.2%) respondents. Statistical procedures A Principal Components Analysis (PCA) with Varimax rotation was performed on all items of the CEBQ to determine if the original eight-factor structure (CEBQ) [7] would be replicated in our sample. Both internal reliability coefficients (Cronbach's alphas) and (average) corrected item-total correlations were calculated. Guidelines exist to interpret (average) corrected item-total correlations, which correct for the contribution of the items to the scale. For the present study, we used the guidelines by Nunnally, who considered that correlations above 0.30 are 'good' and correlations below 0.15 may be unreliable (i.e. because they are wrongly interpreted by the study participants and/or are do not measure the same construct as the subscale) [32]. The reliability estimates were compared with those found by previous validation studies [6,7]. Pearson's correlations were computed to evaluate relationships between mean item scale scores on each of the eight factors of the CEBQ originally found by Wardle et al. [7]. Interpretations were based on Cohen's descriptive guidelines [33], correlations between 0.5 and 1.0 being considered as large, correlations between 0.3 and 0.5 as medium, and correlations between 0.1 and 0.3 as small. Gender and age differences between scores were calculated using independent samples t-tests. A series of multiple linear regression analyses was conducted to examine associations between scores on the subscales of the CEBQ with children's BMI z-scores as the dependent variable. Every subscale of the questionnaire was entered into the analysis separately with the following covariables to correct for potential confounding: child's gender and age; parental education, ranging from 1 (lowest level of education) to 7 (highest level of education); and parental employment status, dichotomised into 1 (employed) and 2 (non-employed). Missing anthropometric data was present for 20 children, and therefore BMI z-scores of these children could not be calculated. Those missing BMI z-scores were replaced using the mean imputation method. The sample size of the current study (N = 135) enables the detection of an additional explained variance of 6% (ΔR 2 = .06) in the prediction of one unit change in BMI z-score, with a power of .80 (alpha .05). In addition, one-way analysis of variance for comparison by weight status was used to examine differences in scale scores by child BMI groups and to assess the possibility of a non-linear relationship between BMI and eating style constructs. BMI was categorised into three weight categories, underweight (N = 20; 17.4%), normal weight (N = 83; 72.2%), and overweight/obesity (N = 12; 10.4%; 10 overweight and 2 obese children grouped together to increase the statistical power). Factor analysis The factor analysis revealed a seven-factor solution, presented in table 1. The seven factors accounted for 62.8% of the total variance. The items from two scales (EOE and FR) loaded onto the same factor, which we propose to name 'overeating' (table 2). 
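For reference, the reliability statistics described under the statistical procedures (Cronbach's alpha and corrected item-total correlations) can be computed as in the minimal sketch below; the Likert response matrix is simulated toy data standing in for a single subscale, not the study sample.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items of the scale."""
    items = np.asarray(items, dtype=float)
    rest_sums = [np.delete(items, j, axis=1).sum(axis=1) for j in range(items.shape[1])]
    return [np.corrcoef(items[:, j], rest)[0, 1] for j, rest in enumerate(rest_sums)]

# Simulated 1-5 Likert responses of 135 parents to a 4-item subscale (toy data).
rng = np.random.default_rng(seed=0)
trait = rng.integers(1, 6, size=(135, 1))
responses = np.clip(trait + rng.integers(-1, 2, size=(135, 4)), 1, 5)

print(round(cronbach_alpha(responses), 2))
print([round(r, 2) for r in corrected_item_total(responses)])
```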
Most of the scale items loaded as expected and their factor loadings were comparable to those obtained in the original study by Wardle et al. [7] and the study by Viana et al. [6]. However, four items deserve special attention. First, the item 'my child is always asking for food' did not load onto the expected factor FR (.05), but onto EF (.53), and was therefore incorporated in the EF factor. Second, the item 'my child eats more when annoyed' loaded most highly onto the EUE factor (.55), but has been retained on the EOE scale on theoretical grounds (factor loading .47), as part of the combined OE factor. Third, the item 'my child eats more and more slowly during the course of a meal' loaded most highly onto the SR factor (.63), but has been retained on the SE factor (.39) to provide better comparability with the original factor structure of the CEBQ. Finally, the item 'my child is difficult to please with meals' also loaded onto the SR factor (.44). Separate Principal Components Analyses (PCAs) on the seven final scales showed that six of them constituted a single factor with an eigenvalue greater than one, accounting for 51-70% of the variance across the scales. The exception was the overeating scale, which had two factors with an eigenvalue greater than one (revealing the original FR and EOE scales), accounting for 42% of the variance. In spite of our seven-factor solution, we performed further statistical analyses on the eight subscales as defined by Wardle and colleagues [7], in order to allow comparison with the original subscales and in line with the previous Portuguese study [6].

Reliability

Reliability coefficients (Cronbach's alphas) for the different scales of the instrument are presented in table 2. The coefficients ranged from .75 to .91 for the CEBQ subscales, which are all within acceptable ranges. The average item-total correlations, correcting for the contribution of the items to the scale, suggested adequate consistency of item content within the CEBQ subscales (.51-.75) (table 2). Moreover, all corrected item-total correlations are considered 'good' (ranging from .39 to .84) [32].

Age and gender differences

Independent samples t-tests were conducted to examine age and gender variations in children's eating behaviour (table 3). There were no statistically significant differences between the responses of parents of 6-year-old children and those of parents of 7-year-old children. Significant gender differences were found: boys scored higher on fussy eating (FF) than girls (mean 3.1 (SD 0.9) versus 2.6 (0.9), p < 0.001).

Correlations between scales

The correlations between subscales of the CEBQ (table 4) indicate that the 'food approach' subscales (FR, EF, and EOE) and the 'food avoidant' subscales (SR, SE, EUE, and FF) each tend to be positively inter-correlated within their own group.
Among the 'food approach' subscales, especially the FR-EF and FR-EOE correlations were found to have a large effect size. Moreover, a large correlation was found between the 'food avoidant' subscales SR and SE, whereas medium correlations were found for SR-FF and SE-FF. The 'food approach' subscales and the 'food avoidant' subscales were found to be negatively correlated. Large negative correlations were found for EF-SR, EF-SE, and EF-FF, whereas medium correlations exist for FR-SR and FR-SE. The only exception among these negative correlations was the medium-sized positive correlation between the 'food approach' EOE factor and the 'food avoidant' EUE factor. The correlation coefficients were compatible with the findings of Wardle et al. [7] and Viana et al. [6].

Weight differences

A series of independent regression analyses was used to model each subscale of the CEBQ separately, with child BMI z-scores entered as a continuous dependent variable, while correcting for potential confounding variables (child's gender and age, parental educational level, and parental employment status). In general, child BMI z-scores showed a linear increase with the 'food approach' subscales of the CEBQ (β 0.15 to 0.22), and a decrease with the 'food avoidant' subscales (β -0.09 to -0.25) (table 5). Significant relationships were found for FR and EF (p < 0.05), and for SR and SE (p < 0.01). The results regarding differences in scale scores across child BMI groups (one-way analysis of variance) are graphically displayed in figures 1 and 2, illustrating mean 'food approach' and mean 'food avoidant' subscale scores by child BMI category.

Discussion

The present study showed good psychometric properties of the Dutch translation of the CEBQ in terms of factor structure, internal reliability and correlations between subscales, corresponding very closely to the original study [7] and a recent Portuguese validation study of the CEBQ [6]. In our sample of 6- and 7-year-old Dutch children a seven-factor structure was the best interpretable solution, which explained 62.8% of the variance. In parallel with earlier studies [6,7], the original eight-factor structure could not be perfectly replicated. In comparison to the original factor structure [7], the scales FR and EOE clustered together when the psychometric properties were ascertained in the present Dutch sample. The FR and EOE scales were highly correlated, and combining them into one scale ('overeating') increased the internal consistency coefficient. However, caution is needed when combining those two scales, since they may differentiate in older age groups, and it should be noted that the original FR and EOE scales were revealed in a separate Principal Components Analysis on the combined scale. Cross-sectional associations between the mean scale scores and BMI showed that overweight children displayed weaker satiety responses and stronger appetite responses to food compared to their leaner counterparts. This result is in line with the Portuguese study [6]. In addition, overweight children appeared to apply poorer eating regulatory mechanisms and to have an increased eating rate compared to normal-weight children. The positive association of the scales FR and EF with the child's BMI z-score is consistent with research demonstrating that children with a higher BMI are highly responsive to environmental food cues [e.g., 5-7,13,28]. SR and SE were inversely associated with child BMI z-score, similar to the recently published studies of Carnell and Wardle [13] and Viana et al. [6].
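A minimal sketch of the covariate-adjusted regression behind these associations (one CEBQ subscale at a time predicting the BMI z-score) is given below. All column names and values are hypothetical stand-ins for the study data, and reproducing the standardised betas reported above would additionally require z-scoring the predictor and outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy data frame standing in for the study variables (all names and values hypothetical).
rng = np.random.default_rng(seed=1)
n = 135
data = pd.DataFrame({
    "bmi_z": rng.normal(0.0, 1.0, n),
    "food_responsiveness": rng.uniform(1.0, 5.0, n),  # mean score on one CEBQ subscale
    "sex": rng.integers(0, 2, n),                     # 0 = girl, 1 = boy
    "age": rng.integers(6, 8, n),                     # 6 or 7 years
    "parental_education": rng.integers(1, 8, n),      # 1 (lowest) to 7 (highest)
    "parental_employed": rng.integers(1, 3, n),       # 1 = employed, 2 = non-employed
})

# One subscale entered per model, adjusted for the same covariates as in the study.
predictors = ["food_responsiveness", "sex", "age", "parental_education", "parental_employed"]
X = sm.add_constant(data[predictors])
fit = sm.OLS(data["bmi_z"], X).fit()
print(fit.params["food_responsiveness"], fit.pvalues["food_responsiveness"])
```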
In the current study, EUE and FF were found to have the weakest associations with the BMI z-score. This result parallels those reported by Viana and colleagues [6], suggesting that these eating behaviours are less strongly related to child weight. Moreover, this low, non-significant association of fussiness with the child's BMI resembled findings of other studies [23,25,26]. More studies applying the CEBQ cross-culturally are needed to confirm these findings. A recently published study in the Netherlands [9] suggested that emotional undereating was a more salient dimension for young children than emotional overeating. Young children react to emotional distress (loss of appetite when feeling e.g. upset or anxious) with a biologically natural response, which includes a reduction of gut activity, thereby reducing children's food intake [34]. Indeed, consistent with findings from previous research [7,9], we found a low mean scale score on the EOE scale, confirming that eating in response to emotional stressors is quite uncommon in young children. In addition, our results support the psychosomatic theory [35,36], which posits that people overeat as a way of coping with emotional stressors based on experiences learned early in life. Our study indicates that this learned response to distress is not yet well established in children as young as 7 years of age (see also [15]). In contrast to the studies of Wardle and colleagues [7] and Ashcroft and colleagues [15], no age effects were found for the CEBQ subscales. This may well be due to the narrow age range in our study (29 months), whereas the age range in the studies of Wardle et al. [7] and Ashcroft et al. [15] was at least 4 and 6 years respectively. Similar to the findings reported by Wardle et al. [7], we found gender differences for FF, with boys scoring higher on fussy eating than girls. However, we also found significant differences for EOE (boys emotionally overeat more often than girls) and EF (girls enjoying food more often than boys). Since many differences in eating behaviours between boys and girls emerge during the teenage years, it would be advisable to track the development of gender differences in eating styles from early childhood onwards. Additionally, more research is needed to assess the exact role of gender in child eating behaviours, possibly in interaction with parental feeding styles [37]. Recently, evidence has been found regarding the heritability of certain appetitive traits known to be related to the development of obesity. Carnell and colleagues [38] found evidence for a strong genetic influence on satiety and food cue responsiveness in children. In addition, Wardle et al. [39] have shown that genetic variants could contribute to lower sensitivity to satiety cues. These genetic influences on children's appetite responses indicate the importance of identifying high-risk children in early childhood, since they are more likely to overeat when encountering obesogenic environments. The present study has several limitations that should be acknowledged. First, factor-analytic procedures have to be repeated on a larger sample of Dutch 6- and 7-year-olds to replicate our findings. In addition, considering the small sample size, confirmation regarding the associations between various eating styles and BMI in Dutch children aged 6 and 7 is needed.
Second, the response rate was relatively low (mean 41.9%) and families with lower levels of education were relatively underrepresented in the current study. Another limitation was that the children's weight and height were parentally reported and not directly measured. Compared with measured weight and height, parents of 4-year-old children have been shown to slightly underestimate their children's weight and overestimate height, especially if their child was overweight or obese, whereas parents of underweight children tended to overestimate weight [40]. Hence, our study reported slightly lower percentages of overweight/obesity (10.4%) compared to the Dutch reference population of children aged 6 and 7 (2002-2004: ranging from 12.5% to 18.7%) [41]. It is likely that the present study yielded underestimates of associations between the instruments' scale scores and BMI, because of the parental reported nature of this study. In addition, there is a potential bias if parents who did not complete the questions regarding their children's weight and height had responded differently to distinct subscales than parents who completed those questions. However, except for DD, with slightly higher DD scores in those with missing height and weight data than in those with data present, no differences on any of the subscales were present. Finally, due to the cross-sectional nature of the study, inferences regarding causality cannot be made. Longitudinal and experimental study designs are needed to strengthen inferences, and assess the exact role of children's eating behaviours in the aetiology of obesity. Conclusion This study is the first to evaluate the factor structure of the CEBQ in a Dutch population among parents of children aged 6 or 7. In summary, the findings of the present study suggest that the instrument is valuable for identifying specific eating styles, which can be seen as important and modifiable determinants implicated in the development and maintenance of overweight and obesity. The identification of such variables is a prerequisite to gain insight into the behavioural pathways to obesity, and subsequently for the development of evidence-based intervention programs to prevent obesity in young children. Further longitudinal studies are needed to assess the role of eating behaviours in the development of obesity during childhood and into adulthood.
External auditory canal haemorrhage as the first sign of internal carotid artery pseudoaneurysm, a rare case: a case report Assessing the cause, severity of bleeding and strategies to control bleeding is crucial. We describe a rare case of a patient who was presented with epistaxis and left ear haemorrhage, as a probable complication of a ruptured internal carotid artery pseudoaneurysm. The massive haemorrhage compelled blood transfusion and clinical intervention. The diagnosis of internal carotid artery (ICA) pseudoaneurysm measuring 2.9 cm x 3.7 cm was concluded by computed tomography. Several coils were used to embolize the internal carotid artery pseudoaneurysm and arrest the bleeding with the guidance of an angiography. Coiling the pseudoaneurysm is highly recommended. Yet, the best methods to completely treat aneurysm are still in question. After the clinical intervention, the patient remained symptom-free and no episodes of bleeding were noted. Introduction Vascular lesions are severe complications caused by an invasive tumour, blood dyscrasia, penetrating trauma or blunt, or iatrogenic origin [1]. Pseudoaneurysm is an unusual vascular complication as a result of a partial injury of an arterial vessel wall, which causes blood flow via the laceration into the neighbouring tissues. This continuous leakage results in a slowly enlarging mass that results in a pseudoaneurysm over time [2]. Various clinical manifestations of bleeding from fatal to moderate can take place resulting in cranial or central nerve deficit. Considering its lifethreatening course, the attending doctor has a limited time to identify and treat these lesions. We present a case of epistaxis and massive haemorrhage of the ear due to internal carotid artery pseudoaneurysm, a complication likely stemming from surgical debridement of necrotic fasciitis of the left side of the neck performed previously. Patient and observation A 66-year-old woman presented with epistaxis and haemorrhage of the left ear without any obvious inducement for a period of one week. The patient has a 7 year history of hearing loss, mastoiditis, left tympanic membrane perforation and cervical necrotizing fasciitis which prompted a surgical debridement. A pseudoaneurysm was not observed during that time. The coagulation profile revealed that the haemoglobin level on admission was 60 g/L. An otoscopic examination revealed a blood-stained left external auditory canal. The tympanic membrane was intact. A laryngoscope examination revealed blood stained secretions in the posterior pharyngeal wall. Left pharyngeal oedema and a swollen uvula was observed. Neck contrast-enhanced computed tomography (CT) scan showed a slightly high-density shadow visible in the left parapharyngeal space, pharyngeal space and internal carotid artery area measuring 2.9 cm x 3.7 cm ( Figure 1). Oropharynx and laryngopharynx were compressed and narrowed. The patient underwent ICA coil occlusion. The angiogram was obtained to confirm the pseudoaneurysm and check the flow (Figure 2 and Figure 3). With the help of a 5F catheter and a guide wire, the pseudoaneurysm was successfully embolized using coils. Postembolization angiography revealed a complete exclusion of the pseudoaneurysm (Figure 4). The patient was given a blood transfusion. Follow-up evaluations revealed no recurrent episodes. 
Discussion

Treatment of a pseudoaneurysm presenting with haemorrhage from the external auditory canal using coil embolization is effective and a safe alternative to conventional surgical ligation of the affected artery, as illustrated in our case. No patient has been reported with associated functional impairment after a coil embolization procedure. In our case, iatrogenic injury and trauma are the possible causes of the pseudoaneurysm. The diagnosis of a pseudoaneurysm should rely heavily on the history of prior injury and on physical examination. Physicians should refrain from diagnostic needle aspiration because of the risk of bleeding and the difficulty of controlling such bleeding in a confined orifice. Contrast-enhanced computed tomography (CT) can show a round lesion with vascular enhancement; it can outline the site and extent of the lesion and also disclose any coexisting pathologies that cannot immediately be diagnosed clinically. Still, CT has limited diagnostic sensitivity in some cases because of artefacts from metallic fragments. Despite these limitations, we continue to use CT as a basic screening option [3,4]. The clinical diagnosis of a pseudoaneurysm was confirmed by carotid angiography before any treatment was given. Angiography is preferred once the lesion has been assessed and endovascular treatment is a reasonable alternative. Endovascular, surgical and conservative management are the treatment options for a pseudoaneurysm [5]. We decided against the duplex-guided thrombin injection approach because the distribution of the thrombin is not well controlled and may result in complications [6], even though it can also be a useful option for managing a pseudoaneurysm [4]. Catheter-based embolization is a safe, rapid and effective method for the treatment of a pseudoaneurysm. Various agents are used for embolization therapy, such as gelatin sponges and coils [5]. Benefits of an endovascular approach include the avoidance of wound-associated complications and facial scars. In our hospital, a pseudoaneurysm of the ICA or its branches is treated by occlusion of the involved artery with coil embolization across the neck of the pseudoaneurysm, because the artery is relatively small.

Conclusion

An ICA pseudoaneurysm should be considered in cases of severe epistaxis with a history of neck injury, even if the injury occurred many years earlier. A pseudoaneurysm of the ICA or its branches presenting with bleeding from the external auditory canal is rare in the head and neck. This case illustrates that such a pseudoaneurysm can be treated by coil embolization in a safe, fast and effective manner.

Figure 1: neck CT showing a high-density shadow measuring 2.9 cm × 3.7 cm at the area of the ICA, parapharyngeal space and pharyngeal space.
Ameliorating Huntington ' s Disease by Targeting Huntingtin mRNA To date there are 9 known neurological diseases caused by an expanded polyglutamine (polyQ) repeat, with the most prevalent being Huntington’s Disease (HD) (Cummings & Zoghbi, 2000). HD is a progressive autosomal dominant disorder. It is caused by a CAG repeat expansion in the HTT gene, which results in an expansion of a polyQ stretch at the Nterminal end of the huntingtin (htt) protein. This polyQ expansion plays a central role in the disease and results in the accumulation of cytoplasmic and nuclear aggregates. In this chapter we will discuss wild-type htt function and the gain of toxic function of mutant htt in HD. Currently no treatment is available to delay onset or slow disease progression. However, recently developed RNA modulating therapies have great potential to lower mutant htt levels in HD. Already promising results in animal and human studies for other neurodegenerative disorders have been obtained, from which HD research can learn. Introduction To date there are 9 known neurological diseases caused by an expanded polyglutamine (polyQ) repeat, with the most prevalent being Huntington's Disease (HD) (Cummings & Zoghbi, 2000). HD is a progressive autosomal dominant disorder. It is caused by a CAG repeat expansion in the HTT gene, which results in an expansion of a polyQ stretch at the Nterminal end of the huntingtin (htt) protein. This polyQ expansion plays a central role in the disease and results in the accumulation of cytoplasmic and nuclear aggregates. In this chapter we will discuss wild-type htt function and the gain of toxic function of mutant htt in HD. Currently no treatment is available to delay onset or slow disease progression. However, recently developed RNA modulating therapies have great potential to lower mutant htt levels in HD. Already promising results in animal and human studies for other neurodegenerative disorders have been obtained, from which HD research can learn. Huntington's Disease HD is an autosomal dominantly inherited neurodegenerative disorder. HD is rare, but more common in Western countries. The prevalence of HD in America is approximately 5 in 100,000 (Shoulson & Young, 2011) and in Europe, the prevalence of HD may be even higher with estimates in England and Wales as high as 12 in 100,000 individuals (Rawlins, 2010). Post-mortem studies show that there is a 10-20 percent weight reduction in HD brains (Vonsattel et al., 1985). Neurodegeneration occurs throughout the forebrain with the GABAergic medium spiny neurons of the striatum as its first prominent victim, and to a lesser extent neurons in the cerebral cortex (Levesque et al., 2003). Severe cell loss in the striatal complex, the caudate nucleus and putamen results in striatal atrophy. This is accompanied by an enlargement of the lateral ventricles. The medium spiny projection neurons, containing enkephalin, are more susceptive to degeneration than substance P containing neurons while interneurons seem to be spared (Walker, 2007). With disease progression, degeneration expands throughout the HD brain and other structures become affected (Vonsattel et al., 1985). Cortical atrophy is characterized by thinning of the cerebral cortex and the underlying white matter. Neuronal loss is abundant in cortical layers III, V and VI but is also prominent in the CA1 region of the hippocampus, with a reduction of about 9 percent (Rosas et al., 2003). 
Disease onset usually occurs around midlife and is clinically characterized by a combination of symptoms: cognitive impairments, movement abnormalities, and emotional disturbances. Motor symptoms of HD include chorea and occasionally bradykinesia and dystonia (Tabrizi et al., 2009). Choreic movements, recognized as involuntary and unwanted movements, start in the distal extremities. During the course of HD these movements become more profound and eventually all other muscles of the body are affected. These symptoms can initially appear as lack of concentration or nervousness and unsteady gait (Kremer et al., 1992). Psychiatric symptoms often precede the onset of motor symptoms. Irritability is commonly one of the first signs and occurs throughout the course of the disease. Other psychiatric symptoms involve anxiety, obsessive and compulsive behavior while apathy and psychosis can appear in advanced stages. However, the most frequent psychiatric symptom is depression (Craufurd et al., 2001). Like psychiatric symptoms, cognitive symptoms can be present prior to the onset of the motor symptoms. The cognitive symptoms comprise mainly impairment in executive functions, including abstract thinking, problem solving, and attention (Snowden et al., 2002). Furthermore, the ability to learn new skills is affected (Paulsen et al., 2001). Altogether these symptoms substantially impede social and professional functioning. Eventually patients are incapable to adequately perform daily activities finally leading to progressive disability, requiring full-time care, followed by death (Simpson, 2007). Death generally occurs 15 to 20 years post diagnosis due to complications such as pneumonia, falls, dysphagia, heart disease or suicide. The disease is caused by a CAG trinucleotide repeat expansion within the coding region of the HTT gene. The HTT gene was the first autosomal disease locus to be mapped by genetic linkage analysis in 1983 (Gusella et al., 1983) on the short arm of chromosome 4 (4p16.3). The huntingtin protein (htt) was found to be ubiquitously expressed throughout the body, with highest expression in testis and brain (Strong et al., 1993), however, cells in the brain are specifically vulnerable to the toxic function of mutant htt. The CAG repeat expansion in the HTT gene results in an expanded polyQ repeat in the htt protein (The Huntington's Disease Collaborative Research Group, 1993). When the number of CAG repeats exceeds 39, the gene encodes a mutated form of the htt protein that is prone to aggregation. Alleles ranging 36 to 39 repeats, lead to an incomplete and variable penetrance of the disease or to a very late onset (McNeil et al., 1997). Repeat numbers exceeding 55-60 result in clinical manifestation of the disease before the age of 20, known as Juvenile Huntington's Disease (JHD) (Andresen et al., 2007) and both sexes are affected with the same frequency (Walker, 2007). Intergenerational CAG changes are extremely rare on normal chromosomes but on expanded chromosomes changes in CAG size take place in approximately 70 percent of meioses and expansion is more likely via the paternal line (Kremer et al., 1995). There is a strong inverse correlation between repeat numbers and the age of onset of the disease. The repeat length accounts for approximately 70 percent of the variance in age of onset (Roos, 2010). However, no correlation with repeat size is apparent for the progression and duration of the disease. 
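As a toy illustration of the repeat-length ranges quoted above, the sketch below maps a CAG repeat count onto an approximate category; the cut-offs follow the numbers mentioned in the text, the function is hypothetical, and it ignores factors (such as repeat interruptions) that matter for real clinical interpretation.

```python
def classify_htt_cag(cag_repeats: int) -> str:
    """Rough category for an HTT allele by CAG repeat length (cut-offs as quoted in the text)."""
    if cag_repeats < 36:
        return "normal range"
    if cag_repeats <= 39:
        return "incomplete penetrance / possible very late onset"
    if cag_repeats < 55:
        return "full penetrance, typically adult onset"
    return "very long repeat; juvenile onset (before age 20) reported for ~55-60 and above"

for n_repeats in (21, 37, 44, 72):
    print(n_repeats, "->", classify_htt_cag(n_repeats))
```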
Furthermore, neuropathological changes, such as atrophy and inclusion load are clearly correlated with the CAG repeat number. For patients, only symptomatic treatment is available and a treatment to slow down the progression or delay the onset of the disease remains elusive. Huntingtin protein When the HTT gene was discovered in 1993, the htt protein had an unknown function. Since then, enormous research efforts have revealed many functions of the wild-type protein (discussed in the present paragraph) and many toxic gain of functions of the mutant protein (discussed in the next paragraph). Wild-type htt is mainly localized in the cytoplasm, although a small proportion is present in the nucleus (de Rooij et al., 1996;Kegel et al., 2002). The protein is known to be associated with microtubules, the plasma membrane, Golgi complex, the endoplasmic reticulum, and mitochondria. Furthermore htt is associated with vesicular structures, such as clathrincoated and non-coated vesicles, autophagic vesicles, endosomal compartments or caveolae (Kegel et al., 2005;Strehlow et al., 2007;Rockabrand et al., 2007;Atwal et al., 2007;Caviston et al., 2011). Three of the first 17 amino acids at the amino terminus of htt are lysines, which are targets for post translational modifications that regulate htt half-life and are proposed to be involved in targeting htt to various intracellular membrane-associated organelles (Kalchman et al., 1996;Steffan et al., 2004;Kegel et al., 2005;Atwal et al., 2007;Rockabrand et al., 2007). The first 17 amino acids of htt have also been suggested to act as nuclear export signal (NES) by interaction with the nuclear pore protein translocated promoter region (Tpr) that then transports N-terminal htt fragments out of the nucleus (Cornett et al., 2005). The polyQ repeat starts at the 18th amino acid and is thought to form a polar zipper structure, which has been implicated in the interaction between different polyQ-containing transcription factors (Perutz et al., 1994;Harjes & Wanker, 2003). The polyQ stretch is followed by a polyproline repeat, which is thought to be involved in keeping the protein soluble (Steffan et al., 2004). Additionally, three main HEAT (htt, elongation factor 3, protein phosphatase 2A, and the yeast PI3-kinase TOR1) repeat motifs are identified which are known to form superhelical structures and are involved in protein-protein interactions (Takano & Gusella, 2002;Li et al., 2006). Htt is palmitoylated at the cysteine residue 214 by htt interacting protein (Hip) 14, which is thought to be involved in htt trafficking (Huang et al., 2004). Htt has various proteolytic cleavage motifs, with a hotspot between amino acid 500 and 600, which are recognized by various proteases, such as caspases 1, 3, 6, 7 and 8 and calpain Wellington et al., 2002;Kim et al., 2006). In contrast to mutant htt, the significance of wild-type htt cleavage is not completely clear. Mutant htt gain of toxic function in HD Expanded polyQ proteins are known to undergo conformational changes, which result in the hallmark of polyQ disorders, protein aggregates. The aggregates can already be found before the onset of the first symptoms (Weiss et al., 2008). Remarkably, there is growing evidence suggesting that these aggregates are not good indicators for disease onset and www.intechopen.com progression (Wanker, 2000;van Roon-Mom et al., 2006). The rate of aggregate formation is correlated to the length of the polyQ repeat (Legleiter et al., 2010). 
Whether accumulation of these aggregates is neurotoxic or neuroprotective is still under debate since evidence also suggests that soluble mutant htt is the main toxic component (Davies et al., 1997;Saudou et al., 1998;Arrasate et al., 2004). While the expanded polyQ repeat displays pathogenic properties it is probably not essential for normal function (Clabough & Zeitlin, 2006). Mutant htt is more disposed to proteolysis and it was shown that small N-terminal htt fragments are more toxic than full length mutant htt (Cooper et al., 1998). Proteolytic cleavage of mutant htt results in nuclear localization of toxic N-terminal mutant htt fragments. These N-terminal mutant htt fragments are important in the pathological process. Mutant htt fragments within the striatum of HD brains clearly differ from those of control brains, suggesting cleavage is disease specific (Mende-Mueller et al., 2001) and htt caspase-6 resistant HD mice did not show neuronal dysfunction (Graham et al., 2006). Various transcription factors have been found to co-localize with htt aggregates, such as TATA box binding protein (TBP), CREB binding protein (CBP) and p53 (Steffan et al., 2000;van Roon-Mom et al., 2002). These co-aggregated proteins can no longer assert their normal function and could thereby contribute to HD pathology (Nucifora, Jr. et al., 2001) Mutant htt is also suggested to act as pro-apoptotic factor triggering cell death. Htt is found to bind to the pro-apoptotic factor p53. Interestingly, p53 deficient HD mice displayed increased striatal inclusion body formation (Ryan et al., 2006). Expression of mutant htt in p53 deficient mice improved the lifespan probably by increased apoptosis initiated by mutant htt (Ryan & Scrable, 2008). In HD the fusion machinery and axonal transport are impaired. Accumulated N-terminal fragments block the axonal machinery, resulting in transport defects (Gunawardena et al., 2003). Endocytosis is thought to be impaired since the synaptic vesicle protein PACSIN1 has an altered subcellular location in early stage HD patients (Modregger et al., 2002). Finally, various proteins involved in exocytosis are known to have decreased expression levels in HD patients. Proteins involved in docking and fusion of vesicles show reduced transcript expression, suggesting a defect in the neurotransmitter release machinery in HD patients (Smith et al., 2007). N-terminal mutant htt fragments are found to be associated with the surface of mitochondria in transgenic and knock-in HD mice (Panov et al., 2002;Orr et al., 2008). The accumulation of mutant htt on mitochondria is increasing with age and correlates with disease progression. This impaired mitochondrial trafficking by N-terminal mutant htt could lead to decreased ATP supply in nerve terminals (Orr et al., 2008). Mutant htt is also suggested to be involved mitochondrial energy metabolism defects. Metabolic energy defects could be the result of mutant htt's capability to induce mitochondrial permeability transition pore opening. This leads to low mitochondrial membrane potential and high glutamate transmission, resulting in overactive glutamate NMDA receptors (excitotoxicity) (Choo et al., 2004). Abnormal mitochondrial respiratory chain function leads to reduced ATP levels and subsequent partially depolarized membrane. This voltage change leads to chronic calcium influx and activation of proteases, causing more reactive oxygen species (ROS) production. 
Further, increased ROS production gives rise to oxidative stress and could contribute to the vicious circle (Browne & Beal, 2006 Loss of wild-type function in HD As described above, the main cause of HD is a gain of toxic mutant htt function. Since various functions and post-translational modifications of htt are altered in HD, loss of wildtype htt function could also be involved. Htt expression is important for normal cellular function since knock-out of the homologous htt mouse gene was found to be early embryonic lethal (Zeitlin et al., 1995). Previous studies have shown that approximately 50% of htt protein level is required to maintain cell functionality (Dragatsis et al., 2000). Next to embryonic development, htt is also involved in regulation of apoptosis, transcription, intracellular transport and BDNF transcription (Zuccato et al., 2001;Imarisio et al., 2008). Wild-type htt is reported to act as protector of the brain cells from apoptotic stimuli (Rigamonti et al., 2000). Reduced wild-type htt expression in transgenic HD mice resulted in worsening of the behavioural deficits and survival. In addition, no severe striatal abnormalities were visible in those HD mice, which could mean that the striatal phenotype is mainly caused by mutant htt toxicity (Zhang et al., 2003). Furthermore, overexpression of wild-type htt protected these mice against neurodegeneration. Removal of endogenous htt in a Drosophila melanogaster (D. melanogaster) HD model was found to exacerbate the neurodegenerative phenotype associated, suggesting that loss of normal htt function might also contribute to HD pathogenesis (Zhang et al., 2009). HTT RNA Although the main toxic component is the htt protein, recent evidence suggests that also HTT RNA could have toxic properties. There is also recent evidence for antisense transcription through the HTT locus. In this paragraph we will review the importance of these findings. Htt RNA gain of function in HD Trinucleotide expansion disorders occur either in untranslated genomic regions (UTRs) resulting in a toxic RNA gain of function or loss of gene function, or in coding regions resulting in a gain of toxic protein function (Orr & Zoghbi, 2007). Until recently, it was believed that HD is solely caused by a toxic gain of function of the polyQ protein and to a lesser extent, loss of wild-type function. However, recent evidence suggests that the mutant CAG repeats of the HTT RNA transcript could also have toxic properties (Fig. 1). This RNA toxicity is caused by the long hairpin structures of the expanded RNA that result in abnormal interactions with double stranded RNA-binding proteins. The CAG repeat hairpin in the HTT transcript was found to be stabilized by the flanking (CCG)n repeat (de Mezer et al., 2011). The resulting double stranded CAG RNA hairpin formed intranuclear foci that co-localized with the muscleblind-like 1 (MBNL1) splicing factor (Jiang et al., 2004). Altered MBNL1 function is implicated in RNA toxicity of CUG repeat expansion disorders such as myotonic dystrophy type 1 (DM1) (Kanadia et al., 2003). DM1 is caused by a CTG repeat expansion at the 3' UTR of the DMPK gene. The CTG repeats are known to form stable hairpin structures that are toxic by causing abnormal alternative splicing by MBNL1 binding and sequestering in nuclear foci (Fardaei et al., 2001;Kanadia et al., 2003). 
Similar to expanded CUG repeats in DM1, synthesized expanded CAG repeats also resulted in abnormal alternative splicing in both transiently transfected and patient-derived cells (Mykowska et al., 2011). The RNA toxicity modifier MBNL1 was also found to be involved in another polyQ disease, namely spinocerebellar ataxia 3 (SCA3). MBNL1 was found to be up-regulated in a D. melanogaster model of SCA3. The neurodegenerative disorder SCA3 is caused by a CAG repeat expansion in the ATXN3 gene, which results in the expression of a polyQ containing ataxin-3 protein. Upregulation of the D. melanogaster homolog of MBNL1 (mbl) was found to enhance pathogenic ataxin-3 protein induced toxicity, as well as pathogenic mutant htt protein induced toxicity HD and SCA3 transgenes with a CAG repeat interrupted by CAA codons (expressing an identical polyQ protein as compared to a pure CAG repeat) showed only a mild phenotype, indicating the importance of the expanded pure CAG repeat for the toxic phenotype. Interestingly, both full CAG repeats and CAA interrupted CAG repeats showed similar levels of protein inclusions, indicating that the phenotype severity does not correlate with the number of inclusions . Recently, transgenic mice expressing a GFP construct with 200 CAGs in the 3' UTR resulted in reduced GFP levels as compared to animals with 23 CAG repeats in their 3' UTR of the GFP construct (Hsu et al., 2011). Furthermore, these CAG 200 mice showed nuclear RNA foci and a reduced breeding efficiency, which supports the gain of RNA toxicity hypothesis. Transgenic Caenorhabditis elegans (C. elegans) expressing various CAG repeat lengths in the 3' UTR of a GFP gene showed a length-dependent toxicity. Worms with an 83 CAG repeat did not show any phenotype, whereas C. elegans expressing 200 CAGs died within a few days. Both 125 CUGs and 125 CAGs co-localized in nuclear foci with C. elegans MBNL1 homolog CeMBL and overexpression of CeMBL partly reversed the CAG 125 induced phenotype . In contrast to the above studies, there is also evidence that the CAG repeat RNA is not toxic. Expression of a cDNA construct with 79 CAG repeats in the 3' UTR did not induce cell death, whereas a construct expressing 79 CAGs in the coding region did induce cell death (Ikeda et al., 1996). This was also found in two other polyQ disorders, spinocerebellar ataxia 1 (SCA1) and spinobulbar muscular atrophy (SBMA). A SCA1 mouse model with impaired nuclear localization signal in ataxin-1 did not show nuclear inclusion bodies and did not display the disease phenotype (Klement et al., 1998). Furthermore, impairing nuclear localization of the androgen receptor (AR) in SBMA by castration showed marked improvements of disease pathology, also suggesting that the pathology is mainly caused by gain of toxic protein and not RNA (Katsuno et al., 2002). A D. melanogaster model of CAG toxicity expressing a repeat construct with a premature termination codon before a 93 CAG repeat, did not show any phenotype (McLeod et al., 2005). Based on these results, it was suggested that the toxicity in CAG triplet repeat disorders was exclusively the result of expanded polyQ protein gain of function. From the above we can conclude that not only gain of toxicity by expanded polyQ protein, but also RNA toxicity from the expanded CAG repeat could be involved in HD pathology. However, the size of the CAG repeat is critical for RNA pathogenicity. 
HTT antisense transcription A large proportion of the genome can produce transcripts from both strands (Katayama et al., 2005). It has become clear that antisense transcripts are involved in triplet repeat disorders and bidirectional transcription has thus far been identified in DM1, spinocerebellar ataxia 8 (SCA8), and HD like 2 (HDL2) (Moseley et al., 2006;Wilburn et al., 2011). In SCA8, which is caused by a CTG repeat expansion in a transcribed but not translated ATXN8OS gene, it was thought that the expanded CTG repeat caused RNA toxicity (Koob et al., 1999). Unexpectedly, bacterial artificial chromosome (BAC) transgenic SCA8 mice showed 1C2 positive inclusion bodies. The 1C2 antibody specifically recognizes expanded polyQ tracts, which are the hallmark of polyQ disorders. A novel transcript called ataxin-8, which encodes a polyQ protein, was expressed from the opposite strand, suggesting polyQ induced toxicity (Moseley et al., 2006). A BAC HDL2 mouse model with a pathogenic CTG repeat on the sense and expanded CAG repeat on the antisense strand at the Junctophilin-3 locus showed both RNA toxicity caused by its expanded CUG repeat as well as protein toxicity by its polyQ translated expanded CAG repeat (Wilburn et al., 2011). These findings suggest that triplet repeat disorders can involve toxic gain of function of both protein and RNA by bidirectional transcription. Recently, two natural HTT antisense (HTTAS) transcripts were identified at the HD locus (Chung et al., 2011). HTTAS was found to be 5' capped, poly A-tailed and contained 3 exons. There were two different isoforms identified of which one enclosed a functional promotor and the CTG repeat. The HTTAS containing the short CTG repeat was found to be widely expressed in multiple tissues. Remarkably, expanded CTG repeat containing HTTAS was strongly reduced in HD brains. The authors state that HTTAS acts as a negative regulator for HTT transcript expression as knock-down of HTTAS resulted in higher htt levels and overexpression of HTTAS resulted in lower HTT levels (Chung et al., 2011). This negative regulating property on HTT of HTTAS could potentially have a clinical implication by overexpressing HTTAS in HD patients, thereby alleviating pathogenicity by lowering htt levels. RNA modulating therapies in HD Although the HTT gene was identified in 1993, there are no treatments to cure or even slow down the progression of the disease. Most therapeutic strategies under investigation are targeting one of the many altered cellular processes caused by toxic mutant htt. Targeting a single cellular process might be inadequate to be clinically beneficial. A more effective approach would be to reduce the expression of the causative HTT gene and thereby inhibiting all downstream toxic effects. Recent advances to inhibit the formation of mutant polyQ proteins using RNA modulating therapies, such as RNA interference (RNAi) and antisense oligonucleotides (AONs) look promising for HD (Sah & Aronin, 2011). RNAi is an endogenous cellular process involved in transcriptional regulation and acts as cellular defense mechanism against exogenous viral components. RNAi by introducing small interfering RNA (siRNA), short hairpin RNA (shRNA), or artificial micro RNA (miRNA), is increasingly used as a potential therapeutic tool to reduce expression of target transcripts. Specific knock-down is also achieved by introducing modified single stranded AONs that can hybridize to the target RNA, which is subsequently degraded or its translation blocked. 
The most frequently used htt RNA modulating strategies for HD are: Knock-down of total htt RNA levels by targeting both wild-type and mutant htt and allele-specific reduction of mutant htt RNA only (Fig. 2). Gene therapy to lower both htt alleles in HD Since htt has many important wild-type functions, one of the key questions that needs to be answered for htt lowering strategies to become successful is how much htt is needed for normal function, or rather, how much can htt levels be reduced before adverse effects become apparent. Below we will first describe the studies describing lowering of both wild type and mutant htt, followed by the different approaches for allele specifically lowering mutant htt only. Various synthetic oligonucleotides with different modifications and backbones have been used in rodents to partially lower htt expression. A partial reduction of both normal and mutant htt by 25 to 35% using shRNAs was found to be well-tolerated in wild-type rats up to 9 months without signs of toxicity or striatal degeneration (Drouet et al., 2009). Total silencing using artificial miRNAs for both wild-type and mutant htt of 75% within the striatum of a transgenic HD mouse model showed reduced toxicity, extended survival, and improved motor performance, 3 months after treatment (Boudreau et al., 2009). Striatal injection of non allele-specific artificial miRNA in wild-type mice resulted in 70% reduction of htt levels. The high murine htt transcript reduction was sustained without adverse side effects up to the end of their study, which was set at 4 months (McBride et al., 2008). Since htt lowering strategies will be most beneficial for patients when administered over many years, the long-term safety needs to be assessed. Therefore, simultaneously lowering transcript levels from both alleles can only be applied once the role of wild-type htt in the human brain is elucidated in more detail. Moreover, to date it is not known if there is equal transcription from both the mutant and wild-type htt allele. Lowering total htt transcript levels by 70% does not necessarily mean an equal reduction of both alleles by 70%. Allele-specific reduction of mutant htt in HD As described in previous paragraphs, endogenous htt expression is important for normal cellular function and an ideal strategy for an autosomal dominant disorder as HD would be to specifically target the mutant allele and thereby maintaining as much wild-type htt protein as possible. Suppression of 50% to 80% using siRNA specific for human mutant htt in transgenic rodent models of HD for 4 months was found to improve motor and neuropathological abnormalities and prolonged longevity in HD mice (Harper et al., 2005;Wang et al., 2005). These studies showed that lowering mutant htt without reducing wildtype htt levels, resulted in an improved pathology. These results favored an allele-specific htt lowering approach without altering the expression of endogenous wild-type htt expression. Various studies have shown that a pronounced decrease of mutant htt levels, with only minor reductions in wild-type htt is feasible using allele-specific oligonucleotides. The different approaches, their advantages and disadvantages will be discussed in the following paragraph. Targeting associated SNPs in HD Single nucleotide polymorphisms (SNPs) are DNA sequence variations that occur when a single nucleotide is different between the two alleles of a gene. 
One way to distinguish between the wild-type and polyQ disease-causing allele is to target a SNP that is unique to the mutant transcript using siRNAs (Miller et al., 2003). siRNAs are known to discriminate between transcripts that differ at a single nucleotide, and various studies have shown specific reduction of mutant htt mRNA using siRNAs directed against different SNPs. The first evidence of allele-specific silencing in HD using SNP specific RNAi was obtained in human cells overexpressing htt transgenes (Schwarz et al., 2006). The first proof of principle of endogenous mutant htt silencing using SNPs in fibroblasts derived from HD patients was acquired in 2008 (van Bilsen et al., 2008). Extensive genotyping revealed a group of 22 SNPs highly associated with mutant htt alleles in a European HD cohort (Warby et al., 2009). Since then, various groups have shown that the vast majority of the HD patient population could be treated using 5 (75% of HD patients) or 7 (85% of HD patients) different siRNAs (Lombardi et al., 2009;Pfister et al., 2009). The most promising SNP was found to be located in exon 67 of the HTT gene. This SNP is strongly associated with the mutant allele and 48% of the total Western HD population was heterozygous at this site (Pfister et al., 2009). Most of the heterozygous SNPs linked to the expanded CAG repeat in exon 1 are found far downstream from the CAG repeat, in exons 25 up to 67 (Lombardi et al., 2009;Pfister et al., 2009). To determine whether HD patients are heterozygous and, if so, which SNP belongs to the expanded CAG repeat, a technique called SNP linkage by circularization (SLiC) was developed (Liu et al., 2008). By circularizing the DNA, the CAG repeat and SNP site were brought together, making it easy to link the SNP to the expanded CAG repeat using a single PCR. Although the selectivity obtained with the above-described SNP-targeting siRNAs is very promising, there are some limitations. The diversity of SNPs within patient populations would make it necessary to develop multiple siRNAs. Furthermore, for HD patients that do not exhibit any of the most frequent SNPs a different treatment needs to be developed. Targeting the expanded CAG repeat in mutant HTT Another approach to achieve allele-specific silencing is based on the common denominator of all HD patients: their expanded CAG repeat. The selective silencing is either based on the hypothesis that there are structural differences between wild-type and mutant htt mRNA, or based on the larger number of CAGs in the expanded repeat and the consequently larger number of binding sites. The first proof of allele discrimination by targeting the CAG repeat was achieved in HD human fibroblasts using a siRNA with 7 consecutive CUG nucleotides (Krol et al., 2007). Further studies with CAG repeat targeting siRNAs showed a low selectivity for the mutant allele, making siRNAs unsuitable for CAG repeat-directed allele-specific silencing (Hu et al., 2009). Other chemical modifications and oligomers show much higher specificity for expanded CAG repeat transcripts. Single stranded peptide nucleic acids (PNA), locked nucleic acids (LNA), and AONs with a 2'O methyl addition and phosphorothioate backbone targeting CAG repeats have been used to specifically reduce expanded HD transcripts in vitro in patient derived skin and blood cells (Hu et al., 2009;Evers et al., 2011). However, PNA selectivity was less pronounced at the CAG repeat lengths (40 to 45 CAGs) that occur most frequently in the HD patient population.
The allele-specific reduction after transfection of patient cells with LNAs and AONs with 7-mer CUG repeats was more pronounced at the average HD CAG repeat length. Furthermore, other endogenous CAG repeat containing transcripts with important cellular functions were unaffected by the tested CUG oligonucleotides (Hu et al., 2009;Evers et al., 2011). The main advantages of LNAs and AONs are that they are single stranded and do not show toxicity in vivo. Systemic delivery of modified AONs in Duchenne muscular dystrophy (DMD) boys carrying specific deletions in the DMD gene induced the synthesis of novel, internally deleted, but likely (semi-) functional, dystrophin proteins without clinically apparent adverse events (Goemans et al., 2011).

Fig. 2. RNA modulating therapeutic approaches for lowering htt. Two different HTT RNA modulating strategies used for HD are: A) Non allele-specific reduction of total HTT RNA levels by targeting a sequence that is identical in both the wild-type and mutant HTT transcript. B) Allele-specific reduction of mutant HTT RNA by targeting a unique heterozygous SNP only present in the mutant transcript, or C) Allele-specific reduction targeting the expanded CAG repeat on the mutant HTT transcripts.

Likewise, the use of only a single AON was suggested to be effective as a treatment for various polyQ diseases (Hu et al., 2009;Evers et al., 2011). One expanded CAG repeat targeting AON was found to specifically reduce mutant ataxin-1 and ataxin-3 mRNA levels in SCA1 and SCA3, respectively, and mutant atrophin-1 in dentatorubral-pallidoluysian atrophy (DRPLA) in patient derived cells (Evers et al., 2011). Although these results are promising, extensive research is needed to elucidate the mechanism used by those oligonucleotides to induce selective silencing and to assess specificity and safety. Likewise, the full potency of this allele-specific treatment will be revealed when the first in vivo results are obtained. RNA modulating therapies in other neurodegenerative diseases AONs have also been used for the treatment of neurodegenerative disorders and are found to be taken up by neurons when delivered into the cerebral lateral ventricles. Here are some examples showing therapeutic benefit in animal models and/or clinical trials. Prevention of mutant protein translation We will first focus on the neurodegenerative disorder amyotrophic lateral sclerosis (ALS), where RNA modulating therapeutics are used to reduce transcript levels of the disease-causing protein. The RNA modulating therapeutics to treat ALS are currently being tested in a phase I clinical trial. The progressive neurodegenerative muscle weakness disorder ALS is caused by loss of motor neurons in the brain and spinal cord (Al-Chalabi & Leigh, 2000). The first mutations linked to the familial form of ALS (fALS) were found in the superoxide dismutase 1 (SOD1) gene. Mutated SOD1 is known to be toxic and prone to aggregation. Only approximately 1% of ALS cases is the result of mutations in the SOD1 enzyme (Bossy-Wetzel et al., 2004). In a transgenic mouse model of ALS, 2'O methoxyethyl modified AONs were used to lower mutant SOD1 levels by binding and subsequent RNase H mediated breakdown of SOD1 transcripts. Continuous ventricular infusion of the SOD1 targeting AON significantly slowed disease progression (Smith et al., 2006). The first results of a phase I study testing the safety of this SOD1 targeting AON in patients with fALS caused by mutant SOD1 are expected at the end of 2011.
The outcomes of this phase I trial will be vital for future trials with RNA modulating therapies in HD. Modulating pre-mRNA splicing RNA modulating therapeutics are also used to modulate pre-mRNA splicing events in spinal muscular atrophy (SMA) using modified AONs in vivo. SMA is an autosomal recessive neuromuscular disorder caused by loss of function of the survival motor neuron 1 (SMN1) gene. This homozygous deletion of SMN1 results in degeneration of motor neurons in the anterior horn of the spinal cord and lower brain stem (Bowers et al., 2011). Depletion of SMN1 is not embryonic lethal because of the presence of the almost identical SMN2 gene. However, due to a point mutation in an intron, the SMN2 transcript is not correctly spliced. The majority of SMN2 transcripts therefore lack exon 7, which results in a truncated protein and lower expression of a functional SMN protein (Lorson et al., 2010). Current therapeutic strategies are aimed at modulating alternative splicing of SMN2. Transfection of fibroblasts with an AON blocking intronic splicing silencers in intron 7 of SMN2 was found to result in inclusion of SMN2 exon 7 (Singh et al., 2006). Injection of differently modified AONs into the brains of SMA mouse models resulted in increased exon 7 inclusion and subsequently elevated SMN protein levels. The AON treated SMA mice displayed increased muscle size and extended survival (Williams et al., 2009;Hua et al., 2010;Passini et al., 2011). Another strategy for modulating pre-mRNA splicing involves the addition of a functional moiety to the AON to replace the missing splicing enhancer protein, thereby enhancing the inclusion of exon 7 by the splicing machinery (Cartegni & Krainer, 2003;Skordis et al., 2003). Several in vivo studies have shown increased SMN2 protein levels after intraventricular injection of splicing factor-recruiting AONs (Dickson et al., 2008;Baughan et al., 2009). The AONs to treat SMA show promising results in vivo and progress in these therapeutics will be monitored closely. Results regarding delivery of the AON to the brain in humans and how well the AON is tolerated will be very useful for the development of RNA modulating therapeutics for HD. Drug delivery to the brain: how to cross the blood brain barrier? One major challenge of AON therapies for neurodegenerative disorders is delivery of the AON to the target organ. In the following paragraph we will briefly describe the blood brain barrier function and how this impairs the uptake of peripherally administered drugs. We will focus in particular on the limitations and possibilities of AON delivery to the brain and will speculate on future clinical applications. Blood brain barrier A unique feature of the brain is that it is separated from the blood by the blood brain barrier (BBB). This is a monolayer of endothelial cells forming tight junctions through the interaction of cell adhesion molecules (Palmer, 2010). Astrocytes with their processes surrounding the endothelial cells, pericytes located between the endothelial cells and astrocytes, macrophages, and the basement membrane form the other structural components of the BBB. Endothelial cells of the BBB are characterized by only a few fenestrae and pinocytic vesicles, limiting transport to and from the brain. In this respect, it should be noted that the BBB also largely separates the immune system from the brain. Despite this gate-controlling system, essential nutrients, such as glucose, are permitted to pass (Bernacki et al., 2008).
In neurodegenerative diseases, including HD, disruption of the BBB is common (Tomkins et al., 2007;Palmer, 2010). Interestingly, in animal models, this disruption can itself even lead to neurodegenerative changes (Tomkins et al., 2007). The BBB was already noticed in the work of Paul Ehrlich, the Nobel Prize-winning bacteriologist, in the late 19th century. Injected dyes stained all organs except the brain and spinal cord. However, he did not attribute this phenomenon to the presence of a barrier but to dye characteristics. His student later showed that staining of the brain was possible when the dye was injected directly into the brain (Palmer, 2010). Subsequent studies using electron microscopy were able to directly visualize the BBB. The BBB is a major challenge in central nervous system (CNS) drug development. When a drug is administered to the body, a fraction will be bound to proteins (e.g. serum albumin, lipoprotein etc.) and a fraction will be free. The free fraction is the pharmacologically relevant fraction, since it is available to cross the BBB (Palmer, 2010), depending on its physicochemical properties. After crossing the BBB, the drug will enter the interstitial fluid and go to the target (proteins, receptors, transporters etc.). Subsequently, the interstitial fluid drains to the cerebrospinal fluid (CSF), which is produced at a rate of 500 ml/day in humans, while the ventricle system can house only 100-150 ml. This means that the CSF volume is turned over at least three times per day, allowing continuous drainage of the brain's interstitial fluid. Crossing the blood brain barrier In the process of drug discovery, the aim is to find a substance which is potent, selective and preferably bioavailable. In addition, it needs to be able to cross the BBB and reach the target at a sufficient concentration (Alavijeh et al., 2005). The following mechanisms are available to cross the BBB. The first one is simple diffusion. Small lipophilic substances which have a hydrogen bond are more likely to pass the BBB (Gerebtzoff & Seelig, 2006). The second mechanism is via active transport mediated by transporter molecules. The most well-known example is glucose with its glucose transporter 1 (GLUT1), which is the most widely expressed among the GLUT family (13 isoforms) (Guo et al., 2005;Palmer, 2010). Other carriers exist for instance for lactate and amino acids. A well-known drug transported in this way is levodopa (Cotzias et al., 1967). The third mechanism to cross the BBB is via receptor mediation. Receptor-mediated endocytosis allows macromolecules to enter the brain, such as transferrin, insulin, leptin, and insulin-like growth factor 1 (Pardridge, 2007). Besides systemic mechanisms to cross the BBB, there are also techniques to bypass the BBB by direct infusions into the subdural space, the brain's ventricle system, or the brain parenchyma. These infusions can be single, repeated, or continuous depending on the methodology, using either simple or sophisticated pump systems. It is possible to use one probe or more probes for infusion. Using the subdural and ventricle compartments, diffuse delivery of the drug into the brain can be achieved, while using intraparenchymal delivery, a local but well-targeted delivery can be realized. When a substance has successfully entered the brain, there are mechanisms preventing adequate functioning. One mechanism is active transport to remove the substance, also known as resistance.
A superfamily of multidrug resistance proteins, belonging to the ATP-binding cassette transporters, drives substances away by an ATP-dependent process (Palmer, 2010). One of the most abundant proteins is the P-glycoprotein. This mechanism is responsible for the failure of some anticancer drugs. Another family of egress transporters is the organic anion transporting proteins. In the field of HD, efforts are ongoing to deliver innovative drugs to the brain via the systemic route, and drugs are designed to use one of the three mechanisms to cross the BBB, as explained earlier. For instance, Lee and associates described the use of a peptide nucleic acid as an antisense agent which was able to access endogenous transferrin transport pathways (receptor mediated endocytosis) and reach the brain in a transgenic mouse model (Lee et al., 2002). However, there are also efforts to bypass the BBB and to deliver the drug either via the ventricle system or intraparenchymally. Conclusion To date there is no treatment to prevent or even slow down the progression of HD. Considerable research has been performed to gain more insight into HD pathology. Next to the well-known toxic gain of polyQ protein function, loss of wild-type function and a toxic gain of expanded CAG repeat RNA were also suggested recently, and need to be examined in more detail. Recent results using SNP specific siRNAs and CAG targeting AONs look promising both in vitro and in vivo. To develop an effective HD therapy, it is likely that a combination of different RNA modifying approaches will be optimal to lower mutant htt levels. Extensive research is required to rule out toxic off-target effects and elucidate the exact mode of action of these RNA modulating therapeutics. Ongoing clinical trials for other neurodegenerative disorders, such as ALS, will give us more insight into the potential of RNA modulating therapeutics.
2019-02-27T10:53:20.527Z
2012-02-15T00:00:00.000
{ "year": 2012, "sha1": "e6de54b342e907b4bed2615e69023e12479717ad", "oa_license": "CCBY", "oa_url": "https://doi.org/10.5772/30283", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "fe038b2f3ea1ab0135f3ac63075d07c4d31854df", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology" ] }
247999786
pes2o/s2orc
v3-fos-license
Diagnostic validity and clinical utility of genetic testing for hypertrophic cardiomyopathy: a systematic review and meta-analysis Objective This study summarises the diagnostic validity and clinical utility of genetic testing for patients with hypertrophic cardiomyopathy (HCM) and their at-risk relatives. Methods A systematic search was performed in PubMed (MEDLINE), Embase, CINAHL and Cochrane Central Library databases from inception through 2 March 2020. Subgroup and sensitivity analyses were prespecified for individual sarcomere genes, presence/absence of pathogenic variants, paediatric and adult cohorts, family history, inclusion of probands, and variant classification method. Study quality was assessed using the Newcastle-Ottawa tool. Results A total of 132 articles met inclusion criteria. The detection rate based on pathogenic and likely pathogenic variants was significantly higher in paediatric cohorts compared with adults (56% vs 42%; p=0.01) and in adults with a family history compared with sporadic cases (59% vs 33%; p=0.005). When studies applied current, improved, variant interpretation standards, the adult detection rate significantly decreased from 42% to 33% (p=0.0001) because fewer variants met criteria to be considered pathogenic. The mean age-of-onset in adults was significantly earlier for genotype-positive versus genotype-negative cohorts (by 8.3 years; p<0.0001), for MYH7 versus MYBPC3 cohorts (8.2 years; p<0.0001) and for individuals with multiple versus single variants (7.0 years; p<0.0002). Overall, disease penetrance in adult cohorts was 62%, but differed significantly depending on whether probands were included or excluded (73% vs 55%; p=0.003). Conclusions This systematic review and meta-analysis is the first, to our knowledge, to collectively quantify historical understandings of detection rate, genotype-phenotype associations and disease penetrance for HCM, while providing the answers to important routine clinical questions and highlighting key areas for future study.

Key questions What is already known about this subject? ► As one of the most common inherited conditions, hypertrophic cardiomyopathy (HCM) is a routine indication for genetic testing. However, our understanding of the impact of genetic testing on clinical outcomes has been limited to individual studies or small analyses until now. What does this study add? ► In this systematic review and meta-analysis, historical understandings of HCM from across 25 years are collectively quantified. Detection rate based on pathogenic and likely pathogenic variants was highest in paediatric cohorts and adults with a positive family history. Application of current, improved, variant interpretation standards significantly impacted the adult detection rate of gene panel testing. Age-of-onset in adults was significantly earlier for genotype-positive cohorts and those with MYH7 or multiple variants. Overall, disease penetrance was 62%, but differed significantly depending on whether probands were included or excluded. How might this impact on clinical practice? ► A refined understanding of genetic testing validity and clinical utility for HCM provides critical clinical information to guide and optimise management for patients and at-risk relatives.

INTRODUCTION Hypertrophic cardiomyopathy (HCM) is characterised by left ventricular hypertrophy in the absence of predisposing cardiac conditions, most commonly inherited as autosomal dominant, and has a prevalence of 1/500. 1 Since the first pathogenic variant for HCM was discovered in 1990, 2 numerous studies have individually addressed genetic testing for HCM and current professional guidelines recommend genetic testing for affected individuals and their at-risk relatives. 3 4 While these recommendations primarily focus on the benefits of cascade genetic testing for at-risk relatives, permitting early diagnosis and risk stratification for sudden cardiac death (SCD), the direct benefits for patients with HCM are less clear. The objective of this systematic review was to assess the diagnostic validity and clinical utility of genetic testing for patients with HCM and at-risk relatives. METHODS A systematic review was performed to align with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 5 reporting checklist to address the overarching research question, 'Does genetic testing lead to improved outcomes for individuals diagnosed with HCM and their at-risk relatives?' This question has several components, including the detection rate for gene panel testing, genotype-phenotype correlations, penetrance and management implications, which are reported in this manuscript. Additional questions relating to uptake, utility and patient-reported outcomes for genetic testing and genetic counselling are detailed in a second manuscript that has been submitted for publication. The research team, consisting of medical librarians, a methodologist and genetic counsellors, defined the PICOTS (population, interventions, comparators, outcomes, timing and setting), which are presented in online supplemental methods table 1. A search strategy was developed using keywords pertaining to HCM, genetic counselling and genetic testing. We queried the PubMed (MEDLINE), Embase, CINAHL and Cochrane Central Library databases with minor modifications to accommodate the search input parameters for each database. The initial search was conducted on 7 July 2017 and updated on 2 March 2020. The PubMed (MEDLINE) search strategy is presented in online supplemental methods table 2. Articles were limited to English-language publications. All phases of the review and extraction process were performed in duplicate by blinded reviewers, and disagreements were adjudicated through discussion, or with the aid of a third reviewer. Deduplicated citations were uploaded to Rayyan 6 for abstract and full-text review according to prespecified inclusion and exclusion criteria based on the PICOTS (online supplemental method table 3). Outcome-specific exclusion criteria are reported in online supplemental method table 4. Studies identified in the updated literature search were screened and reviewed in their entirety in Covidence. Relevant data were extracted into an Excel spreadsheet by reviewers. Study quality was assessed using the Newcastle-Ottawa tool. 7 Data analysis We prespecified the analysis plan and data were grouped into three main categories: detection rate, genotype-phenotype correlations and penetrance. Data analysis, including generation of forest plots, was performed using R V.4.0.2 with 'meta', 'metafor' and 'stats' packages. Meta-analysis of single proportions was calculated with generalised linear mixed model, random-effects settings. 8 Continuous variables and multiple proportions were assessed using inverse variance, random-effects meta-analyses.
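To make the pooling approach concrete, the sketch below shows a simplified random-effects meta-analysis of study-level detection rates. It is illustrative only: the published analysis used R V.4.0.2 with the 'meta' and 'metafor' packages and a generalised linear mixed model, whereas this Python version pools logit-transformed proportions with an inverse-variance DerSimonian-Laird estimator, and the study counts are hypothetical.

```python
# Illustrative sketch (not the authors' R code): inverse-variance random-effects
# pooling of logit-transformed proportions with a DerSimonian-Laird tau^2.
import numpy as np

def pool_proportions(events, totals):
    """Return the pooled proportion, its 95% CI and tau^2 (DerSimonian-Laird)."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    y = np.log(p / (1.0 - p))                 # logit of each study proportion
    v = 1.0 / events + 1.0 / (totals - events)  # approximate logit variance
    w = 1.0 / v                                # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se)), tau2

# hypothetical detection-rate data: variant-positive counts and cohort sizes
pooled, ci95, tau2 = pool_proportions([40, 55, 21, 90, 33], [100, 120, 60, 210, 80])
print(f"pooled rate = {pooled:.2f}, 95% CI = ({ci95[0]:.2f}, {ci95[1]:.2f}), tau^2 = {tau2:.3f}")
```

The logit transform keeps the pooled estimate and its confidence limits inside the 0-1 range before back-transformation; studies with zero or all events would need a continuity correction, which this sketch omits.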
Because genes tested included those with definitive, strong, moderate and weak associations to HCM, further subgroup analysis was limited to the eight sarcomeric genes with definitive association to disease (ACTC1, MYBPC3, MYH7, MYL2, MYL3, TNNT2, TNNI3 and TPM1). 9 In addition, subgroup and sensitivity analyses were performed for genotype-positive (G+) versus genotype-negative (G−) patients, inclusion of probands in the study population, paediatric and adult cohorts, family history and variant classification standard used. In studies not reporting the unique number of patients with a family history of either SCD or cardiomyopathy (CM), we included the largest reported group (either SCD or CM history) in our meta-analysis of detection rate, to avoid double-counting patients and inflating the pooled estimate. Studies not included in the meta-analyses were narratively synthesised and their results were compared with the meta-analysis results. Between-group comparisons were calculated with the appropriate statistic (eg, χ²) for articles that presented their data alternatively, where possible. [10][11][12][13] Meta-analyses are reported as the pooled estimate with accompanying CIs and p values for between-group comparisons. Heterogeneity was calculated as I² and τ² and is reported on the accompanying forest plots. Significance was set at p<0.05; no adjustment was made for multiple comparisons. RESULTS A total of 3196 non-duplicated articles were screened and 596 were reviewed in their entirety for inclusion. Data extraction and quality assessments were performed on 132 articles meeting inclusion criteria (online supplemental figure 1). In total, 80 studies reported on detection rate, 44 described genotype-phenotype associations and 51 provided penetrance estimates (categories not mutually exclusive). No studies reporting on management implications were identified. Online supplemental table 1 provides a summary of all studies and more comprehensive data are provided in online supplemental tables 2-12. Detection rate Detection rate (table 1) was evaluated in predominantly adult and paediatric cohorts (online supplemental tables 2-5). The detection rate was based on both pathogenic and likely pathogenic variants, as defined per publication. Subgroup data analyses were based on the application of American College of Medical Genetics and Genomics (ACMG) and Association for Molecular Pathology (AMP) variant classification standards, relevant family history, presence of multiple variants and gene prevalence. 9 14 In addition, utilisation of exome and genome sequencing in HCM cohorts is described. Adults The pooled detection rate in predominantly adult HCM cohorts was 42% (figure 1) with an inconclusive rate (ie, rate of results with ≥1 variant of uncertain significance) of 12% (online supplemental figure 2). Studies that applied current ACMG/AMP standards had a lower detection rate than those that did not (33% vs 43%; p=0.0001), and a higher inconclusive rate (24% vs 10%; p<0.0001). Identification of two or more disease-causing variants was reported in 2% of cases (online supplemental figure 3). The majority of individuals with a positive result (96%) had at least one disease-causing variant identified in one of the eight sarcomeric HCM genes, and MYBPC3 and MYH7 were collectively the most commonly observed among positive results (81%; online supplemental figure 5).
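As a small illustration of the between-group χ² comparisons referred to in the Methods (for example, paediatric versus adult detection rates), the following sketch applies a standard chi-squared test to a 2×2 table; the counts are entirely hypothetical and are not taken from any of the included studies.

```python
# Illustrative sketch with made-up counts: chi-squared comparison of
# detection rates between two cohorts (variant-positive vs variant-negative).
from scipy.stats import chi2_contingency

table = [
    [56, 44],  # hypothetical paediatric cohort: positive, negative
    [42, 58],  # hypothetical adult cohort: positive, negative
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```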
Paediatrics The pooled detection rate in paediatric HCM cohorts (≤21 years old) was 56% (online supplemental figure 6) with an inconclusive rate of 19%-31%. The detection rate for paediatric cohorts was significantly higher compared with the predominantly adult cohorts (56% vs 42%; p=0.01; online supplemental figure 6). Studies that applied current ACMG/AMP standards had a higher detection rate than those that did not (78% vs 52%; p<0.0001). Identification of two or more disease-causing variants was reported by two studies as 5% and 6% of cases, which was not significantly higher compared with adults (5% vs 2%; p=0.06; online supplemental figure 3). The detection rate for paediatric HCM cohorts with a positive family history (HCM: 58%; SCD: 48%; or CM±SCD: 57%) did not differ significantly from either sporadic cases (49%) or the overall detection rate unselected for family history (56%; between-group comparison p=0.49; online supplemental figure 7). Exome/genome sequencing Few studies that met our inclusion criteria reported on the detection rate of exome (n=3) and genome (n=2) sequencing. Results are summarised in the online supplemental material. One study 17 performed exome sequencing on 200 individuals with HCM and found variants in 88%, though the majority were in genes other than the eight sarcomeric HCM genes and limited information was provided on the variant classification approach. Two studies of genome sequencing directly compared findings against other testing methods. 18 19 Cirino et al 18 identified 19 of 20 variants previously found by panel testing and 1 pathogenic variant in a previously negative case. Bagnall et al 19 identified disease-causing variants in 9 of 46 cases (20%) with previously negative genetic testing, including four in genes not previously tested and four deep intronic splice variants in MYBPC3. Genotype-phenotype implications for prognosis Analyses focused on genotype-phenotype associations for age-of-onset, sudden cardiac arrest (SCA), presence of an implantable cardioverter-defibrillator (ICD), heart failure (HF), septal reduction therapy and mortality. Genotype comparisons included genotype-positive (G+) versus genotype-negative (G−), MYBPC3 versus MYH7, and multiple versus single variants.

Figure 1 Forest plot of detection rate in predominantly adult HCM cohorts by usage of the ACMG/AMP standards. The pooled detection rate was 42%. Studies that applied ACMG/AMP standards had a lower detection rate than those that did not use ACMG/AMP standards (33% vs 43%; p=0.0001). ACMG/AMP, American College of Medical Genetics and Genomics/Association for Molecular Pathology; HCM, hypertrophic cardiomyopathy.

Age-of-onset The pooled mean age-of-onset in predominantly adult cohorts was 8.3 years earlier for G+ versus G− cohorts (p<0.0001; figure 2A). One additional study reported median age-of-onset and similarly found that G+ individuals were younger at disease onset (50 years vs 59 years). 20 Comparatively, three paediatric studies did not observe differences. 12 21 22 The pooled mean age-of-onset in adult cohorts was 8.2 years later for variants in MYBPC3 versus MYH7 (p<0.0001; figure 2B). Two additional studies reported median age-of-onset and findings were consistent with the meta-analysis. 23 24 Two paediatric studies found no significant difference in age-of-onset for MYBPC3 cohorts compared with MYH7 cohorts. 21 25 The pooled mean age-of-onset in adults with multiple variants was 7.0 years earlier than in those with a single variant (p<0.0002; figure 2C).
One study reporting median ages-of-onset also found that multiple variants were significantly associated with an earlier age-of-onset. 23 Findings from two paediatric studies were discordant. 21 25 Sudden cardiac arrest SCA was defined as resuscitated cardiac arrest, SCD, appropriate ICD therapy or a combination of these events. Kaplan-Meier analysis in a British study (n=874) and a Portuguese registry (n=422) found that G+ individuals were significantly more likely to experience SCD compared with G− individuals (p=0.03 and p=0.02, respectively). 26 27 Although a similar trend was seen in our meta-analysis of five studies comparing G+ versus G− cohorts, the OR of SCA (OR 1.4; online supplemental figure 8) did not reach statistical significance. Finally, the hazard ratio (HR) determined by van Velzen et al 28 (HR 1.0; 95% CI 0.6 to 1.9) did not suggest a difference in SCA between groups. Meta-analysis of six studies that compared SCA in MYBPC3 versus MYH7 cohorts (online supplemental figure 8) found no significant difference between groups (OR 0.9), consistent with HRs from a large registry study. 29 Findings from four adult studies comparing SCA in cohorts with multiple versus single variants were mixed; two studies found no significant difference, whereas two studies reported a higher incidence of SCD in individuals with multiple variants. 11 29-31 The one paediatric study did not report a significant difference. 32 ICD implantation ICD implantation was more common in G+ cohorts than G− cohorts in an analysis of 10 studies (OR 1.9; p<0.0001; online supplemental figure 9). However, the same comparison in two paediatric studies did not find a significant difference between groups. 12 21 No significant difference was found across six studies of adults comparing MYBPC3 versus MYH7 cohorts (OR 1.2; online supplemental figure 9) nor in two studies of adult cohorts with multiple versus single variants. 13 30 However, one paediatric study reported a significantly higher hazard of ICD implantation in individuals with multiple variants (HR 4.4; 95% CI 1.8 to 11.0; p<0.001). 32 Heart failure HF outcomes included New York Heart Association (NYHA) class III/IV, cardiac transplantation, left ventricular ejection fraction, HF admissions and HF symptoms. No significant differences were observed in NYHA class outcomes when comparing G+ versus G−, MYBPC3 versus MYH7 or multiple versus single variants (online supplemental figure 10). One large registry-based study found that individuals with MYH7 variants were more likely to require cardiac transplant or a ventricular assist device (VAD) than individuals with MYBPC3 variants (HR 2.8; 95% CI 1.3 to 5.8) and that individuals with multiple variants were more likely to require cardiac transplant (HR 7.5; 95% CI 2.7 to 20.5). 29 32 Septal reduction therapy Septal reduction therapy included myectomy, ablation or a combined outcome of myectomy and/or ablation. No significant differences were observed in these outcomes when comparing G+ versus G−, MYBPC3 versus MYH7 or multiple versus single variants (online supplemental figure 11). Mortality Mortality was reported as death, all-cause mortality, cardiac mortality, HCM-related death and survival. While there were discordant findings, the majority of studies found no significant difference in mortality across seven studies comparing G+ versus G− cohorts 23 26 28 33-36 and seven studies comparing MYBPC3 versus MYH7 cohorts.
10 24 29 36-39 Findings were split when comparing individuals with multiple versus single variants with two adult studies showing a significant difference, while one adult and one paediatric study did not. 23 29 30 32 Disease penetrance The penetrance of HCM disease-causing variants was evaluated in 51 predominantly adult and paediatric cohorts that both included and excluded probands, and at a genespecific level (table 3; online supplemental table 12). The pooled penetrance of HCM across adult cohorts was 62%. Overall penetrance differed significantly depending on if probands were included or excluded (73% vs 55%; p=0.003; online supplemental figure 12). Three studies reported a higher disease penetrance in men. [44][45][46] Two of which presented age-based penetrance for each sex: one included probands and penetrance by age 40 years was 92% for men and 67% for women 45 ; the other excluded probands and penetrance by age 40 years was 77% for men and 35% for women. 46 The mean age at study enrolment and of disease onset was only reported in 39% and 22% of cohorts, respectively, limiting the opportunity to further evaluate age-based penetrance. The oldest mean cohort age was 57 years with the majority of cohorts having a mean age in the 40s. DISCUSSION This systematic review and meta-analysis summarises and quantifies data on HCM detection rate, genotype-phenotype associations and disease penetrance from 132 publications across 25 years, confirming several well-reported trends and previously established associations. Detection rate Numerous studies have published on the detection rate of HCM genetic testing and meta-analysis of these data demonstrate that the yield of pathogenic and likely pathogenic variants is influenced by multiple factors. Consistent with traditional convention and prior systematic reviews and meta-analyses of adult cases with HCM, a relevant family history significantly increases detection rate. 47 48 Alternatively, while paediatric cases have a significantly higher detection rate compared with adults, family history does not significantly alter the detection rate. Approaches to variant classification are an established variable to these analyses, but improvements in variant classification have also impacted detection rate with the 2015 ACMG/AMP standards being considered the most accurate guidelines in North America. 14 Not surprisingly, adult studies that applied the ACMG/AMP standards had a lower overall detection rate based on pathogenic and likely pathogenic variants and higher inconclusive rate (variants of uncertain significance), likely representing a more accurate estimate of current detection rates. Interestingly, when looking at the same comparison for paediatric cases, the two studies that applied ACMG/ AMP standards had a significantly higher detection rate, despite not finding any notable differences in these cohorts. Given that these findings are counter-intuitive and the number of paediatric studies that applied ACMG/AMP standards was limited, additional research is needed. Expanding to exome/genome sequencing With exome and genome sequencing for HCM increasingly available, studies have found that most diseasecausing variants remain identifiable by large gene panels. [15][16][17] Technical differences between exome/ genome sequencing and traditional gene panels remain an important consideration, potentially impacting the sensitivity for some genes. 
16 A recent example is deep intronic and other non-coding variants identifiable by genome sequencing, but missed by traditional panels and exome sequencing, though additional evidence supporting pathogenicity is needed (eg, segregation analysis, functional studies, additional case data). 19 49 Genotype-phenotype implications on outcomes Analyses of genotype-phenotype associations focused on three comparisons: genotype-positive (G+) versus genotype-negative (G−), MYBPC3 versus MYH7 and multiple versus single variants. A significant difference in age-of-onset was observed in all three comparison groups, as has been reported previously, 47 supporting that genotype influences age-of-onset in HCM. However, while Lopes et al did not observe a significant difference between MYBPC3 and MYH7, our analysis included six additional studies. In a more recent review, Sedaghat-Hamedani et al approached their meta-analysis differently by looking at individuals across studies who were G−, MYBPC3 + and MYH7 + and concluded that age of onset was earliest for MYH7 + individuals. 50 Although a significantly higher rate of SCA was reported for G+ individuals by two large studies, the association did not reach significance in our meta-analysis. The large multisite Sarcomeric Human cArdiomyopathy REgistry (SHaRe) study (n=2763) reported a similar association but was excluded from our analysis of G+ versus G− individuals as not all genotype-negative individuals had genetic testing for all eight sarcomere genes. 29 Furthermore, in the meta-analysis by Sedaghat-Hamedani et al, they conclude that G+ individuals have a higher rate of SCA compared with G− individuals. 50 We found that G+ individuals were more likely to have an ICD. While this raises the possibility that genotype status influences ICD utilisation, family history may contribute since individuals with a family history of SCA (a factor considered in SCA risk stratification) are also more likely to carry a disease-causing variant. Similarly, Ingles et al found that individuals with a negative family history are less likely to have an ICD. 51 We found no significant differences between genotype across other outcomes, which contrasts prior findings supporting that genotype status and the gene involved are predictive of worse outcomes. 29 50 However, this is likely in part due to limitations of how the existing data were categorised and the ability to directly compare across studies. Alternatively, these findings may suggest that genotype is only one of several predictors influencing phenotypic outcomes. More standardised research comparing outcomes across multiple potential predictors is required. Disease penetrance Determination of disease penetrance is challenging due to the possibility of selection bias in the included studies. While use of unselected populations would be the most informative, current studies are limited to those that included probands (73%; likely provide an overestimate) versus those that exclude probands (55%; likely provide an underestimate). Because of this, both approaches were considered and the overall disease penetrance across studies was 62%. It should be noted, however, that the average age of most cohorts was in the 40s, limiting the follow-up time and as such the conclusions that can be drawn. Our findings support that MYBPC3 and TNNI3 variants have lower penetrance compared with MYH7 and TNNT2, and penetrance is higher in men, irrespective of the gene, as has been previously reported. 
47 50 Future research is needed to assess the impact of additional environmental and genetic modifiers on disease penetrance. Although there is evidence to suggest that male sex, obesity, hypertension and exercise could increase penetrance, additional studies are needed. 52 Furthermore, polygenic risk scores and a greater understanding of epigenetic factors may further elucidate the risk of disease within and across families. While beyond the scope of this review, data on penetrance in unselected populations will accumulate over time as the HCM genes are recommended to be reported as medically actionable secondary findings from diagnostic exome and genome sequencing by the ACMG. 53 As one example, van Rooij et al found that only 22% of unselected individuals with HCM disease-causing variants showed convincing evidence of disease over 25 years of follow-up. 54 Future assessments of HCM disease penetrance should consider presentation of disease in probands, at-risk relatives and unselected patients. Meta-analysis Finally, penetrance during childhood ranged from 7% to 61% with many cases presenting prior to 12 years old. Outcomes from these cohorts may be skewed as a result of the most severe paediatrics cases coming to medical attention. While older guidelines recommend that cardiac screening begin around 10-12 years of age for children with a first-degree relative with HCM, newer guidelines recommend cardiac screening for children under 5 years. 55 56 Our findings are consistent with newer recommendations aiming to identify all individuals with disease onset in childhood. Points of consideration and future research This study identified multiple limitations that impacted the ability to analyse data across studies, including variability in study design and reporting of outcomes. The predominant study design was observational case series and some outcomes of interest were not the primary focus. With regards to detection rate, variability included the genes tested, methodology used and the variant classification standards applied. The impact of the ACMG/AMP standards was assessed for detection rate; however, it was not evaluated for the other outcomes assessed and, therefore, the impact on these areas remains unclear. For the analysis of genotype-phenotype associations, there was variability in study design, and in how outcomes were defined and reported. Outcomes related to cardiac events such as SCA were limited to meta-analysis of ORs rather than rates due to how the data were reported by the majority of studies (eg, the studies did not report on timeto-event risks/rates). Our meta-analysis also focused on studies that performed head-to-head comparisons between genotypes, which differs from the meta-analysis performed by Sedaghat-Hamedani et al that evaluated genotypes across studies. 50 When considering penetrance, limitations included unreported or younger mean age of the cohorts, limited follow-up time and variability in the proportion of at-risk relatives included in analysis. Often the evaluation of relatives was not the primary focus of the study and, therefore, very limited demographic data were provided. Finally, there is a specific need for collaborative efforts to standardise approaches. In particular, the areas identified by this systematic review include consistent definitions and reporting on cohorts (both probands and at-risk relatives) and cardiac outcomes, reporting the minimum genetic analyses completed and describing the classification of diseasecausing variants. 
Increased consistency in how results are reported and interpreted would improve transparency and allow for more direct comparisons across studies. CONCLUSIONS As one of the most common inherited conditions, HCM has long been the focus of research and clinical interest. This systematic review and meta-analysis is the largest for any particular outcome and the first, to our knowledge, to collectively refine and quantify historical understandings of detection rate, genotype-phenotype associations and disease penetrance for HCM. Although the variabilities in study design and reporting of outcomes limited the analyses that could be performed, the large amount of data evaluated provide answers to important routine clinical questions, particularly those related to detection rate and genotype/phenotype correlations. Key areas for future research include expanding genotype-phenotype associations and disease penetrance estimates across various populations. While additional studies are needed, our current analyses serve as an important stepping stone to understanding the clinical utility of genetic testing for a condition that impacts so many families. Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors. Competing interests JM is a contract methodologist for the NSGC. AM is a paid consultant for Concert Genetics. Patient consent for publication Not applicable. Provenance and peer review Not commissioned; externally peer reviewed. Data availability statement Data are available upon reasonable request. Data are available from Susan Christian at smc12@ ualberta. ca. Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise. Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
2022-04-08T06:22:44.596Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "417356fb8f2381263a42ede196ec2c068ec6bfbf", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "cab9ed22e41a4764d1493fe741e7f4a8ae6788f7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
117863290
pes2o/s2orc
v3-fos-license
Search for $^{21}$C and constraints on $^{22}$C A search for the neutron-unbound nucleus $^{21}$C was performed via the single proton removal reaction from a beam of 22 N at 68 MeV/u. Neutrons were detected with the Modular Neutron Array (MoNA) in coincidence with $^{20}$C fragments. No evidence for a low-lying state was found, and the reconstructed $^{20}$C+n decay energy spectrum could be described with an s-wave line shape with a scattering length limit of |as|<2.8 fm, consistent with shell model predictions. A comparison with a renormalized zero-range three-body model suggests that $^{22}$C is bound by less than 70 keV. Introduction Since the discovery of the large matter radius of 11 Li [1], neutron halos have been a topic of intense study near the neutron drip line. The halo structure results from one or more valence nucleons being loosely bound which, combined with the short range of the nuclear force, allows them to have a large probability of being found at distances much greater than the normal nuclear radius [2]. For two-neutron halos the two-body subsystems are typically unbound [3] and knowledge of the basic properties of these subsystems is critical for the understanding of the halo nuclei [3,4]. One two-neutron halo candidate is 22 C, consisting of twice the number of protons and neutrons of 11 Li. It has attracted significant experimental [5,6] and theoretical [7,8,9,10,11] attention recently. 22 C was first observed to be bound in 1986 [12]. It is a Borromean nucleus because the two-body sub-system 21 C had been shown to be unbound [13], contradicting an earlier measurement which had claimed that 21 C was bound [14]. It took nearly 25 years before first properties of 22 C were measured. A large matter radius was extracted from the measured reaction cross section, suggesting that 22 C exhibits a two-neutron halo [5]. Using a simple model relating the measured radius and the two-neutron separation energy S 2n , the authors deduced a strong s-wave configuration in 22 C. Subsequent momentum distribution measurements following neutron removal reactions supported this suggestion of a significant νs 2 1/2 valence neutron configuration in 22 C [6]. Very recently the mass excess of 22 C was measured to be 53.64(38) MeV corresponding to S 2n = −140(460) keV [15]. Given the fact that 22 C is actually bound this value probably should be better quoted as S 2n = 0 +320 −0 keV. Theoretically, an upper limit of 220 keV for S 2n was calculated assuming a dominating s-wave contribution [9]. These observations strongly support the presence of a two-neutron halo in 22 C. This typically implies that the "two-body subsystems must be either very weakly bound or low-lying resonances, or virtual states must be present, in order to support a halo state" [3]. As mentioned earlier, 21 C is unbound but no spectroscopic information is presently known. The ground state of 21 C is expected to be a 1/2 + state with a large νs 1 1/2 single particle configuration although the possibility of a degeneracy or even level inversion with the νd 1 5/2 has been suggested [16]. In the present paper we present a search for a low lying resonance/virtual state in 21 C. We attempted to populate 21 C with one-proton removal reactions from a secondary beam of 22 N and extracted the decay-energy spectrum via invariant mass spectroscopy by measuring neutrons in coincidence with 20 C. Experimental Setup The experiment was performed at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University. 
A 90 pnA 48 Ca primary beam at 140 MeV/u impinged on a 2068 mg/cm 2 9 Be production target, and isotopic separation of a 68 MeV/u 22 N secondary beam was achieved using the A1900 fragment separator [17] with a 1057 mg/cm 2 Al achromatic wedge degrader placed at the dispersive image. All data were taken with A1900 momentum slits set at 2.5% acceptance, which resulted in a 22 N particle rate of 37/s with a purity of 32%. The beam composition is shown in Figure 1. The two heavy ion contaminants were 26 F at 6% and 20 C at 2.8%; light ions comprised the rest of the contamination. The energy loss (∆E) versus time of flight plot shown in the left panel demonstrates the clean separation of 22 N from the contaminates as a function of time-of-flight. The shaded area in the right panel shows the applied gate in time-of-flight to select 22 N. Figure 2 shows a diagram of the experimental setup. After exiting the A1900, the secondary beam passed through two position-sensitive cathode readout drift chambers (CRDC-1 and CRDC-2). Downstream of the CRDCs, a quadrupole triplet magnet focused the beam onto the 481 mg/cm 2 9 Be reaction target. A 0.254 mm plastic scintillator (SCI-1) was placed immediately upstream of the reaction target to determine beam, charged fragment, and neutron time of flight. The flight path length from the A1900 to the target position was 11.6 m. Neutron unbound isotopes produced in the reaction target immediately decayed into charged fragments and one or more neutrons. A large-gap superconducting dipole magnet [18] bent the charged fragments away from the beam axis, and the neutrons were detected near zero degrees by the Modular Neutron Array (MoNA) [19]. The magnet has a maximum rigidity of 4 Tm, a vertical gap of 14 cm through which neutrons pass, and a bending angle of 43 • . The magnet was set to a rigidity of 3.8 Tm to center the 20 C fragments on the charged particle detectors. Downstream of the magnet, two 30 cm square CRDCs separated by 1.82 m provided fragment trajectory information (CRDC-3 and CRDC-4). An ionization chamber and 4.5 mm thick plastic scintillator (SCI-2) provided energy loss information for element separation, while a 150 mm thick plastic scintillator (SCI-3) provided a total residual kinetic energy measurement. MoNA [19] consists of 144 10 × 10 × 200 cm 3 plastic scintillator bars with photomultiplier tubes (PMTs) attached to each end. The modules were arranged in walls that were 16 modules tall and centered on the beam axis. Walls of 2 by 16 modules each were positioned with their front faces at 5.90 m, 6.93 m, and 7.95 m from the reaction target. A block of three walls was placed at 8.65 m. Neutron time of flight was measured from the mean time of the two PMT signals of the detector module that detected an interaction, while position across the bar was measured by the time difference between the signals. Data Analysis A one-proton removal reaction was used to populate neutron unbound 21 C from the 22 N secondary beam. Charged fragments were separated by element using energy loss information from the ionization chamber and the time of flight between SCI-1 and SCI-2 as shown in Figure 3. Finally, isotopes of a given element were separated by correcting the time of flight between SCI-1 and SCI-2 for correlations with both dispersive and nondispersive angle and position measured by CRDC-3 and CRDC-4. This separation is shown in Figure 4, with the grey region representing the selected 20 C fragments of interest. 
The solid line indicates the result of a fit including three convoluted Gaussian functions to estimate cross contamination between isotopes, while the dashed lines indicate each isotope's contribution to the total. With the 20 C selection gate shown, the contamination level is 2% and 20% of the 20 C fragments are lost. Reconstruction of the decay energy of 21 C was performed using the invariant mass method. In this method, the decay energy E d can be expressed as where E f and E n are energies, p f and p n are momentum vectors, and m f and m n are masses for the charged 20 C fragment and neutron respectively. Energies and angles for the charged fragments at the target were constructed from the post-magnet trajectory information using a transformation matrix generated by COSY INFINITY [20] based on measured magnetic field maps. The reconstruction was improved by including the measured dispersive position at the target in the transformation matrix as described in Ref. [21]. The dominant uncertainties for this process were position and angular resolution in CRDC-3 and 4, with values σ pos = 1.3 mm and σ ang = 1 mrad respectively as determined by data taken with a tungsten mask placed in front of the detectors. Neutron energies and angles were determined from the position and time of flight of the neutron interactions in MoNA. The time of flight resolution had σ tof = 0.3 ns, while position resolution along a bar was characterized by shadow bar data and simulations as a sum of Laplacians [22,23] f e −|x/σ1| where f = 0.534, σ 1 = 16.2 cm, and σ 2 = 2.33 cm. Discretization of the detector bars in the y and z directions resulted in uniform uncertainty of ±5 cm for those components. Monte Carlo simulations of the full experimental setup, reaction process, and decay characteristics of 21 C were performed to account for the effects of each detector's geometrical acceptance and resolution on the reconstructed decay energy. The reaction in the target was treated by a Goldhaber model [24] with a friction term [25]. These simulations were validated and all free parameters were constrained by comparison to experimental spectra except for those related to the neutron decay. Figure 5 shows the comparison between simulation and data for fragment position in CRDC-3 and reconstructed fragment kinetic energy. Simulated data sets were generated to populate grid points of the decay parameter phase space in question; these were analyzed using the same software as the experimental data and the two data sets were directly compared. Figure 6 shows the measured decay energy for 21 C. The data (black squares) are distributed over a broad energy range and do not exhibit any obvious resonance. A sharp resonance was not expected because the ℓ = 2 states were not expected to be populated in the proton removal reaction from the 22 N ground state, which is accepted to possess a J π of 0 − [26,27]. The calculated spectroscopic factors for the ℓ = 2 5/2 + and 3/2 + excited states in 21 C are 0 and 0.05, respectively. In contrast, the spectroscopic factor for populating the ℓ = 0 1/2 + state is 0.75. Therefore, the decay energy spectrum was fit assuming a pure s-wave decay, with the line shape for the decay calculated from the analytic approximation from Eq. (30) of Ref. 
[31]: Results where the parameterization δ = a s k is used for the phase shift, γ = √ −2mǫ i /h is the decay length of the initial 22 N ground state, ǫ i is the binding energy of 22 N, k = 2mǫ f /h is the final momentum of the neutron in the continuum, and ǫ f is the decay energy of the neutron. The scattering length a s was allowed to freely vary in the fitting process from 0 to −100 fm. The fitting process used a minimum χ 2 technique and favored a 1σ limit of |a s | < 2.8 fm, with the best fit being a s = −0.05 fm (shown in Figure 6 as the dash-dotted and dotted lines respectively). The figure also demonstrates the sensitivity of the present set-up to any potential low energy virtual state by comparing an s-wave decay with a s = −15 fm (dashed line) to the data and short scattering length curves. It is clear that any low lying s-wave state would have been apparent in the decay energy spectrum. The small magnitude of the extracted scattering length of a s = −0.05 fm is essentially equivalent to no interaction at all, thus quoting a resonance energy is not meaningful [32]. It should be mentioned that we cannot rule out the possibility of decay to the 2 + state in 20 C as the setup did not measure coincident γ-rays. However, it is not likely that this decay mode be observed. The spectroscopic factor for decay to the 0 + ground state of 20 C was calculated to be 0.38, while that for the 2 + first excited state was 1.2. Since the spectroscopic factors are relatively close in magnitude, the lack of momentum barrier for ℓ = 0 neutron decay to the 0 + will cause it to dominate the ℓ = 2 transition to the 2 + . Furthermore, the ℓ = 2 transition to the 2 + would appear as a well-defined resonance in the decay energy spectrum, and no such resonant peak is observed. Discussion The results of different shell model calculations for 21 C and 22 C are shown in Figure 7 together with the experimental level scheme for 20 C and the recently measured S 2n for 22 C. We performed calculations with NuShellX@MSU [33] in a truncated s − p − sd − pf model space with a modified WBP [34] interaction labeled WBP*. In this interaction the neutron sd two body matrix elements (TBME) were reduced to 0.75 of their original value in order to reproduce a number of observables in neutron rich carbon isotopes [29]. Comparison was then made to the unmodified WBP interaction as well as the WBT and WBT* interactions, where WBT* incorporates the same TBME reduction as WBP*. These interactions have been used extensively for calculations in this neutron rich region with Z<8 [16,26,29,35,36]. For all interactions, the 21 C 1/2 + s-state is expected to be unbound by more than 1.4 MeV, which is consistent with our measurement as no low lying states are predicted. However, as seen in the figure, all interactions also incorrectly calculate 22 C to be unbound. Two recent shell model calculations with different interactions do not have the same problem. Coraggio et al. derived the single-particle energies and the residual two-body interaction of the effective shell-model Hamiltonian from the realistic chiral NN potential N 3 LOW [30]. The calculations predict 21 C to be unbound by 1.6 MeV and 22 C to be bound by 601 keV which is even more bound than the recent measurement of 0 +320 −0 keV by Gaudefroy et al. [15]. The calculations by Yuan et al. use a newly constructed shellmodel Hamiltonian developed from a monopole-based universal interaction (V MU ) [11]. 
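Referring back to the invariant-mass reconstruction described in the Data Analysis section, the relation can be written out explicitly as E_d = sqrt[(E_f + E_n)^2 - |p_f + p_n|^2] - m_f - m_n, where E_f and E_n are the total (rest plus kinetic) energies. The sketch below (illustrative Python/NumPy, not the collaboration's analysis code) reconstructs a decay energy from fragment and neutron kinetic energies and laboratory angles; the rest masses and the example kinematic values are approximate and purely for demonstration.

```python
import numpy as np

M_N = 939.565                  # neutron mass [MeV/c^2]
M_F = 20 * 931.494 + 37.6      # approximate 20C mass [MeV/c^2]: A*u + rough mass excess

def four_vector(kinetic_energy, mass, theta, phi):
    """Return (E, px, py, pz) for a particle of given kinetic energy [MeV],
    mass [MeV/c^2] and laboratory polar/azimuthal angles [rad]."""
    e_tot = kinetic_energy + mass
    p = np.sqrt(e_tot**2 - mass**2)
    return np.array([e_tot,
                     p * np.sin(theta) * np.cos(phi),
                     p * np.sin(theta) * np.sin(phi),
                     p * np.cos(theta)])

def decay_energy(t_frag, ang_frag, t_neut, ang_neut):
    """E_d = sqrt((E_f + E_n)^2 - |p_f + p_n|^2) - m_f - m_n."""
    f = four_vector(t_frag, M_F, *ang_frag)
    n = four_vector(t_neut, M_N, *ang_neut)
    s = f + n
    m_inv = np.sqrt(s[0]**2 - np.dot(s[1:], s[1:]))
    return m_inv - M_F - M_N

# Purely illustrative kinematics: a ~68 MeV/u fragment and a ~66 MeV neutron
# emitted at small laboratory angles (theta, phi in radians).
ed = decay_energy(20 * 68.0, (0.010, 0.0), 66.0, (0.030, np.pi / 2))
print(f"E_d = {ed:.3f} MeV")
```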
The results of the calculations are consistent with the two-neutron separation energy of 22 C and predict 21 C to be slightly less unbound (∼1.3 MeV, extracted from Figure 6 of Ref. [11]) as compared to the other calculations. All models predict 21 C to be unbound by significantly more than 1 MeV. This is consistent with the non-observation of any resonance structure in the present data. However, it is somewhat unexpected as it represents the first case where the two-body subsystem of a Borromean two-neutron halo nucleus does not have a low-lying virtual state or resonance below 1 MeV. This might be another indication that the two-neutron separation energy of 22 C is extremely low and very close to the binding threshold. The relationship between the two-neutron separation energy of a halo nucleus and the virtual state or resonance of the unbound two-body subsystem has been demonstrated before for 11 Li renormalized zero-range three-body model assuming a pure s-wave valence neutron configuration [7,8]. They establish limits on S 2n of ∼120 keV and ∼70 keV based on the measured matter radius of 22 C [5] for virtual state energies of 0 keV and 100 keV, respectively. For states near threshold the energy of a virtual state can be related to the scattering length with E ≃h 2 /2µa 2 s , where µ is the reduced mass [37]. This relationship places a 100 keV virtual state in 21 C at approximately a s = −15 fm. As shown in Figure 6 our data are not consistent with such a low-lying state, limiting the two-neutron separation energy of 22 C to less than 70 keV. Conclusions In summary, we have searched for the neutron-unbound nucleus 21 C via one-proton removal from a 22 N beam. The reconstructed 20 C+n decay energy spectrum does not contain any evidence for a lowlying state, which agrees with shell model calculations that predict this state to be neutron unbound by 1.5 MeV. From this non-observation an upper limit on the two-neutron separation energy of 22 C can be placed at S 2n < 70 keV based on calculations from a renormalized zero-range three-body model [7,8]. This is consistent with the recently measured limit of S 2n < 320 keV [15] as well as calculated limits of S 2n < 220 keV [9] and S 2n < 50 keV [10]. In the future, the search for 21 C should concentrate on populating the d 5/2 state. Although the s 1/2 is still predicted to be the ground state it cannot be observed at the calculated decay energy. The d 5/2 state can be populated with a one-neutron transfer reaction with a 20 C beam on a deuteron target. The excitation energy is predicted to be above the first excited 2+ state of 20 C and thus γ-rays have to be detected in order to determine if any observed decay populates the ground state or the first excited state of 20 C. : Experimental levels of 20 C and theoretical 1/2 + and 0 + ground states for 21 C and 22 C, respectively. The data for 20 C are from [28,29] and the measured two-neutron separation energy for 22 C from [15]. The N 3 LOW and VMU calculations are from [30] and [11], respectively.
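Returning to the relation E ≃ ħ²/(2μ a_s²) invoked in the Discussion, a short numerical cross-check (illustrative Python, using an approximate reduced mass for the n+20C system) reproduces the quoted correspondence between a ~100 keV virtual state and a_s ≈ -15 fm, and shows that the experimental limit |a_s| < 2.8 fm pushes any virtual state to an energy scale of a few MeV, i.e. well above the region probed.

```python
import math

HBARC = 197.327                      # MeV*fm
M_N = 939.565                        # neutron mass [MeV/c^2]
M_20C = 20 * 931.494 + 37.6          # approximate 20C mass [MeV/c^2]
MU = M_N * M_20C / (M_N + M_20C)     # reduced mass of the n + 20C system

def virtual_state_energy(a_s_fm: float) -> float:
    """E ~ hbar^2 / (2 mu a_s^2), valid for states near threshold [MeV]."""
    return HBARC**2 / (2.0 * MU * a_s_fm**2)

for a_s in (-15.0, -2.8):
    print(f"a_s = {a_s:5.1f} fm  ->  E ~ {virtual_state_energy(a_s) * 1e3:7.0f} keV")
# a_s = -15 fm corresponds to ~100 keV, while the 1-sigma limit |a_s| < 2.8 fm
# corresponds to several MeV, consistent with the absence of a low-lying virtual state.
```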
Epstein Barr Virus-Encoded EBNA1 Interference with MHC Class I Antigen Presentation Reveals a Close Correlation between mRNA Translation Initiation and Antigen Presentation Viruses are known to employ different strategies to manipulate the major histocompatibility (MHC) class I antigen presentation pathway to avoid recognition of the infected host cell by the immune system. However, viral control of antigen presentation via the processes that supply and select antigenic peptide precursors is yet relatively unknown. The Epstein-Barr virus (EBV)-encoded EBNA1 is expressed in all EBV-infected cells, but the immune system fails to detect and destroy EBV-carrying host cells. This immune evasion has been attributed to the capacity of a Gly-Ala repeat (GAr) within EBNA1 to inhibit MHC class I restricted antigen presentation. Here we demonstrate that suppression of mRNA translation initiation by the GAr in cis is sufficient and necessary to prevent presentation of antigenic peptides from mRNAs to which it is fused. Furthermore, we demonstrate a direct correlation between the rate of translation initiation and MHC class I antigen presentation from a certain mRNA. These results support the idea that mRNAs, and not the encoded full length proteins, are used for MHC class I restricted immune surveillance. This offers an additional view on the role of virus-mediated control of mRNA translation initiation and of the mechanisms that control MHC class I restricted antigen presentation in general. Introduction Presentation of antigenic peptides on major histocompatibility (MHC) class I molecules is a signal for CD8 + T cells to distinguish between cells that express self or non-self antigens and forms an important part of the immune system's capacity to fight parasite invasion. There are several steps that endogenous peptides pass from their synthesis to the loading onto the MHC class I molecule. On one hand, the digestion of the peptide precursor by the proteasome [1,2,3], the affinity of the peptide for the TAP transporter [4], the trimming of the N-terminus by endopeptidases [5] and the sequence requirements of the peptide to fit the grove on the MHC class I molecules [6], are all important steps in determining the efficiency of peptide presentation. On the other hand, the steps prior to the digestion of the peptide precursor by the proteasome, the so called pre-proteasomal steps, have to ensure that enough peptide material is produced so that a sufficient amount of the correct peptide epitopes reaches the class I molecules in order to trigger a T cell response. It has been estimated that approximately 10 4 -10 5 MHC class I molecules are expressed by individual cells at any time to ensure a sufficient antigen presentation. Proteins and polypeptides exhibit a wide range of half-life, with an overall average of 1 to 2 days [7]. As the stability of viral proteins is many times high, it would take many hours for the cells to accumulate a sufficient amount of viral peptides to trigger the most efficient T-cell response if these were derived from the degradation of the full length protein. To explain the rapidity of viral-antigen presentation, a model has been proposed in which a fraction of rapidly degraded mRNA translation products (RDPs) [8] or defective ribosomal products (DRiPs) [9] with a half-life of less than 10 minutes constitute the main source for antigenic peptides. 
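A back-of-the-envelope first-order decay comparison makes this timing argument concrete. The sketch below (illustrative Python, not derived from the paper's data) compares the fraction of a protein pool turned over within one hour for a stable viral protein (half-life of order a day) versus rapidly degraded translation products (half-life below ten minutes), under the simple assumption of exponential decay; the two half-lives are rough values taken from the ranges quoted above.

```python
import math

def fraction_degraded(half_life_min: float, window_min: float = 60.0) -> float:
    """Fraction of a pool degraded within a time window, assuming
    first-order (exponential) decay with the given half-life."""
    k = math.log(2.0) / half_life_min       # decay rate constant [1/min]
    return 1.0 - math.exp(-k * window_min)

# Stable full-length viral protein (t1/2 ~ 1 day) vs. a rapidly degraded
# translation product (t1/2 ~ 10 min): only a few percent of the stable pool
# is processed within an hour, whereas the rapid products turn over almost fully.
for label, t_half in (("stable protein (t1/2 = 24 h) ", 24 * 60.0),
                      ("rapid product (t1/2 = 10 min)", 10.0)):
    print(f"{label}: {100 * fraction_degraded(t_half):5.1f}% degraded in 1 h")
```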
This model has been supported by the rapid slow down of TAP system by blocking protein synthesis and the equal rapid suppression of antigen presentation when transcription of an mRNA encoding a protein with a long half life is shut off [10]. In addition, cryptic mRNA translation products derived from different reading frames throughout the message can provide substrates for the MHC class I pathway [11]. The use of alternative translation products as a source for antigenic peptides, together with the fact that continues ribosomal activity is required for antigen presentation, implicates mRNA translation as an important pre-proteasomal step in regulating MHC class I restricted antigen presentation. However, the translation mechanisms that govern the synthesis of antigenic peptide products are unknown. Viruses adapt to their environment and manipulate their host cells in order to serve their needs. Controlling the MHC class I antigen presentation pathway is an important target for latent viruses in order to avoid detection of the infected host cell by the immune system. There are many examples of different strategies whereby viral products target the MHC class I pathway on the post-proteasomal level [12,13] but there is so far little known about how viruses affect the steps that control the production of antigenic peptides. The Epstein-Barr virus (EBV) expresses the nuclear antigen-1 (EBNA1) in all types of infected cells and in its type I latent form, e.g. observed in Burkitt's lymphoma, it is the only viral antigen detected [14]. A Glycine-Alanine repeat sequence (GAr) located in the N-terminal part of EBNA1 with no apparent biochemical function has a cis-acting capacity to suppress presentation of antigenic peptides to the MHC class I pathway and plays an important role for the EBV to evade immune detection [15,16]. Like EBNA1, Kaposi's sarcoma-associated herpesvirus LANA1 protein and the MHV-68 gamma herpes virus ORF73 are latent origin binding proteins that act for maintaining viral episomes in infected cells. These two proteins have more recently been suggested to use a similar strategy, but with different sequences, as EBNA1 to escape the MHC class I pathway [17,18], indicating that this might be a more commonly used concept among viruses to evade the immune system. It was recently shown that the nascent GAr peptide targets the initiation step of translation of any mRNA to which it is fused [19]. Here we have used GAr-mediated control of mRNA translation initiation to study its effect on MHC class I restricted antigen presentation. By manipulating the GAr sequence we can control the rate of translation initiation of a reporter mRNA in cis and we can thereby demonstrate that the rate of translation initiation, as opposed to other means of translation control, directly determines the amount of presented peptides derived from the main open reading frame as well as from cryptic translation products of a given mRNA. We discuss how these results fit together with the concept of EBNA1 as an immunologically silent protein and the proposed models for the source of antigenic peptide material for the MHC class I pathway. Results Direct influence of a Gly-Ala repeat on the presentation of a reporter epitope The Gly-Ala repeat sequence (GAr) of EBNA1 can prevent mRNA translation initiation and MHC class I restricted antigen presentation from open reading frames (ORF) to which it is fused [19,20]. 
To test to which degree the GAr is responsible for the suppression of EBNA1 antigen presentation we fused the SIINFEKL encoding antigenic peptide sequence (SL8) derived from chicken Ovalbumin (Ova) [21,22] into the EBNA1 (EBNA1-SL8) or in an EBNA1 in which the Gly-Ala repeat (GAr) was deleted (EBNA1DGA-SL8) (Fig. 1A). This allowed us to monitor any effect the GAr has on antigen presentation from these mRNAs using the B3Z CD8 + T hybridoma that is specific for the SL8 in the context of H-2K b MHC class I molecules [23]. H-2K b EL4 cells expressing Ova or EBNA1DGA-SL8 gave a similar level of presentation of the SL8 epitope, demonstrating that there is no significant difference in how this antigen is processed and presented between these two constructs. However, presentation of SL8 in the context of EBNA1 was dramatically suppressed and comparable to cells expressing an empty vector (Fig. 1B, left graph). Similar results were obtained using the H1299 human cell line in which a vector coding the mouse H-2K b MHC class I molecule was cotransfected together with the expression vectors for Ova, EBNA1-SL8 or EBNA1DGA-SL8 respectively (Fig. 1B, right graph). To test how this difference in antigen presentation correlates with GAr's inhibitory effect on protein synthesis, H1299 cells expressing each construct were pulsed for 1 h in the presence of 35 S-methionine and proteasome inhibitor in order to minimize any effects of proteasomal degradation before harvested. This revealed that EBNA1-SL8 is translated with approximately 60% reduced efficiency as compared to the EBNA1DGA-SL8 (Fig. 1C). Western blot analysis showed that the steady state protein expressions of the respective proteins correlate with their respective rate of synthesis (Fig. 1D). Similar results in terms of control of synthesis and antigen presentation were also obtained when the GAr sequence was fused to the N-terminus of Ova itself. Fusion of the full length GAr to the N-terminus of ovalbumin (GAr-Ova) effectively prevents the presentation of the SL8 peptide over a wide range of mRNA concentrations and eight times the amount of a GAr-Ova cDNA was required to reach the same level of antigen presentation as from cells expressing Ova alone (Fig. 1E, left panel). To ensure that the antigen presentation reporter system was not saturated under these conditions we increased the number of Ova expressing EL4 cells that were exposed to the same fixed number (5610 4 ) of B3Z cells used in the above experiments (Fig. 1E, right panel). These results, together with previous reports, collectively support the notion that the GAr domain alone inhibits presentation of peptides to the MHC class I pathway from the EBNA1 or from any mRNA to which it is fused, irrespectively of location [19,20]. The GAr suppresses presentation of antigenic peptides throughout the entire mRNA It is known that the GAr in addition to preventing translation initiation also has the capacity to inhibit protein unfolding and proteasome-mediated degradation in a substrate-and positiondependent fashion [24]. In order to investigate if the capacity of the GAr to affect protein stability plays a role in its capacity to provide immune evasion we separated its two functions. The SL8 was inserted in the 39 untranslated region (UTR) of the GAr sequence, either in the same or in a different reading frame (GAr-1 and GAr-2, respectively) where it is expressed as a cryptic minigene [25,26]. 
Thus, any effect the GAr has on antigen presentation from these mRNAs can be separated from its capacity to inhibit proteasomal degradation since the GAr and the SL8 epitope are expressed as separate polypeptides. We also fused the SL8 peptide in frame with the C-terminus of the GAr (GAr-3) ( Fig. 2A). In addition, we made the corresponding constructs where we exchanged the GAr sequence for that of the GFP (GFP-1, GFP-2 and GFP-3 respectively) ( Fig. 2A). The GFP is a suitable replacement for the GAr as it is also a protein with a low turnover rate, a poor substrate for the proteasomes [27] and, Author Summary The presentation of short peptides on major histocompatibility (MHC) class I molecules forms the cornerstone for which the immune system tells apart self from non-self. It is important for viruses such as the Epstein-Barr virus (EBV) to avoid this antigen presentation pathway in order to escape recognition and killing of its host cells. All EBVinfected cells, including cancer cells, express EBNA1 without attracting the attention of the immune system. In this report we describe the mechanism by which EBNA1 escapes antigen presentation. This should open up for new approaches to target EBV-associated diseases including cancers and immuno proliferative disorders and for understanding the underlying mechanisms of the source and regulation of antigenic peptide production. or on human cells co-expressing a genomic K b construct (right) was determined using B3Z CD8 + T cells [23]. The GAr domain suppresses presentation of SL8 by over 90% in either cell type. C) Autoradiograph of a 1 hour 35 S-methionine pulse label in the presence of proteasome inhibitors shows that importantly, the levels of mRNAs expressing the SL8 are similar in the context of either ORF (Fig. 2B). These different constructs allowed us to compare the effect of the GAr on suppressing MHC class I antigen presentation independently of its capacity to influence protein degradation. The level of presentation of SL8 from the GFP-3 is similar to that from Ova itself, demonstrating that there is no significant difference in how the antigen is processed in these two settings. However, presentation of the SL8 in the context of all GAr-carrying mRNAs is suppressed as compared with the corresponding GFP constructs, demonstrating that the GAr prevents mRNA antigen presentation throughout the entire mRNA (Fig. 2C). The GFP-1 and GFP-2 constructs give a lower level of presentation of the SL8 compared to when fused to the C-terminus of GFP (GFP-3) (Fig. 2C). This is explained by the fact that the antigenic peptide in GFP-1 and 2 are expressed as cryptic translation products compared with when it is fused to the main reading frame in GFP-3 and Ova. To ensure that the expression of SL8 from the GFP-2 and GAr-2 minigene constructs are indeed derived from an initiation event and not from a readthrough from the main ORF we substituted the AUG codon with GGC or GCC. As this completely prevented antigen expression from either constructs it shows that the expression of the SL8 is not due to a read-through event and, thus, that the GAr suppresses a reinitiating event (Fig. 2D). Western blot analysis confirms that the expression levels from the main ORF of the different constructs are similar (Fig. 2E). The notion that SL8 expressed from the 39UTR is derived from an independent initiation event is further supported by treating cells with IFNc. 
IFNc stimulates the induction of immunoproteasomes and N-terminal trimming peptidases that together give a more efficient processing of peptides longer than 8-10 residues for loading onto MHC class I molecules [28,29]. IFNc treatment does not affect presentation of the SL8 when inserted in the 39UTR, which is what is expected if it is expressed as a minigene, and only when fused directly to the C-terminus of GFP or GAr (Fig. 2F, left panel). By treating cells with the proteasome inhibitor epoxomicin we observed that that presentation of SL8 is proteasome-dependent when derived from Ova, or fused to GFP, but not when translated as an out-of-frame minigene downstream of GFP (GFP-1 &-2) (Fig. 2F, right panel) or GAr (GAr-1 &-2) (data not shown). These results show that the cryptic translated SL8 peptides are derived from independent translation initiation and do not carry additional residues from the main upstream reading frame that could interfere with the processing of the MHC class I peptide. Taken together, these results show that the GAr suppresses the presentation of the SL8 epitope within the same reading frame and out-of-frame epitopes. Hence, the GAr suppresses MHC class I restricted antigen presentation by preventing translation initiation throughout the entire mRNA and its potential capacity to control protein stability is not required to impose immune evasion. Altering mRNA translation initiation overrides the effect of the GAr The nascent GAr peptide is regulating the synthesis of EBNA1 by directly blocking initiation of the EBNA1 mRNA translation in cis which is caused by a delay in the assembly of the initiation complex [19]. The molecular target of the GAr is not yet known but we have observed that insertion of the c-myc IRES [30] in the 59UTR of GAr-Ova (c-myc-GAr-Ova) overrides GAr-dependent inhibition of protein synthesis and restores the rate of expression to approximately 70% of that of Ova alone (Figs. 3A and 3B, left panel). Similarly, the c-myc IRES also induced the expression of the GAr alone approximately 3-fold, demonstrating that this effect is restricted to the GAr itself (Fig. 3B, right panel). The c-myc IRES has no effect on the rate of Ova synthesis when inserted in an identical way in the 59UTR (Fig. 3B, left panel), indicating its specific effect on GAr-mediated translation control. Moreover, the c-myc IRES and the GAr domain only affect mRNA translation in cis as we do not see any changes on the rate of translation of actin or of the exogenous GFP (Fig. 3C). Thus, the combination of the GAr and the c-myc IRES provides us with tools with which we can study the relationship between the rate of mRNA translation initiation and the production of antigenic peptides from a single mRNA without targeting protein synthesis or degradation using general chemical inhibitors. When we tested the effect of the c-myc IRES on GArdependent control of antigen presentation we observed a 70% presentation of SL8, as compared to Ova alone or c-myc-Ova (Fig. 3D). Under the same conditions the presentation of SL8 from the GAr-Ova fusion construct was approximately 5-fold less. As the c-myc IRES does not affect the GAr-Ova ORF this result further underlines that the potential effect of the GAr to control the stability of the protein to which it is fused is not sufficient to suppress antigen presentation. 
Insertion of the c-myc IRES in the 59UTR of the GAr-2 mRNAs also resulted in a sharp increase in antigen presentation, demonstrating that the same mechanism of translation initiation control that regulates the production of antigenic peptides derived from the main ORF also regulates the production peptides derived from cryptic minigenes. The capacity of the c-myc IRES to neutralise the translation inhibitory effect of the GAr is cell specific and was observed in three out of three human cell lines tested (Table 1). However, it has been shown that the efficiency of the c-myc IRES-driven translation varies between cell lines from different origins. In murine cell lines the c-myc IRES-driven translation is much lower than in human cell lines and importantly it has been shown to be inactive in murine adult tissue [31,32]. This explains the finding that the c-myc IRES was incapable to override the translation inhibitory effect of the GAr in all the murine cell lines tested ( Table 1). The c-myc IRES has been characterised and consists of different domains and predicted ribosome entry window (Figs. 4A and 4B). It has been shown that deletion of the domain 1 reduces its activity with about 60% [33]. In line with this, fusion of a c-myc IRES, that lacks domain 1 (Dcmyc-IRES), in the 59UTR of Ova-GAr results in a reduced capacity to override suppression of translation and antigen presentation (Fig. 4C). This further links the effect of the c-myc IRES and its capacity to overcome GArdependent suppression with its capacity to control mRNA translation initiation (Fig. 4C). These results show that GAr-mediated suppression of translation initiation is sufficient and necessary to prevent antigen the EBNA1-SL8 mRNA is translated 60% less efficiently as compared with the EBNA1DGA-SL8. The graph below shows values determined from phosphoimager analysis. D) Western blot shows the steady state level of expression of indicated constructs without proteasome inhibitors. E) Doseresponse curve shows that approximately 8 mg of GAr-Ova cDNA is required to reach a similar level of SL8 presentation as that of 1 mg of Ova (left panel). Increasing number of EL4 cells expressing indicated constructs in the presence of a fixed amount (5610 4 ) of B3Z (right graph). The results show representative data from at least three independent experiments in which transfected cells were split and tested for protein synthesis or antigen presentation with SD. doi:10.1371/journal.ppat.1001151.g001 presentation from the main ORF as well as from cryptic translation products and that its effect can be neutralised by alternative mechanisms of initiation provided by the c-myc IRES. The rate of mRNA translation initiation directly determines antigen presentation The observations that the increase in rate of synthesis after insertion of the c-myc-IRES corresponds to antigen presentation indicates a close correlation between the rate of mRNA translation initiation and the capacity of CD8 + T cells to detect and destroy virus-infected host cells. In order to look more closely at this relationship, we wanted to change the rate of Ova synthesis more subtly compared with the ''on/off'' effect obtained with the c-myc-IRES and we used mutated GAr sequences that have been shown to affect the synthesis of GAr fusion proteins. The lymphocrypto-Papio and the Rhesus viruses infect Old World primates and express EBNA1 homologues that carry shorter GAr-like sequences that have previously been shown not to prevent antigen presentation [34]. 
The EBV-GAr sequence consists of single alanine residues separated by one, two or three glycines while the Papio-GAr carries single serine residues inserted in every seven residues of the repeat (Fig. 5A and [19]). When we fused a 30 amino acid Papio-GAr sequence (30GAr-Papio-Ova) and a corresponding 30 amino acid EBV-GAr sequence (30GAr-EBV-Ova) to the N-terminus of Ova we observed that the Papio-GArlike sequence had no effect on mRNA translation or antigen presentation while insertion of the corresponding EBV-GAr sequence resulted in an approximately 4-fold less synthesis and antigen presentation (Figs. 5B and 5C). Similar results were also obtained with the Rhesus GAr-like sequence which also carries serine insertions (data not shown). Interestingly, the Papio-GAr has the capacity to control protein stability and fusion of the Papio-GAr to the p53 protein, which is normally targeted for the ubiquitin-dependent degradation pathway by the MDM2 E3 ligase, resulted in a stabilisation and in an accumulation of polyubiquitinated 30GAr-Papio-p53 products in the presence of MDM2 (Fig. 5D). This indicates that while the Papio sequence retains the effect on proteins stability, the disruption of its GAr sequence is sufficient to render it inefficient in preventing antigen presentation or protein synthesis control [24]. We next tested a construct in which the GAr repeat had been disrupted by inserting two alanines next to each other on three locations (32GAr3A-Ova) (Fig. 5E). This retains the GC rich content of the GAr RNA sequence without introducing new amino acid residues. In this case, the rate of synthesis was approximately 50% less compared with Ova alone but over twofold more efficient than that of the wild type GAr (30GAr-EBV-Ova) (Fig. 5F, left panel). If two glycine residues were instead replaced by serines (31GAr2S-Ova) we obtained a 75% translation efficiency as compared to Ova alone. When we next compared the effects of these GAr sequences on antigen presentation we observed that the rate of mRNA translation initiation is closely followed by the amount of antigens presented to the MHC class I molecules (Fig. 5F, right panel). Taken together, these results indicate a direct and proportional relationship between endogenous antigen presentation and mRNA translation initiation control. Discussion Our results further underline the notion that the capacity of EBNA1 to evade the MHC class I antigen presentation pathway and the detection by CD8 + T cells relies on the Glycine-Alanine repeat (GAr) sequence [15,16,20,35]. Deletion of the GAr sequence from EBNA1-SL8 resulted in the same amount of antigen presentation as when SL8 was presented from the Ova message indicating that no other regions of EBNA1 are needed to evade MHC class I antigen presentation. Dose-response experiments show that this effect is not dependent on the amount of mRNA expressed in the cells and that at least eight times the amount of a GAr-carrying mRNA is required in order to reach a similar level of antigen presentation as that of a corresponding non-GAr carrying mRNA. The GAr has the unique dual capacity to suppress both its own mRNA translation initiation as well as the stability of proteins to which it is fused. 
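Referring back to the proportional relationship reported above, one simple way to quantify it is a correlation between the relative translation rates and the relative SL8 presentation across the GAr variants. The sketch below (illustrative Python) shows such a calculation; the translation values echo the approximate percentages quoted in the text, while the presentation values are hypothetical placeholders standing in for the measured B3Z responses, which are reported only graphically.

```python
from statistics import correlation   # Pearson's r; available in Python 3.10+

# Relative rates (Ova = 100). Translation values follow the approximate
# percentages quoted in the text; presentation values are hypothetical
# placeholders, not the measured B3Z read-outs.
constructs = {
    "Ova":             (100.0, 100.0),
    "30GAr-Papio-Ova": (100.0,  95.0),
    "31GAr2S-Ova":     ( 75.0,  70.0),
    "32GAr3A-Ova":     ( 50.0,  45.0),
    "30GAr-EBV-Ova":   ( 25.0,  20.0),
}

translation = [v[0] for v in constructs.values()]
presentation = [v[1] for v in constructs.values()]
r = correlation(translation, presentation)
print(f"Pearson r (translation vs. presentation) = {r:.3f}")
```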
By separating these two functions from each other we have shown that the GAr suppresses peptide production from different reading frames of an mRNA to which it is fused and, hence, that control of mRNA translation initiation is both sufficient and necessary for its capacity to suppress MHC class I antigen presentation (Fig. 6). Previous studies have shown that the GAr can prevent unfolding of substrates targeted for the 26S proteasome by affecting 19S-dependent unfolding in a substrate and position-dependent fashion. However, the GAr has no, or little, effect on the stability of EBNA1 itself, suggesting that fusion of GAr to 26S proteasome substrates gives unspecific effects on protein stability that are unlikely to play any physiological role for the virus [24]. These observations together with our results instead support a model in which the cis-mediated effect of the GAr on EBNA1 mRNA translation initiation is the sole mechanism by which EBNA1-expressing latently EBV-infected cells can evade recognition by the immune system. However, this does not mean that EBNA1 stability is not an important feature in EBV's strategy to evade the immune system for the simple reason that a low rate of EBNA1 synthesis requires a low turnover rate in order to allow a sufficient amount of EBNA1 to be expressed (Fig. 6). The expression of EBNA1 in the host resting B memory cells, as compared to rapidly proliferating BL cells, is likely less, which could further contribute to help the virus to establish an immune evasive latency. reading frame (GAr-2) or fused to GAr (GAr-3). The corresponding constructs were also made in which the SL8 was inserted in the GFP mRNA in an identical way (GFP-1 to 3). The AUG, GCC or GGC initiation codons for SL8 in GAr-2 were used. B) Northern blot analysis using the SL8 sequence as probe shows that the GAr sequence does not influence the expression levels of the corresponding mRNAs. C) The GAr suppresses presentation of SL8 throughout the entire mRNA, demonstrating that its capacity to prevent antigen presentation does not depend on controlling protein degradation. D) Changing the initiation codon of the SL8 from AUG to GGC (Glycine) or GCC (Alanine) in the GAr-2 prevents expression and antigen presentation, demonstrating that the SL8 is derived from an individual translation initiation event. Presentation of SL8 expressed as a minigene using the initiation codons AUG, GGC and GCC (SL8AUG, SL8GGC and SL8GCC, respectively). E) Western blot shows the steady state level of expression of indicated constructs (GAr-1 to 3 and GFP-1 to 3). F) IFNc stimulates antigen presentation by optimising processing of peptides longer than 8-10 residues and has only an effect on antigen presentation when SL8 is expressed in the main ORF and not when expressed as cryptic peptide in an alternative reading frame (left graph). Epoxomicin is a specific proteasome inhibitor and prevents presentation of SL8 when expressed in the main ORF (right graph). This shows that expression of the SL8 epitope as a cryptic peptide does not carry additional residues from the main reading frame. Data are representative of three or more independent experiments and values are shown with SD. Protein synthesis and antigen presentation data are derived from cells transfected with the indicated constructs before split and tested separately. 
doi:10.1371/journal.ppat.1001151.g002 Several results lead us to propose that control of initiation of mRNA translation, and not prevention of elongation, is the key feature for the GAr domain in controlling MHC class I antigen presentation. Firstly, the insertion of the c-myc IRES in the 59UTR of GAr-carrying mRNAs prevents the GAr from suppressing mRNA translation and antigen presentation. It is unlikely that the insertion of an IRES in the 59UTR without any changes to the main reading frame could impose differences in GAr-dependent control of synthesis other than via altering the initiation conditions. This is further supported by the observations that deletion of the domain 1 of the c-myc IRES prevents its effect on overcoming the GAr and its cell line-dependent mode of action. This might also indicate that the target factor of the GAr is independently recruited to the polysome by domain I and could further help to elucidate the mechanisms of GAr action on translation initiation. Secondly, the GAr-like sequence derived from the Papio virus, where a single serine residue is inserted at every seven amino acids, has little effect on the rate of protein synthesis or on antigen presentation. Furthermore, by inserting three alanines into the EBV-GAr sequence we retain the GC rich mRNA sequence but increase synthesis and antigen presentation. Finally, the GAr suppresses presentation of antigenic peptides that are derived from independent translation initiation events in the 39UTR of the GAr-encoding reading frame. The latter type of translation initiation of cryptic minigenes was shown by the group of Shastri to be sufficient to provide antigenic peptides for the MHC class I pathway [25]. It is difficult to see how pretermination of the translation due to difficulties for the ribosome in reading through the GC rich region could explain suppression of independent down-stream initiation events [36]. In addition, truncated EBNA1 peptides due to failure of elongation are not observed in EBV infected cells. Taken together, neither of the observations presented here are likely to occur if the GAr acts via mechanisms related to elongation, including difficult ribosomal read-through, codon exhaustion, or other more specific mechanisms [36,37]. Previous studies have shown that changes to the GAr peptide sequence, but not RNA sequence impair its efficiency to suppress translation [19], underlining that the effect is mediated by the peptide, and not the RNA sequence. The GAr is predicted to be unstructured and does not included charged residues and, as expected, a 30 amino acid GAr peptide was found not to bind the EBNA1 mRNA ( [20] and data not shown). However, a recent report suggests that the Gly-Arg repeat of EBNA1 has RNA binding capacity [38]. Blocking translation initiation offers an explanation to how the GAr can succeed in suppressing production of DRiPs/RDPs and thus antigenic peptides derived from initiation events from all reading frames throughout the mRNA. Based on the model that antigenic peptides are not derived from degradation of full length proteins it should not be possible for the GAr to avoid presentation of peptides derived from EBNA1 by controlling its turnover rate since this would only affect peptides derived from the full length EBNA1 and not DRiPs [39] or RDP products that do not carry the GAr. 
Our previous work and toeprint analysis carried out by others indicate that the GAr peptide has to be synthesised in order to suppress mRNA translation initiation and that it does not affect the site of initiation [19,36]. This is line with the notion that the GAr would not give rise to truncated EBNA1 peptides due to alternative initiation sites or diffuse pre-termination events that would not serve the function of the protein and thereby not support viral replication, nor prevent presentation of upstream antigenic peptides, but to an overall suppression of translation in cis. In fact, using GAr specific polyclonal sera one does not see any massive accumulation of truncated EBNA1 products in normal EBV-infected cells that would have been expected from a pre-termination event derived from within the GAr sequence. One question that arises from this study is if this mechanism of evading the immune system is efficient, it should be adapted by other viruses. It has recently been suggested that the MHV-68 gamma herpes virus ORF73 is using a similar mechanism as EBNA1 to evade MHC class I restricted antigen presentation. In this case, however, the sequence is not identical to the GAr, restores translation in H1299 cells but does not affect translation when inserted in the 59UTR of Ova alone (left panel). Western blot shows that the effect of the c-myc IRES is restricted to the GAr alone (right panel). C) Autoradiograph of a 30 minutes 35 S-metabolic pulse label experiment of the endogenous protein (actin) and the exogenous GFP protein in H1299 cells in the presence of the c-myc IRES and the GAr constructs. D) The c-myc IRES stimulates SL8 presentation in the context of the GAr from the main open reading frame as well as cryptic translated products (see Fig. 2A). Data are representative of three or more independent experiments plus SD. doi:10.1371/journal.ppat.1001151.g003 Table 1. GAr-dependent inhibition of antigen presentation in different cell lines from different origins. The presentation of SIINFEKL from chicken ovalbumin (Ova) itself or Ova inserted in the EBNA1 coding sequence was detected using the B3Z reporter cells. The presentation of SIINFEKL from Ova or from an EBNA1 construct that lacks the GAr sequences (EBNA1DGA) was given the arbitrary value of 100%. The right column shows the effect of the c-myc IRES on GAr-dependent inhibition of antigen presentation in cell lines from different origins. The table shows suggesting that other amino acid sequences can achieve a similar effect [17]. The LANA-1 of Kaposi's sarcoma virus is also believed to use a similar mechanism but with a different repeat sequence [18]. Hence, several viruses might use a similar concept but different sequences. This makes it more difficult to predict how wide spread this type of mRNA translation control is among viruses but it indicates that the strategy to escape the MHC class I pathway by manipulating mRNA translation initiation applies to several viruses and is not restricted to the EBV. The GAr acts in cis and offers a unique opportunity to study the relationship between antigen presentation, protein stability and mRNA translation without the addition of general chemical inhibitors of protein synthesis or degradation that might have indirect or unspecific effects. By making minimal changes to the GAr amino acid sequence we have shown that we can fine tune translation initiation of the mRNAs and, as far as we are aware, there are no other systems described that allow this. 
The GAr consists of single alanines disrupted by one, two or three glycines. Disruption of the GAr by two alanines next to each other on three locations is sufficient to reduce the translation and antigen presentation inhibitory effect of 30 amino acids GAr sequence by approximately 50%. Introduction of two serines on two locations has a more disruptive effect and reduces its effect with approximately 75%. This demonstrates a close correlation between the rate of mRNA translation initiation and MHC class I restricted antigen presentation that has consequences for understanding the source of antigenic peptides for the MHC class I pathway. These results are in line with other studies suggesting that protein stability does not affect antigen presentation [40] and indicates that a fundamental part of the immune system's capacity to detect virus-infected host cells is reliant on the mechanism of viral mRNA translation, as opposed to any features linked to the actual viral proteins. This supports the notion that MHC class I restricted immune surveillance is in fact directly correlated with the mechanisms that regulate protein synthesis and not protein degradation and supports the model where it is in fact the presence of mRNA, and not the full length proteins, that is surveilled by the MHC class I pathway [41]. This opens up for novel ways of interpreting viral control of mRNA translation and new approaches for therapeutic intervention aimed at virus associated diseases. These results also have broader implications in the understanding of the peptide selection process and will allow the prediction of antigenic peptide production from specific mRNAs that has implications for generating more efficient DNA vaccines and potentially also for better understanding of dysregulated antigen presentation in autoimmune disease and the generation of self tolerance. label. C) Presentation of SL8 derived from corresponding constructs in EL4 cells. D) The p53 protein is targeted for the ubiquitin-dependent degradation pathway by the E3 ligase MDM2 [42]. Fusion of the Papio GAr to p53 results in an accumulation of polyubiquitinated products in the presence of MDM2, showing that the Papio GAr retains the capacity to affect protein degradation [24]. E) The GAr sequence consists of single alanines separated by one, two or three glycines. Introducing 2 alanines (GCC) next to each other on 3 separate places (32GAr-3A-Ova) does not alter the overall GC content of the RNA sequence. F) Introducing a single serine next to an alanine at two locations (31GAr-2S-Ova) is more disruptive in terms of mRNA translation as compared with the 32GAr-3A-Ova (left, upper panel). The corresponding effect on antigen presentation is shown in the graph below. The right graph shows the arbitrary values of the rate of mRNA translation initiation and antigen presentation. Data are representative of three or more independent experiments and values are shown with SD. Cells were transfected with the indicated constructs before split and tested separately for antigen presentation or synthesis. doi:10.1371/journal.ppat.1001151.g005 Figure 6. Antigenic peptides (A.P.) can be derived from the main open reading frame as well as cryptic peptides from alternative reading frames (yellow) and from the 39UTR (pink) [25]. The nascent GAr polypeptide (red) of the EBNA1 prevents translation initiation throughout the entire mRNA, including its own reading frame and cryptic peptides. 
This allows the EBV to evade the MHC class I restricted antigen presentation of peptides from the EBNA1 message and helps the virus to evade the immune system. The GAr also prevents the synthesis of the EBNA1 full length protein but its long half life ensures that functional levels of EBNA1 are expressed. doi:10.1371/journal.ppat.1001151.g006 Electrophoresis and western blotting Following separation on 12% SDS-PAGE, proteins were transferred to 0.45 mm nitrocellulose membranes, and blots were blocked for 1 hour at room temperature with a 5% skim milk in TBS solution consisting of 20 mM Tris, 500 mM NaCl, 0.1% Tween 20, pH 7.5. Blots were incubated overnight at 4uC with anti-EBNA1 mouse monoclonal antibody (OT1X) (1:1000) or polyclonal anti-GA antibody (1:500), raised against the Gly-Ala sequence of EBNA1 protein or a monoclonal actin antibody (1:1000) Chemicon International (Temecula, CA) or anti-p53 rabbit polyclonal antibody (CM-1). The membranes were washed before incubated with horseradish peroxidase-conjugated rabbit anti-mouse or mouse anti-rabbit immunoglobulin antibody (1:5000) for another 1 h and detected using ECL (Amersham Bioscience). The ECL signal was quantified using CCD camera and associated software (Vilber Lourimat, France). Pre-stained molecular markers were from Fermenta (Ontario, Canada). Plasmid constructions All plasmids were generated using standard procedures. Restriction enzymes, T4 DNA ligase and calf intestinal alkaline phosphatase were obtained from New England Biolabs (Ipswich, MA). Purified synthetic oligonucleotides were obtained either from MWG biotech (Ebersberg, Germany) or Eurogentec. Routine plasmid maintenance was carried out in DH5a and TOP10 bacteria strains. The EBNA1 and EBNA1DGA were generated using oligonucleotide pairs 59AGTATAATCAACTTTGAAAAACTCTGA-GAAG39 and 59CTTCTCAGAGTTTTTCAAAGTTGATTA-TACT39, encoding the SL8 peptide, inserted into the unique Bstx1 site found in EBNA1 sequence, right after the GAr sequence. The GFP-1 construct was prepared using oligonucleotide pairs 59AATTCTGAATGAGTATAATCAACTTTGAAAAACTCT-GAT39 and 59CTAGATCAGAGTTTTTCAAAGTTGATTA-TACTCATTCAG39, encoding the SL8 peptide, inserted into the EcoR1/Xba1 sites of EGFP-C2 vector (BD Biosciences Clontech, Palo Alto, CA). The GAr-1 construct was made using the same oligonucleotide pairs inserted in the 39UTR of the GAr, itself cloned into the pCDNA-3 vector (Invitrogen, Carlsbad, CA). The GFP-2 and GAr-2 constructs were made in the same way using the oligonucleotide pairs 59AATTCCTGAATGAGTATAATCA-ACTTTGAAAAACTCTGAT39 and 59CTAGATCAGAGTT-TTTCAAAGTTGATTATACTCATTCAGG39. The different GAr-2 constructs were prepared by mutating the AUG codon in GCC or GGC using standard procedures. The GFP-3 and GAr-3 constructs were prepared using the oligonucleotide pairs 59AATT-CAGTATAATCAACTTTGAAAAACTCTGAT39and 59CTA-GATCAGAGTTTTTCAAAGTTGATTATACTG39 and inserted in the same vectors. The 30GAr-EBV-Ova construct was made by replacing the fulllength GAr sequence in the Gar-Ova construct with an oligonucleotide sequences corresponding to 30 amino acids of the GAr. The same approach was used to produce 32GAr3A-Ova, 31GAr2S-Ova and 30GAr-Papio-Ova. Metabolic cell labeling and immunoprecipitation All mRNA translation assays were carried out in H1299 cells transfected with indicated constructs. Transfected cells were cultured for 36 hours before treated with 20 mM MG132 for one hour in methionine free medium containing 2% dialysed FCS. 
0.15 mCi/ml of [35S]-methionine (Perkin Elmer, Boston, USA) was added in the presence of proteasome inhibitor, and the cells were harvested at the indicated time points using a rubber policeman after washing twice in cold PBS. Cell pellets were snap frozen at −80 °C before being lysed in PBS containing 1% NP40 and Complete protease inhibitor cocktail (Roche Diagnostics GmbH, Mannheim, Germany) at 4 °C. Lysates were centrifuged for 15 minutes at 14,000 rpm and pre-cleared by addition of mouse sera and protein G sepharose. An equal amount of total protein was incubated with specific antibodies for 4 hours at 4 °C before the immune complexes were recovered using protein G sepharose. The proteins were separated on precast Bis-Tris 4-12% SDS-PAGE (Invitrogen), the amount of labelled protein was visualized by autoradiography, and the relative amount of protein synthesis was determined using a phosphoimager. T-cell assay EL4-Kb restricted cells (5×10⁴) expressing the indicated constructs for 48 h were washed in medium and cultured with 5×10⁴ B3Z T cell hybridoma cells for at least 20 h in 96-well plates. T cell assays in human H1299 cell lines were done by co-transfecting the Kb expression vector together with the reporter construct. The
Analysis of stapled hemorrhoidopexy outcomes: A single-institution based study Background: Although traditional surgery is the gold standard treatment for hemorrhoids, stapled hemorrhoidopexy (SH) is an alternative surgical technique. However, this technique has concerns of recurrence. We conducted this study to assess the clinical outcomes and complications of SH in patients visiting our institution Methods: A prospective study was conducted on 115 patients from 2010 to 2012 who underwent SH with PPH03 kit under spinal anesthesia. Clinical outcomes assessed included the operation time, hospital stay and rate of post-operative pain. Results: SH had lower operative procedural time (30 minutes), postoperative pain and hospital stay (1.8 days) along with minimal procedural complications and were comparable to the previous reports. Conclusions: Stapled hemorrhoidopexy is an effective alternative to traditional surgical technique in treating 3rd and 4th degree hemorrhoids, in terms of lesser operative procedural time, post-operative pain, use of analgesics and hospital stay along with reduced procedure related complication. Introduction Hemorrhoids, commonly known as piles is a clinical condition affecting the anorectal region characterized by symptomatic enlargement and prolapsed anal cushion [1] . It is a common condition affecting the adults with a global incidence ranging between 50 to 80% and in India about 75% of the population are diagnosed with hemorrhoids [2] . Conventional excisional hemorrhoidectomy has been the most effective technique for patients with hemorrhoids. However, the efficacy of the technique is masked by the complication of significant postoperative pain, thereby leading to the deferral of treatment [3] . To overcome this complication, Dr. Antonio Longo introduced stapled hemorrhoidopexy (SH) in 1998 and this procedure involves repositioning of the prolapsed hemorrhoidal tissue through a circular resection of the inner layers unlike the complete removal of the tissue followed in conventional methods [4] . Further the mechanical anopexy interrupts the vascular supply to the hemorrhoid cushions thereby reducing the hemorrhoid tissue [3] . SH is commonly indicated in circumferential grade II, hemorrhoidal prolapse, grade III and IV hemorrhoids [4] . Evidences from clinical trials and meta-analysis studies have reported the safety and efficacy of SH in comparison to traditional excisional techniques [6,7] . The advantages of SH include shorter operative time, reduced hospital stay, lesser pain with earlier recovery. However, the higher symptomatic recurrence rate reported with SH has raised concerns over this procedure especially in cases of larger and prolapsed grade IV hemorrhoids [8] . Further, the eTHoS trial, a large, openlabel multicenter, randomized controlled trial demonstrated traditional excisional surgery as appropriate treatment of choice especially in a tailored management plan [9] . Based on negative effects reported with SH technique Giordano et al. suggested that the patients should choose the procedure either with higher risk of recurrence and additional operation or conventional hemorrhoidectomy associated with longer operation time and recovery time [10] . In spite of the controversy regarding the use of SH, it has been successfully used for the surgical management of hemorrhoids by many clinicians including our institution. However, surgical indications for SH may not be the same of conventional excisional techniques. 
The aim of the current study was to report the data regarding type of analgesia, post-operative morbidity, ~ 227 ~ complication rate, the hospital stay, and the rate of recurrence observed with SH in our institutional experience. Materials and Methods This was a single-center prospective study conducted at ESIC MC PGIMSR, Bangalore. Data were collected prospectively for all consecutive patients with a diagnosis of hemorrhoids and were treated by SH in our institution between] 2010 and 2012. Patients were included in the study if they had: i) symptomatic 2 nd , 3 rd or 4 th degree hemorrhoids and preferred undergoing SH. Patients who reported acute hemorrhoidal episodes with thrombosis, prior hemorrhoidectomy and associated anal pathology were excluded from the study. The medical history of all of the patients were recorded and the clinical examination included inspection, digital exploration and proctoscopy. Before the SH, patients were advised to undergo routine blood and urine examinations. The study protocol was approved by the institutional review board (IRB), while confirming to the standards of the Declaration of Helsinki and its subsequent revisions. All the included patients signed the informed consent to participate in the study. Treatment All the patients underwent SH under regional anesthesia as suggested by the anesthetist. The procedure was done in a lithotomy position. All the patients underwent SH using the procedure for prolapse and Haemorrhoids (PPH03) Proximate haemorrhoide stapler (Ethicon Endo-Surgery kit). Post-operative follow-up The follow up period ranged from 6 months to 32 months. All the patients were advised to have normal diet post operatively. They were also prescribed stool softener, mild analgesics and antibiotics for a period of 5 to 7days. Statistical analysis Statistical analysis All statistical analyses were performed using SPSS version 22.0 (IBM software suite; Armonk, NY). Data are expressed in its frequency and percentage as well as mean and standard deviation. Baseline characteristics The study included a total of 115 patients with a mean age of 40 years (range: 21-70 years). Majority of the patients were in the age group of 41 to 50 years and were predominantly males (M: F=70:30), figure 1 and 2. Hospital stay The mean hospital stay in our study was 1.8 days. About 85% of the patients had ≤ 2 days of hospital stay following SH, while the remaining 15% of the patients had 2 to 5 days of hospital stay, figure 4. Post-operative analgesia Pain was assessed using a visual analogue scale (VAS) in which 0 corresponds to "no pain" and 10 to "severe pain". In our study, most of the patients complained scheduled analgesia following SH and only 6.9% were reported severe pain and were treated with opiates. Post-operative complications In our study, about 9.5% of the patients reported post-operative bleeding, 11.3% had incontinence to flatus and 1.7% had faecal impaction. However, there were no cases of recurrences noted in our study. Further there were no other complications of rectal perforation and pelvic sepsis, rectal diverticulum and others as reported in other global studies. Discussion Hemorrhoids are one of the most common benign diseases and the management of this benign condition depends upon grade or degree of disease. Usually the grade I and II are treated by conservative measures while grade III and IV are managed surgically. 
Currently, hemorrhoidal surgery is performed by various methods, including the Milligan-Morgan, Parks and Ferguson procedures, which are still considered the gold standard treatment [11]. Stapled hemorrhoidopexy is an alternative to excisional surgery that is associated with clinical benefits including shorter surgical time and reduced recovery time. This study was conducted to compare the results of SH performed in our institution with those reported in the world literature.

The duration of surgery is shorter with SH than with conventional methods. In our study, the mean duration was about 30 minutes. This is similar to the duration reported by Panigrahi et al (28 minutes) but higher than those reported by Vineet et al and Ng KH et al, where the surgical durations were 22 minutes and 15 minutes, respectively [11,12]. The variation in duration of surgery may reflect the different grades of hemorrhoids among patients included in different studies.

SH is usually performed as a day-care procedure, with patients discharged from hospital within a day. In our study, the mean hospital stay was 1.8 days. Our results were in line with those of Ganio et al, who reported a mean hospital stay of 1 day [13]; Panigrahi et al reported a mean hospital stay of 2.08 days [5]. The slight differences in hospital stay across studies reflect differences in hospital discharge protocols and in how the length of stay is determined.

The post-operative pain reported with SH is generally less than with traditional excisional hemorrhoidectomy. In our study, only a limited number of patients had severe pain, and opiates had to be prescribed. Panigrahi et al also described a significantly lower pain score in SH patients compared with the excisional method. Usually, a purse-string suture placed too close to the dentate line or a low-placed staple line can cause persistent post-operative pain following SH [14,15].

In the current study, the most common post-operative complication was incontinence to flatus, followed by post-operative bleeding and faecal impaction. The rates of de novo incontinence to flatus reported in various prospective studies range between 3% and 19% [16,17]. The use of anal dilator devices or stretching of the anal canal during insertion or firing of the stapler has been proposed as the cause of incontinence [18,20]. Further, the use of an Eisenhammer retractor for inserting the purse-string suture has been shown to reduce the incidence of incontinence [21,22]. Another common complication of SH is bleeding; however, its rate is lower than with other methods of hemorrhoidectomy [23]. The rates of rectal bleeding after PPH for second-, third- and fourth-degree piles without thrombosis range between 1% and 11% [16,17]; in our study it was approximately 9%. Bleeding following stapled hemorrhoidopexy mostly occurs either immediately after surgery or later, from the seventh day onwards, and is usually secondary to an arteriolar bleed along the staple line, either from defective technique resulting in mucosal injury or from rejection of the staples [24]. The introduction of PPH03 has greatly reduced the incidence of early bleeding following SH. In our study, a small group of patients, accounting for 1.1%, reported faecal impaction, whereas some studies have reported an occurrence of about 1% to 6.6% [25-28].
Although previous studies have reported higher recurrence rates following SH, in our study there were no cases of recurrence during follow-up. Recurrence following SH has been reported in up to 58.9% of cases, with a median recurrence rate of 6.9% [29]. Grade IV hemorrhoidal disease is usually associated with recurrence [29]. The meta-analysis by Giordano et al reported recurrence rates following stapled hemorrhoidopexy of up to 22% in 4th degree hemorrhoids, compared with 3.6% in conventional hemorrhoidectomies [10]. Recurrence is believed to occur secondary to irreducibility of the prolapse, which prevents the lifting effect of the stapled hemorrhoidopexy [22,29]. The limitation of our study is that it is a single-arm study; we did not compare the SH procedure with the gold standard technique to establish a definite difference between the two procedures.

Conclusion

In conclusion, stapled hemorrhoidopexy is a safe and efficacious surgical procedure for treating grade III and IV hemorrhoids. Most of the results obtained in our study were comparable with the currently available literature. However, the treating proctologist should be adequately and appropriately trained in this method of hemorrhoidectomy to achieve the best patient outcomes. Further studies with larger sample sizes and minimal recurrence are required to validate stapled hemorrhoidopexy as a gold standard treatment for hemorrhoids.
An Automatic Localization Algorithm for Ultrasound Breast Tumors Based on Human Visual Mechanism

Human visual mechanisms (HVMs) can quickly localize the most salient object in natural images, but they are ineffective at localizing tumors in ultrasound breast images. In this paper, we study the characteristics of tumors, build on a classic HVM and propose a novel automatic localization method. Compared with their surrounding areas, tumors have higher global and local contrast. In this method, intensity, blackness ratio and superpixel contrast features are combined to compute a saliency map, on which a Winner Take All algorithm is used to localize the most salient region, represented by a circle. The results show that the proposed method successfully avoids the interference caused by low-echo and high-intensity background areas. The method has been tested on 400 ultrasound breast images, in 376 of which localization succeeded, corresponding to an accuracy of 94.00% and indicating good performance for real-life applications.

Introduction

Breast cancer is one of the most common malignant tumors affecting women. It has among the highest incidence and mortality rates of diseases affecting women, and its incidence continues to rise. The early detection of suspicious lesions is very important for effective treatment of breast cancer. Ultrasound, mammography and magnetic resonance imaging (MRI) are the usual methods for the clinical detection of lesions, among which ultrasound (US) has been widely used because it is non-invasive, free of ionizing radiation and harmless [1,2]. However, the shortcomings of US breast images, such as low contrast, severe speckle and low spatial resolution, make it difficult for doctors to read and analyze suspicious lesions. Furthermore, with an increasing number of patients, doctors are heavily burdened, resulting in a higher rate of misdiagnosis. In recent years, to improve the usability of US, computer-aided diagnosis (CAD) has been adopted to achieve more reliable and accurate diagnostic conclusions and to reduce unnecessary MRI scans and biopsies. In addition, CAD techniques used to investigate suspicious lesions in US breast images can also help reduce the workload of doctors [3-6]. Generally, a CAD system contains three main steps: image segmentation, feature extraction and object classification. Image segmentation is an essential step, and many segmentation techniques have been proposed in recent decades [7], such as neural network (NN)-based methods [8,9], deep learning, active contour models [6] and region-based methods [10,11]. For these segmentation algorithms, the automatic localization of tumors in US breast images is a difficult and critical step, because of the interference caused by shadowing artifacts and tumor-like structures that resemble the lesion [8-11]. In practice, lesions are often marked manually as an alternative, but this intervention makes fully automatic segmentation impossible.

To date, several methods have been proposed for the automatic localization of breast tumors in US images. They can be broadly classified into two types: point-based methods [10-12] and region-based methods [13-15]. Madabhushi et al. [11] proposed an automatic localization method using seed points, a type of point-based method.
In Madabhushi's method, the reference point is generated based on the empirical rule that suspicious lesions are more likely to be positioned at or near the center of the image. Because their seed point selection combines probability distribution functions (pdfs) for intensity, texture and the location of potential seed points, it is likely to yield an incorrect location when the true tumor is not at the center of the image, or when pixels near the center have similar pdfs. Unfortunately, both situations occur in US breast images, so the reference point assumption is ineffective and is influenced by the operating habits of physicians. In addition, this point-based method is sensitive to speckle noise.

Other researchers have proposed region-based detection methods. Yap et al. [14] considered two factors for selecting the object region: the size of each region and its distance to the reference point. In this method, histogram equalization is first used to pre-process the images, followed by hybrid filtering and multifractal analysis. The region of interest (ROI) is then identified by a single-valued threshold segmentation and a rule-based approach. Compared with point-based methods, this method has better noise immunity; however, it loses efficacy if the tumors are particularly small or the background is complex with severe artifacts. Liu [15] also presented a region-based detection method based on a support vector machine (SVM) classifier, dividing the images into 16 × 16 blocks and calculating the Gray Level Co-occurrence Matrix (GLCM) of each region as training data for the SVM classifier. However, this localization method needs many post-processing operations to select the ROI, including removing linear connected areas, filling holes and experience-based region selection. Relying only on local texture features, it cannot separate tumor areas from normal tissues, and it is unable to localize particularly small or large tumors, as shown in Section 4.

In this paper, we propose a new localization method that combines point-based and region-based ideas, building on the Itti HVM model [16]. The Itti model is a classic visual attention model that has been widely applied to localize the most salient region in natural images by combining color, intensity and orientation features. According to the characteristics of tumors in US images, we use the intensity and blackness ratio features to compute the saliency map. In addition, to avoid the drawbacks of point-based methods, we add a superpixel feature; together, these three features greatly improve the performance of the model, especially for large and relatively small tumors. Finally, the automatically localized region is used as the initial contour of the Chan-Vese (CV) level set, and we employ the automatic segmentation of breast tumors to demonstrate the practical significance of the proposed method.

Model Architecture

The human eye, working as a biological photoelectric sensor, can quickly obtain visual information. This information stimulates the light-sensitive cells of the visual system, producing electrical pulses and guiding eye movement. Different visual information stimulates the light-sensitive cells to different degrees, with the most significant information gaining the attention of the eyes.
HVMs have been successfully applied in the field of image processing [16-20]; they can quickly localize the ROI and automatically discard unimportant parts, saving considerable post-processing time. In an HVM, the movement of the eyes is mainly determined by visual features, so feature extraction is the most important part of HVM models. For natural images, features such as color, texture and orientation have been applied successfully in HVM models. However, many of these features are invalid for US images; for instance, tumors are usually dark, and directional features are too weak to distinguish tumors from other dark areas. In our model, we use intensity, blackness ratio and superpixel contrast features to achieve tumor detection. The localization method can be divided into four steps: (a) computing the feature maps and saliency maps of the blackness ratio and intensity; (b) computing the superpixel saliency map of the input image; (c) combining all saliency maps; and (d) locating the most salient region. The framework of the automated object localization method is shown in Figure 1.

Gaussian Pyramid Image

In this algorithm, to avoid information loss in the sub-sampling process, the input image is first smoothed by convolution with a linearly separable Gaussian filter and then sub-sampled by a factor of two [19]. In our model, we use a 6-tap linearly separable Gaussian kernel filter [1, 5, 10, 10, 5, 1]/32. The levels σ = [1, 2, ..., 9] of the pyramid are obtained by repeating the smoothing and decimation processes, with σ = 1 representing the input image, so the resolution of layer σ is 1/2^(σ−1) times that of the original image; this representation effectively reduces the time complexity of the algorithm. The Gaussian pyramid of an input image is shown in Figure 2, in which images at higher levels have lower resolution while the information in the object area is well retained, improving the overall efficiency of the procedure.
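As a rough illustration of this pyramid construction (not the authors' code), the following Python sketch applies the separable 6-tap kernel along rows and columns and decimates by two at each level; the reflect border handling and the float conversion are our own assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy.ndimage import convolve1d

# 6-tap separable smoothing kernel quoted in the text, applied along rows and columns.
KERNEL = np.array([1, 5, 10, 10, 5, 1], dtype=float) / 32.0

def gaussian_pyramid(image, levels=9):
    """Dyadic Gaussian pyramid: smooth with the separable kernel, then
    subsample by a factor of two; level sigma = 1 is the input image."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        smoothed = convolve1d(convolve1d(prev, KERNEL, axis=0, mode='reflect'),
                              KERNEL, axis=1, mode='reflect')
        pyramid.append(smoothed[::2, ::2])  # decimate by 2 in each dimension
    return pyramid
```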
Intensity and Blackness Ratio Features

In US breast images, the intensity of tumors is lower than that of the surrounding areas, so tumors have high contrast with the background. We obtained the intensity feature MI by averaging the three color components, MI = (r + g + b)/3. US images are intrinsically gray-scale, so the color components r, g and b are essentially equal. For the record, the US breast images we obtained are stored in 24-bit bitmap (BMP) format, a standard image file format in the Windows operating system; in these images the color components r, g and b differ only slightly in gray value. In addition, we defined the blackness ratio feature in order to enlarge the weight of low-intensity pixels and weaken the influence of high-intensity areas. The blackness ratio map MBR is defined in Equation (2), in which ε (set to 0.001) is added to the divisor to avoid division by zero.

Pyramid Feature Map

The intensity and blackness ratio features are extracted from each Gaussian pyramid image, repeating for pyramid levels σ = [1, 2, ..., 9], to obtain the intensity pyramid feature map MI(σ) and the blackness ratio pyramid feature map MBR(σ).

Generating Saliency Map

Center-surround receptive fields are simulated by across-scale subtraction between two maps at the center (c) and surround (s) levels of these pyramids, yielding intensity and blackness ratio contrast maps [16,21,22]:

CI(c, s) = N{|MI(c) ⊖ MI(s)|}
CBR(c, s) = N{|MBR(c) ⊖ MBR(s)|}

where N{·} is an iterative, nonlinear normalization operator simulating local competition between neighboring salient locations; between one and five iterations are used for the simulations in this paper. The contrast maps are summed by across-scale addition ⊕ down to the σ = 5 level, and the sums are normalized again to generate a saliency map, as shown in Figure 3. In Figure 3b, a high-intensity area has higher contrast than the tumor area, but the tumor area is more salient than other high-intensity regions, whose values are near zero. In the blackness ratio saliency map shown in Figure 3c, the tumor area has the highest saliency of any region.
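A minimal sketch of these feature and contrast maps is given below. The intensity feature follows Equation (1) (channel average); the form used for the blackness ratio is only an assumption consistent with the description (intensity in the divisor, ε = 0.001), because Equation (2) itself is not reproduced here; and the centre-surround scale pairs and the simple max normalisation stand in for the published pairs and the iterative operator N{·}.

```python
import numpy as np
from skimage.transform import resize

EPS = 1e-3  # epsilon of Equation (2)

def intensity_feature(img):
    # Eq. (1): average of the three colour channels (nearly identical in US BMPs).
    return img.mean(axis=2) if img.ndim == 3 else img.astype(float)

def blackness_ratio(intensity, eps=EPS):
    # Assumed form of Eq. (2): intensity in the divisor so that dark (low-echo)
    # pixels receive large values; the exact published expression may differ.
    return intensity.max() / (intensity + eps)

def center_surround_saliency(feature_pyramid, centers=(2, 3, 4), deltas=(3, 4), out_level=5):
    """Across-scale contrast |M(c) - M(s)| accumulated at the sigma = out_level
    scale; the centre/surround pairs follow the Itti model since the paper does
    not list them, and max normalisation replaces the iterative operator N{.}."""
    target = feature_pyramid[out_level - 1].shape
    saliency = np.zeros(target)
    for c in centers:
        for d in deltas:
            s = c + d
            if s > len(feature_pyramid):
                continue
            center = resize(feature_pyramid[c - 1], target, anti_aliasing=False)
            surround = resize(feature_pyramid[s - 1], target, anti_aliasing=False)
            saliency += np.abs(center - surround)
    return saliency / (saliency.max() + 1e-9)
```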
When the local contrast of the tumor is smaller than that of some background areas or speckle noise, the blackness ratio and intensity features are unable to localize the tumor. To avoid the interference of speckle noise and some background areas, we propose a novel global saliency feature, superpixel saliency, to enhance the saliency of tumors.

Superpixel

A superpixel is a region consisting of adjacent pixels with similar color and brightness. Most of these small areas retain the information needed for further image processing and generally preserve object boundaries. We used simple linear iterative clustering (SLIC) to generate the superpixel image [23], in which gray values and Euclidean distance are used to compute, respectively, the color similarity and spatial proximity of two pixels. Compared with existing superpixel methods, SLIC has been shown to outperform in terms of speed, memory efficiency and boundary adherence. In the SLIC algorithm, the K-means parameter K and the compactness m are important for segmentation performance, so we examined the segmentation performance for varying K and m. Compared with the reference segmentation in Figure 4, the four lines in Figure 5 show that compactness comes at the expense of boundary adherence; in other words, higher compactness means more missed edges. Furthermore, as shown in Figure 5b1-b4 and c1-c4, boundary adherence increases with K and then degrades rapidly when K > 400, resulting in poor segmentation performance. In our method we choose K = 400 and m = 40, with which SLIC segments the locally pointed region well, as shown in Figure 5b3 and the reference segmented region in Figure 4.
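The SLIC step itself is available off the shelf; a sketch using recent scikit-image with the parameters chosen above (K = 400, m = 40) is shown below. The file name and the per-superpixel statistics (mean gray value, area, centroid) are illustrative, and the intensity scale should match the range at which m = 40 was tuned.

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

image = io.imread('breast_us.bmp').astype(float)   # hypothetical file name
if image.ndim == 3:
    image = image.mean(axis=2)                     # keep the 0-255 gray scale

# K = 400 superpixels, compactness m = 40 (values selected in the text).
labels = slic(image, n_segments=400, compactness=40,
              start_label=1, channel_axis=None)    # channel_axis=None: single-channel input

# Per-superpixel statistics reused by the superpixel saliency map.
region_ids = np.unique(labels)
mean_gray = np.array([image[labels == k].mean() for k in region_ids])
areas = np.array([(labels == k).sum() for k in region_ids])
centroids = np.array([np.argwhere(labels == k).mean(axis=0) for k in region_ids])
```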
Superpixel Saliency Map

The superpixel saliency map is computed according to Equations (5)-(8). In Equation (5), K is the number of superpixels; N and ni are, respectively, the total number of pixels in the input image and the number of pixels in superpixel ri; and δ controls the influence of surrounding superpixels on the superpixel being computed. In Equations (6) and (8), g is the set of average gray values gk of all superpixels; (xk, yk) and (xi, yi) in Equation (6) are the centroids of rk and ri, and the distance d(rk, ri) is normalized to the range [0, 1]. Three factors are considered: (a) compared with normal tissue, a tumor has lower echo, which makes it more salient than other areas; this is achieved by the weight W(rk) in Equation (5); (b) closer superpixels have a greater influence on the contrast of the current superpixel, as captured by the distance term in Equation (5), in which δ controls the influence of a surrounding superpixel ri on the current superpixel rk with respect to Euclidean distance; δ is set to 0.05 to reduce the contribution of far superpixels; and (c) larger areas have a greater effect on other areas, as captured by the factor ni/N. In the superpixel saliency map in Figure 7b, the superpixels in the tumor area all have relatively high contrast, while other areas, such as the marked high-intensity area, are less salient because of their small size.
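Because Equations (5)-(8) do not survive in the extracted text, the sketch below only combines the three stated ingredients, namely a darkness weight W(rk), a distance weight governed by δ = 0.05 over normalised centroid distances, and an area weight ni/N on the gray-level contrast; it should be read as a plausible reconstruction rather than the published formula.

```python
import numpy as np

def superpixel_saliency(mean_gray, centroids, areas, image_shape, delta=0.05):
    """Assumed form of the superpixel saliency: for each superpixel r_k, a
    darkness weight times the distance- and area-weighted sum of gray-level
    contrasts to all other superpixels, normalised to [0, 1]."""
    n_pixels = float(np.prod(image_shape[:2]))
    diag = np.hypot(image_shape[0], image_shape[1])      # normalises distances to [0, 1]
    dark_w = 1.0 - mean_gray / (mean_gray.max() + 1e-9)  # assumed darkness weight W(r_k)
    sal = np.zeros(len(mean_gray))
    for k in range(len(mean_gray)):
        d = np.hypot(centroids[:, 0] - centroids[k, 0],
                     centroids[:, 1] - centroids[k, 1]) / diag
        w = np.exp(-d / delta) * (areas / n_pixels)      # distance and area weights
        sal[k] = dark_w[k] * np.sum(w * np.abs(mean_gray - mean_gray[k]))
    return sal / (sal.max() + 1e-9)
```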
Combined Saliency Map

As noted above, the saliency of the tumor area is strongly affected by high-intensity background areas in the intensity saliency map and by low-intensity background areas in the blackness ratio saliency map. In addition, as illustrated in Section 2.3.2, spindly high-intensity areas are far less salient in the superpixel saliency map because of their small size. Therefore, by combining these three saliency maps we can effectively suppress the saliency of both dark and high-intensity background areas, as shown in the combined saliency map in Figure 8.

Winner Take All

Winner-take-all (WTA) neural networks have been extensively discussed as a way of making decisions. The maximum of the saliency map defines the most salient image location, to which the focus of attention (FOA) should be directed (Figure 9). For our model, we use the WTA network proposed by Itti et al. [16,21], in which the saliency map is modeled as a 2D layer of leaky integrate-and-fire neurons. These model neurons consist of a capacitance that integrates the charge delivered by synaptic input, a leakage conductance, and a voltage threshold. The saliency map neurons receive excitatory pulses from the values xi of the saliency map, so the potential of neurons at more salient locations increases faster; each neuron excites its corresponding WTA neuron independently until the most salient saliency map neuron first reaches threshold and fires.
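For constant inputs the leaky integrate-and-fire race is won by the saliency-map maximum, so a compact sketch can simulate the race and still reduce to an argmax. The combination rule for the three saliency maps (here a mean of max-normalised maps, with the superpixel saliency assumed to have been projected back onto the pixel grid first) and the specific threshold, leak and step values are assumptions, as the paper does not state them.

```python
import numpy as np

def combine_maps(*maps):
    # Assumed combination rule: mean of the individually max-normalised maps.
    return np.mean([m / (m.max() + 1e-9) for m in maps], axis=0)

def winner_take_all(saliency, threshold=1.0, leak=0.01, dt=1.0, max_steps=10000):
    """Leaky integrate-and-fire race: every location charges in proportion to
    its saliency, and the first location to reach threshold is the FOA."""
    v = np.zeros_like(saliency, dtype=float)
    for _ in range(max_steps):
        v += dt * (saliency - leak * v)
        if v.max() >= threshold:
            break
    return np.unravel_index(np.argmax(v), v.shape)   # (row, col) of the winner
```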
Post-Processing Operation

Owing to the imaging mechanism of the ultrasound system, US breast images contain three layers: the fat layer, the mammary layer and the muscle layer. Suspicious lesions are located in the mammary layer, and tumors have no junction with the image edges. To overcome the interference of low-intensity areas positioned in the fat layer or near the image edges, we propose a strategy that automatically detects a "wrong" localization by considering the position of the detected region and then selects the "second" most probable lesion using the information in the saliency map. The workflow of this strategy is depicted in Figure 10. To find the second most probable lesion, the Inhibition of Return (IOR) algorithm [16] is applied to inhibit the saliency of the currently attended location to 0. In Figure 10, Lj is the number of junction pixels between the segmented region and the image edges. When Lj > 0, the saliency of the currently localized region is inhibited to 0 through the IOR algorithm, and the second most salient region is then found by the winner-take-all method.
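A sketch of this check is shown below; segment_fn is a hypothetical callback standing in for the CV level-set segmentation of the attended region, and the FOA radius and attempt limit are our own choices.

```python
import numpy as np

def select_lesion(saliency, segment_fn, foa_radius=30, max_attempts=5):
    """Attend the most salient location; if its segmented region touches the
    image border (Lj > 0), zero out the attended neighbourhood (inhibition of
    return) and attend the next most salient location."""
    sal = saliency.copy()
    for _ in range(max_attempts):
        cy, cx = np.unravel_index(np.argmax(sal), sal.shape)
        region = segment_fn(cy, cx)                    # boolean mask of the segmented region
        border = np.zeros_like(region, dtype=bool)
        border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
        if not np.any(region & border):                # Lj == 0: accept this lesion
            return cy, cx, region
        yy, xx = np.ogrid[:sal.shape[0], :sal.shape[1]]
        sal[(yy - cy) ** 2 + (xx - cx) ** 2 <= foa_radius ** 2] = 0.0   # IOR
    return cy, cx, region
```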
Results

The automatic localization algorithm for US breast images is tested in this section. To evaluate the proposed algorithm, 400 US breast images from the Ultrasound Department of West China Hospital of Sichuan University were tested. The proposed method successfully localized the tumors in 376 images, an accuracy of 94.00%. The results shown in Figure 11 demonstrate that our automatic localization scheme is valid for breast tumors with different backgrounds and sizes: the first row shows four small tumors, the second row four medium-sized tumors, and the third row four fairly large tumors. In our method, incorporating the superpixel saliency map along with low- and high-level intensity knowledge makes it possible to avoid shadowing artifacts and lowers the chance of confusing similar tumor-like structures with the lesion. After tumor localization by the proposed method, the images are further processed with the CV level set to extract a precise tumor contour [24]. Figures 12 and 13 show the localization and segmentation procedures for two US breast images.
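The authors' CV level-set implementation is not reproduced here; as a stand-in, the localized circle can seed scikit-image's morphological Chan-Vese, with the radius, iteration count and smoothing below being illustrative values.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_contour(image, center, radius=25, n_iter=200):
    """Refine the FOA circle returned by the localization step with a
    morphological Chan-Vese level set initialised on that circle."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    init = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return morphological_chan_vese(image, n_iter, init_level_set=init, smoothing=2)
```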
As can be seen from Figure 12b-d, the tumor has higher saliency than any other region in the three saliency maps. The dark region in the bottom-right background has saliency as high as the tumor in the blackness ratio and superpixel saliency maps; however, in the intensity saliency map its saliency is much lower than the tumor's, making it less salient in the combined saliency map (Figure 12e). In some US breast images, however, the local contrast of the tumor region is lower than that of some background areas (Figure 13a). In these images, the intensity and blackness ratio features contribute less to the localization of the tumor (Figure 13b,c): the high-intensity bottom-right background area and the low-echo background area on the top left hold the highest saliency in the intensity and blackness ratio saliency maps, respectively. The superpixel saliency map, in which the tumor has the highest saliency as shown in Figure 13e, contributes greatly to the localization of the tumor in such cases. Among the 400 tested images, we found that the tumors in 32 images had lower saliency than some background areas. Furthermore, in eight images the first localized areas were in the fat layer or near the image edges. As demonstrated in Figure 14b-d, the localized dark region in these three saliency maps has equal or even higher saliency than the tumor, making the Winner Take All network ineffective for localizing the tumor; the example in Figure 15 shows the same behavior.
We analyzed the localization procedures and improved these results by implementing the post-processing operation described in Section 2.6. The two images in Figure 15 are the segmented results from the CV level set, in which the number of junction pixels Lj > 0. Using the post-processing operation, the proposed method successfully localizes the tumors in these first-wrong localizations, using the information in the saliency maps, as shown in Figure 16.

Discussion

To validate the effectiveness of the proposed scheme, it was compared with Liu's detection method [15]. We followed the algorithm in [15] exactly and tested it on our data set. Liu's method was successfully executed for only 325 images; compared with Liu's method, our method has better performance, with a much higher accuracy of 94.00% versus 81.25% (Table 1). Figure 17a-f shows the localization results of three representative examples. Liu's method cannot separate tumor areas from normal tissues. It is unable to detect particularly large tumors, because unusually large areas are easily mistaken for background by its classification process (Figure 17a). It is also difficult for Liu's method to localize tumors with severe artifacts (Figure 17b) or small sizes (Figure 17c). Liu et al. consider only local features for training the SVM classifier, which causes artifacts with similar textures to be regarded as objects more readily than smaller tumors. In the proposed method, the saliency of tumors depends on both local and global contrast, so tumor size and severe artifacts have little effect on our localization.
However, both methods fail to localize the lesion in 24 images, in which the background areas are unusually complex, as in the example shown in Figure 18; the localization procedure for this image is shown in Figure 19. As can be seen from Figure 19b,c, the region localized by the proposed method has higher saliency than the tumor in both the intensity and blackness ratio saliency maps, and in the superpixel saliency map the two regions have nearly equal saliency. Therefore, the localized region is the most salient area in this US breast image. The post-processing operation is ineffective in this case because the wrongly localized region is positioned near the center of the image and has no junction with the image edges; thus, the post-processing operation cannot flag it as a "wrong localization."

Conclusions

In this paper, a novel automatic localization method for US breast images is proposed. Based on the distinct characteristics of breast tumors in US images, the blackness ratio, intensity and superpixel features are combined to compute a saliency map. The results demonstrate that the proposed method can automatically detect tumor regions in US images with good performance and high accuracy. Compared with Liu's method, the proposed method performs better on US breast images with severe speckle noise or artifacts and on images with particularly large tumors. Combined with this method, the CV level set no longer needs human-computer interaction to extract the tumor, which achieves fully automatic segmentation of US breast tumors. However, this localization method also has some weaknesses, as it is unable to localize tumors in US breast images with complex background areas.
In future work, further improvements will be introduced to reduce the interference of the background on tumor localization. Furthermore, the proposed method will be extended to other human tissues, such as the thyroid, kidney, and gallbladder.
Real-world evidence on sodium-glucose cotransporter-2 inhibitor use and risk of Fournier's gangrene

Background Sodium-glucose cotransporter-2 inhibitors (SGLT2i) have been associated with increased occurrence of Fournier's gangrene (FG), a rare but serious form of necrotizing fasciitis, leading to a warning from the Food and Drug Administration. Real-world evidence on FG is needed to validate this warning.

Methods We used data from IBM MarketScan (2013–2017) to compare the incidence of FG among adult patients who initiated either SGLT2i, a dipeptidyl peptidase-4 inhibitor (DPP4i), or any non-SGLT2i antihyperglycemic medication. FG was defined using inpatient International Classification of Diseases, Ninth Edition and Tenth Edition diagnosis codes 608.83 and N49.3, respectively, combined with procedure codes for debridement, surgery, or systemic antibiotics. We estimated crude incidence rates (IRs) using Poisson regression, and crude and adjusted HRs (aHR) and 95% CIs using standardized mortality ratio-weighted Cox proportional hazards models. Sensitivity analyses examined the impact of alternative outcome definitions.

Results We identified 211 671 initiators of SGLT2i (n=93 197) and DPP4i (n=118 474), and 305 329 initiators of SGLT2i (n=32 868) and non-SGLT2i (n=272 461). Crude FG IR ranged from 3.2 to 3.8 cases per 100 000 person-years during a median follow-up of 0.51–0.58 years. Compared with DPP4i, SGLT2i initiation was not associated with increased risk of FG for any outcome definition, with aHR estimates ranging from 0.25 (0.04–1.74) to 1.14 (0.86–1.51). In the non-SGLT2i comparison, we observed an increased risk of FG for SGLT2i initiators when using FG diagnosis codes alone, using all diagnosis settings (aHR 1.80; 0.53–6.11) and inpatient diagnoses only (aHR 4.58; 0.99–21.21).

Conclusions No evidence of increased risk of FG associated with SGLT2i was observed compared with DPP4i, arguably the most relevant clinical comparison. However, uncertainty remains based on potentially higher risk in the broader comparison with all non-SGLT2i antihyperglycemic agents and the rarity of FG.

Trial registration number EUPAS Register Number 30018.

Significance of this study

What is already known about this subject?
► Sodium-glucose cotransporter-2 inhibitors (SGLT2i), the newest class of antihyperglycemic medications, have been linked to an increased occurrence of Fournier's gangrene, a rare, necrotizing fasciitis of the perineum, using data from the Food and Drug Administration (FDA) Adverse Event Reporting System, and an FDA warning was issued in response to this finding.

What are the new findings?
► Using administrative data from the commercially insured US population and an active-comparator, new-user cohort study design, we found no difference in risk of Fournier's gangrene between patients initiating SGLT2i and patients initiating dipeptidyl peptidase-4 inhibitors, a similar branded second-line antihyperglycemic medication class.

How might these results change the focus of research or clinical practice?
► This study suggests that patients who are prescribed SGLT2i in real-world practice may not be at increased risk for Fournier's gangrene compared with patients who are prescribed similar second-line, branded antihyperglycemic medications.
► Given the very low incidence of Fournier's gangrene in the USA, this evidence must be weighed against the clinical benefits of SGLT2i in patients with type 2 diabetes mellitus.
Introduction

In August 2018, the US Food and Drug Administration (FDA) released a safety warning linking sodium-glucose cotransporter-2 inhibitors (SGLT2i), the newest class of antihyperglycemic medications, to an increased occurrence of Fournier's gangrene (FG), a rare, necrotizing fasciitis of the perineum. 1 Despite its overall low incidence in the USA (1.6 cases per 100 000 male patients), 2 FG is often accompanied by poor management options and prognosis 3 and results in debilitating complications and disfigurement in most infected patients; approximately 7.5% of patients with FG die. 2 4-6 The FDA warning, which attracted media attention, 7 was based on 12 FG cases (7 men, 5 women) reported through the FDA Adverse Event Reporting System (FAERS) from 2013 to 2018, as well as individual case reports in the medical literature. 8-10 A more recent review of FAERS and case reports yielded similar conclusions, based on 55 FG cases among patients receiving SGLT2i from 2013 to 2019, compared with 19 FG cases among patients receiving other antihyperglycemic medications from 1984 to 2019. 11 However, the structure and quality of FAERS reporting precluded any ability to estimate comparative incidence or establish causality. 12 To address these limitations and to validate the evidence behind the FDA warning in a real-world setting, we evaluated the association between SGLT2i initiation and FG risk in a large healthcare administrative claims database from the commercially insured US population, using an active-comparator, new-user (ACNU) study design. 13

Methods

Data source

We used data from IBM MarketScan from 1 January 2012 to 31 December 2017. The base population consisted of all patients with ≥1 prescription dispensing claim for an SGLT2i or dipeptidyl peptidase-4 inhibitor (DPP4i) between 1 April 2013 (US market entry date of canagliflozin 14) and 30 June 2017 (to allow at least 6 months of follow-up), identified using national drug codes (NDCs). Eligible patients were adults aged 18-64 years with ≥12 months of continuous enrollment in MarketScan prior to the first eligible prescription dispensing claim, without a prescription for either an SGLT2i or a DPP4i during that period (washout period to define new use). The study protocol was registered with the European Network of Centres for Pharmacoepidemiology and Pharmacovigilance on 5 June 2019 (http://www.encepp.eu/encepp/viewResource.htm?id=30019).
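The two-fill new-user rule described in the next subsection is straightforward to operationalize on claims data; as a rough illustration (not the authors' SAS code), a pandas sketch with illustrative column names might look like this:

```python
import pandas as pd

GRACE_DAYS = 30  # grace period appended to the first fill's days' supply

def index_dates(rx, drug_class):
    """Return the index date (the second qualifying fill) for new users of
    `drug_class`: two same-class dispensings, the second falling within the
    first fill's days' supply plus a 30-day grace period. Column names
    (patient_id, drug_class, fill_date, days_supply) are illustrative."""
    fills = (rx[rx['drug_class'] == drug_class]
             .sort_values(['patient_id', 'fill_date'])
             .assign(rank=lambda d: d.groupby('patient_id').cumcount()))
    first = fills[fills['rank'] == 0].set_index('patient_id')
    second = fills[fills['rank'] == 1].set_index('patient_id')
    window_end = first['fill_date'] + pd.to_timedelta(
        first['days_supply'] + GRACE_DAYS, unit='D')
    within_window = second['fill_date'] <= window_end.reindex(second.index)
    return second.loc[within_window, 'fill_date'].rename('index_date')
```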
Exposure definitions

Exposure to a study drug was defined by ≥2 same-class prescription dispensing claims for either an SGLT2i or a DPP4i within a predefined 'prescription window' (the first prescription's recorded days' supply plus a 30-day grace period). The second prescription served as the index date for the analysis. This two-prescription approach restricts the analysis to a cohort we are more confident were actually taking the cohort drugs, compared with cohorts requiring just a single prescription. We excluded patients who received a comparator drug (including the empagliflozin-linagliptin combination product) prior to or on the index date.

Outcome definitions

Fournier's gangrene was defined using the International Classification of Diseases, Ninth Edition and Tenth Edition (ICD-9 and ICD-10) diagnosis codes 608.83 ('vascular disorders, including Fournier's disease') and N49.3 ('Fournier's gangrene'), respectively. Patients who received an FG diagnosis code in the 12 months prior to the index date were excluded from the analysis. To increase outcome specificity, we required FG diagnoses either to occur in an inpatient setting or to be followed by a hospitalization within 7 days, and additionally to be accompanied by a systemic antibiotic, debridement, or related surgical procedure 4 within 7 days (primary outcome definition). Codes were informed by prior literature 15-25 and clinical guidance. Acknowledging the lack of standard, validated claims-based definitions for FG, and to examine the possible impact of outcome misclassification in a rare-outcome setting, we considered less stringent (higher sensitivity, reduced specificity) secondary outcome definitions. First, we removed the systemic antibiotic requirement (as NDCs may not be billed during inpatient stays) and considered only debridement and surgical procedures. Second, we applied various diagnosis-only definitions, which have typically been used in the published literature. 2 4 5 We further identified and evaluated additional ICD-9 and ICD-10 diagnosis codes with a forward-backward mapping approach using the General Equivalence Mappings crosswalk provided by the Centers for Medicare and Medicaid Services. 26

Statistical analysis

Follow-up for outcomes started at the index date (the date of the second prescription) and was censored at the first occurrence of FG, treatment discontinuation, a prescription of the comparator drug class (all cohorts), disenrollment, or 31 December 2017. We estimated propensity scores using multivariable logistic regression to control for measured confounding, with baseline covariates (patient demographics, comorbidities, risk factors for FG, 5 6 9 10 12 medication use history, and baseline healthcare utilization) measured in the 12 months prior to the index date. All propensity score analyses were performed separately for patients with index dates before versus after 1 October 2015, to account for possible billing changes between the ICD-9 and ICD-10 eras. Covariate balance was assessed using standardized mean differences. 27 We estimated the average treatment effect in the treated by standardizing the distribution of measured covariates to the SGLT2i cohorts using standardized mortality ratio (SMR) weighting. 28 We estimated crude incidence rates (IRs) using Poisson regression, and crude and adjusted HRs (aHR) and 95% CIs using SMR-weighted Cox proportional hazards models. All analyses were performed using SAS V.9.4.
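The propensity-score and SMR-weighting steps translate directly into code; the sketch below (in Python rather than the SAS actually used, with an assumed analytic data frame and illustrative column names) fits the propensity model, assigns the propensity odds PS/(1-PS) as a weight to comparator patients, and estimates the weighted Cox hazard ratio with a robust variance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def smr_weighted_hr(df, covariates, treat='sglt2i',
                    time='follow_up_years', event='fg_event'):
    """Sketch of the SMR-weighted Cox analysis: treated patients get weight 1,
    comparators get the propensity odds PS/(1-PS), standardising them to the
    SGLT2i initiators (ATT); column names are illustrative."""
    ps = (LogisticRegression(max_iter=1000)
          .fit(df[covariates], df[treat])
          .predict_proba(df[covariates])[:, 1])
    weights = np.where(df[treat] == 1, 1.0, ps / (1.0 - ps))
    data = df[[time, event, treat]].assign(smr_weight=weights)
    cph = CoxPHFitter()
    cph.fit(data, duration_col=time, event_col=event,
            weights_col='smr_weight', robust=True)
    return cph.summary.loc[treat, ['exp(coef)',
                                   'exp(coef) lower 95%',
                                   'exp(coef) upper 95%']]
```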
Sensitivity analyses

To reflect the comparison made in the FAERS analysis, we broadened the comparator group to include all non-SGLT2i antihyperglycemic medications (sulfonylureas, thiazolidinediones, DPP4i, glucagon-like peptide-1 receptor agonists, and insulins). In this analysis, we excluded patients who were prevalent users of any antihyperglycemic medication at baseline.

Table 1: Key baseline characteristics of eligible initiators of SGLT2i, compared with initiators of DPP4i and non-SGLT2i antihyperglycemic medications, before and after implementation of SMR weighting* (365-day washout period).
*Weighted by standardizing the comparator drug initiators to the population of SGLT2i initiators, using the propensity score odds (PS/(1−PS)), to estimate the average treatment effect in the treated (ATT).
†The number of eligible SGLT2i initiators is different between comparisons due to the washout period exclusion used to define new use of the study drugs in each pairwise comparison. For the SGLT2i vs DPP4i comparison, we exclude patients who were prevalent users of either drug at baseline. For the SGLT2i vs non-SGLT2i comparison, we exclude patients who were prevalent users of any antidiabetic drug at baseline, which results in a much greater number of patients being excluded at baseline. For example, an SGLT2i initiator who receives a sulfonylurea during the baseline period is included in the SGLT2i vs DPP4i comparison, but not in the SGLT2i vs non-SGLT2i comparison. Therefore, the SGLT2i vs DPP4i comparison cohort is not a strict subset of the SGLT2i vs non-SGLT2i comparison cohort. This approach is common and seeks to emulate the drug initiation protocol in a randomized trial.
ACEI, ACE inhibitor; COPD, chronic obstructive pulmonary disease; DPP4i, dipeptidyl peptidase-4 inhibitors; GLP, glucagon-like peptide; HbA1c, hemoglobin A1c; SGLT2i, sodium-glucose cotransporter-2 inhibitors; SMR, standardized mortality ratio.

DISCUSSION

Using a state-of-the-art ACNU study design for real-world evidence, we observed an increased risk of FG for SGLT2i initiators compared with initiators of a broad assortment of non-SGLT2i antihyperglycemic medications. However, estimates were imprecise due to the low number of observed events. These estimates appear to corroborate evidence behind the FDA's initial warning, but also demonstrate the importance of considering the most appropriate active comparator when conducting comparative safety studies. When compared with initiators of DPP4i, the most commonly prescribed, branded second-line antihyperglycemic medication class, 29 SGLT2i initiation did not appear to be associated with an increased risk of FG. The DPP4i comparator reflects the clinical decision between prescribing two similar branded second-line antihyperglycemic medications, whereas the non-SGLT2i comparator group contains a more heterogeneous set of medications including first-line metformin and third-line insulin as well as generic medications. Our results in the DPP4i comparison, which we believe to be the most relevant active-comparator class of glucose-lowering drugs to SGLT2i, are also corroborated by a recent meta-analysis of randomized trials 30 as well as a recent commentary, 12 both of which addressed the topic of FG risk and concluded no increased risk for SGLT2i users. Our results also highlight the important role of outcome classification and confounding control decisions in the setting of very rare outcomes.
Although our primary outcome definition aimed to maximize specificity, the precision of the resulting estimates was low due to only six observed events from 2013 to 2017 in the DPP4i comparison. This likely contributed to differences between crude and adjusted HR estimates; this difference decreased for higher-sensitivity outcome definitions. Furthermore, the ICD-9 diagnosis code 608.83 ('vascular disorders, including Fournier's disease') is non-specific to FG and can include hematoma of the scrotum, testicles, and seminal vesicle. However, this is the best available ICD-9 code, maps directly to the ICD-10 diagnosis for FG, N49.3, and has been used in previous studies of FG. 2 4 5 Finally, the observational design of this study precluded our ability to control for unmeasured confounding, although the ACNU study design helps to minimize the potential for unmeasured confounding by implicitly controlling for confounding by indication. 13

This study fills an important evidence gap regarding FG risk with SGLT2i use, conducted using population-level data. The cohort study design allows for measurement of incidence, which was lacking in prior FAERS analyses. The two-prescription exposure definition increases confidence that patients were actually taking the study drugs compared with cohorts requiring just a single prescription. Finally, the range of outcome definitions yielded consistent conclusions.

CONCLUSIONS

No evidence of an increased risk of FG associated with SGLT2i was observed in the most relevant comparison with the commonly used, branded second-line antihyperglycemic agent, DPP4i. However, uncertainty remains based on a potentially higher risk in the broader comparison with all non-SGLT2i antihyperglycemic medications and the low absolute incidence of FG.

Contributors: JYY designed the study, performed the analyses, interpreted the results and drafted the manuscript. TW contributed to study design decisions, interpreted the results and edited the manuscript. VP performed data extraction and provided data analysis support and verification. JBB provided clinical oversight and expertise, contributed to study design decisions, interpreted the results and edited the manuscript. TS provided methods oversight and expertise, contributed to study design decisions, interpreted the results and edited the manuscript. JYY served as the guarantor.
A study of blood utilization in a tertiary care hospital in South India

Background: Monitoring blood utilization helps in effective management of blood stock to meet present and future demands in a hospital. Hence, we analyzed the age, gender and frequency distribution of each blood product used in different disease conditions.

Materials and Methods: We included all blood products utilized from January 2008 to December 2012 in our tertiary care hospital in South India. The primary and secondary discharge diagnoses (International Classification of Diseases [ICD-10]) were matched with the clinical information provided in the request forms. The most relevant indication requiring blood transfusion was selected for each recipient and grouped into broad diagnostic categories according to the headings of ICD-10. The utilization of stored whole blood, packed red blood cells (RBCs), fresh frozen plasma (FFP) and platelets was stratified according to age, gender and diagnosis.

Results: Our results indicated a decline in the usage of whole blood and an increase in the use of FFP and platelets over the years. While packed RBCs were frequently used for treating injury and poisoning conditions, platelets and FFP were preferred for infectious and parasitic diseases. Various blood products were used less frequently in patients aged over 60 years, and the overall usage of blood products was higher in males.

Conclusion: The pattern of blood product utilization is in contrast to that of Western nations, which may be due to differences in the age structure of the Indian population and the higher prevalence of infectious diseases such as dengue in our region. Nevertheless, this study highlights the importance of understanding the epidemiology of blood transfusion locally to improve the usage of blood and blood products.

INTRODUCTION

Blood obtained from voluntary non-remunerated blood donors is a scarce and precious resource, which must be effectively managed and stocked. [1] The patterns of blood transfusion have changed considerably in recent years due to advances in blood banking techniques, increased frequency of complex surgical procedures, aging populations, initiatives aimed at improving health care standards and a decrease in donor availability because of stringent screening criteria. [1,2] Although several regional and national surveys on the usage of blood components have been reported from Western nations, [1][2][3][4][5][6][7][8] such studies from developing nations like India are lacking. Hence, this study was designed to evaluate the transfusion practices in our hospital over a period of 5 years and analyze the utilization pattern of each blood product based on age group, gender and disease conditions requiring transfusion.

MATERIALS AND METHODS

A retrospective study was planned in our 590-bed tertiary care teaching hospital in Puducherry, South India. Our hospital has most of the medical and surgical specialties. In the present study, all the requests for various blood products from January 2008 to December 2012 were evaluated retrospectively. We collected data for all patients who had been issued stored whole blood, packed red blood cells (RBCs), fresh frozen plasma (FFP), platelets and cryoprecipitate. We included all units cross-matched and issued for use. All transfusions included in the study were allogeneic. Platelets were prepared from whole blood donation (random donor platelets) during the study period. Some patients had multiple admissions and multiple transfusions for different indications.
The clinical data and transfusion details were obtained from the request forms, blood bank records and computerized patient registration information. For each patient, the data included hospital number, age, gender, type and number of each blood component issued, date of issue of blood components, diagnosis requiring transfusion and any other relevant details. The International Classification of Diseases (ICD-10) version was used for classification of the diagnoses requiring transfusion of blood products. We reviewed the primary diagnosis and secondary diagnoses available as ICD-10 codes in our medical record department (MRD). The ICD-10 codes assigned by our MRD for the particular period of care in which the transfusion occurred were matched with the clinical details and diagnosis obtained from the request forms. The most likely diagnosis requiring transfusion was selected. Some request forms lacked a specific diagnosis, or the requesting physician had mentioned only a provisional diagnosis which did not tally with the ICD-10 codes for the primary or secondary discharge diagnoses. In such cases the most relevant diagnosis requiring transfusion was selected after a careful review of the clinical details recorded in case files. Though the primary ICD-10 code was chosen as the diagnosis requiring transfusion in the majority of cases, the secondary ICD-10 code proved to be the most appropriate indicator in some cases. The diagnoses were then grouped into broad categories according to the headings of ICD-10 for further analysis. The data were entered in Microsoft Excel (Microsoft Corp., USA) and analyzed using IBM SPSS Statistics for Windows (version 20.0; Armonk, New York: IBM Corporation).

RESULTS

During the study period, 3876 stored whole blood units, 7137 packed RBC units, 5215 FFP units, 4455 platelet units and nine cryoprecipitate units were issued for use in patients admitted to our hospital. We excluded cryoprecipitate from further analysis since the total number of units was markedly less compared to other blood products. Variation in the usage of whole blood and its components over the 5-year study period is summarized in [Figure 1]. An overall progressive increase in the requirement of blood products was evident, while a marked decline in the utilization of whole blood and a significant increase in the consumption of FFP and platelets over the years was observed. The change from 2008 to 2012 was as follows: whole blood (33.4% to 3.9%), RBCs (35.1% to 32.6%), FFP (20.3% to 32.3%) and platelets (11.1% to 31.2%). Patients aged 60 years or less had received 85.9% of all whole blood units, 79.1% of RBCs, 86.7% of FFP and 88.8% of platelets during the study period (tabulation not shown). The utilization of blood products in patients aged over 60 years was markedly less compared to the younger age groups. Table 2 compares our utilization by broad age groups with that of the PROTON (PROfiles of TransfusiON recipients) study from the Netherlands. [7] In our study, males had received 58.1% of whole blood units, 55.7% of RBCs, 67.1% of FFP and 61.1% of platelet units [Table 1]. Analysis by broad ICD-10 headings is summarized in [Table 1]. Utilization of whole blood, RBCs and FFP for injury and poisoning was significantly higher in males. Similarly, FFP utilization was higher in males compared with females for digestive diseases. Platelet and FFP usage for infectious and parasitic diseases was also significantly higher in males.
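The grouping of transfusion indications into broad ICD-10 headings can be sketched in a few lines. The snippet below is a minimal Python/pandas illustration, not the authors' SPSS workflow; the record layout, the example codes and the truncated chapter map are all illustrative.

```python
import pandas as pd

# Hypothetical per-issue records; column names are illustrative, not the hospital's schema.
issues = pd.DataFrame({
    "product": ["RBC", "FFP", "Platelet", "Whole blood"],
    "icd10":   ["S36.1", "A91", "A91", "O72.1"],
    "age":     [34, 22, 25, 29],
    "sex":     ["M", "M", "F", "F"],
})

# Map an ICD-10 code to a broad chapter heading via its leading letter;
# only a few chapters are shown here for brevity.
def icd10_chapter(code: str) -> str:
    letter = code[0]
    if letter in "AB":
        return "Infectious and parasitic diseases"
    if letter in "ST":
        return "Injury and poisoning"
    if letter == "O":
        return "Pregnancy and childbirth"
    return "Other"

issues["chapter"] = issues["icd10"].map(icd10_chapter)

# Utilization of each product by diagnostic category (counts of issued units).
print(pd.crosstab(issues["chapter"], issues["product"]))
```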
The utilization of whole blood and components was higher in males in most of the categories evaluated in this study. Figures 2 and 3 show the variation of whole blood and component use with age and gender. Variation was minimal in the pediatric age group. The requirement of RBCs was marginally higher in younger females between 17 and 40 years, mainly related to childbirth. However, the use of platelets and FFP was higher in younger males. The requirement of whole blood and components was higher in middle-aged males between 41 and 64 years as well as in elderly men aged 65 years or more, compared to females in the respective age groups.

DISCUSSION

Previous studies have used either the primary discharge diagnosis ICD code [7,9] or the diagnosis mentioned in preprinted data collection forms for allocation of the indication for transfusion. [4][5][6] The primary ICD code reflects the final discharge diagnosis and can be retrieved with ease in a retrospective study. The request form reflects the actual condition for which transfusion was given. In our pilot study, the clinical data in the request forms were compared with ICD-10 codes as well as the details mentioned in case files. We observed that the most appropriate diagnosis requiring transfusion need not always be based on the primary ICD-10 code. Though the request form often reflects the indication for transfusion, the physicians may mention a provisional diagnosis at the time of transfusion, or a diagnostic procedure like bone marrow aspiration or histopathological evaluation could subsequently alter the initial diagnosis. Hence, instead of choosing the principal code in all cases, we matched the primary and secondary ICD-10 codes with the clinical details provided in the request form to select the most appropriate condition requiring transfusion. Western studies generally use computerized systems to match blood product usage with clinical data. This permits easy retrieval and management of enormous quantities of data. Blood banks in developing nations often lack such facilities and depend on paper-based resources, which may account for the relative paucity of studies on patterns of blood utilization in developing nations. We have compared our patterns of utilization of blood products with studies from other high, middle and low income countries, especially the PROTON study from the Netherlands [7] and two institutional studies from Sao Paulo, Brazil [9] and Uganda, [10] to understand variations in utilization patterns across countries. The PROTON study compiled information from 20 centers including general, academic and cancer hospitals. Though most whole blood units are separated into components in Western countries, low and middle income countries still continue to utilize whole blood. [10][11][12] Only about 35% of blood units are separated into components in India. [11] Component separation is increasing (from 33.7% in 2006-07 to 50.5% in 2011-12) in states like Gujarat, [13] and the National AIDS Control Organisation is supporting the installation of blood component separation units in various states to facilitate component separation and encourage appropriate clinical use of blood. The utilization of whole blood in our hospital is also declining [Figure 1] and accounts for less than 4% of all blood products used in 2012. A major worry in Western countries is the increase in utilization of various components with increasing age among transfusion recipients. [2,4,7,8,14]
The dynamic demographic changes in the developed nations pose a tremendous challenge for blood services because ultimately it will become more difficult to recruit blood donors. Small increases in the number of elderly people will have large effects on demand, [4] and a 30% increase in RBC transfusion is projected by the year 2030. [8] The age structure of blood recipients in our study differs markedly in comparison to the West [Table 2]. The maximal utilization of all products in our study was in the age groups of 21-50 years. The age distribution of transfusion recipients in other developing nations also did not match the trends observed in Western countries. [9,10,12] Previous studies have also noted a variation in blood utilization with respect to gender. [7,9] The PROTON study revealed a higher RBC utilization in younger women due to childbirth. [7] Elderly men were transfused more blood products, especially FFP, due to a higher probability of being hospitalized for circulatory diseases in the Netherlands. [7] Males received 62% of all components in the Brazilian study. [9] The indications for transfusion also vary across countries. The diagnostic categories that used most blood products in the PROTON study were diseases of the circulatory system, neoplasms, injury and poisoning, diseases of the digestive system and disorders of blood. [7] Interestingly, platelet and FFP utilization for infectious diseases in the PROTON study was significantly lower compared with our study. The common indications in the Brazilian study include neoplasms, diseases of the digestive and circulatory systems, and injury and poisoning. Platelets and plasma were used frequently for infectious diseases in the Brazilian study also (13% and 15%, respectively). [9]
Figure 2: Age distribution of whole blood and red blood cell recipients.
Figure 3: Age distribution of fresh frozen plasma and platelet recipients.
While malaria (33.1%), non-malarial infections (19.2%) and bleeding (29.6%), especially obstetric hemorrhage and gastrointestinal bleeding, were common causes of transfusion in Uganda. [10] Dengue was the most common infection, utilizing about 80% of FFP and 90% of platelets issued for infectious and parasitic diseases in our study. Malaria accounted for only 1.5% of FFP and platelet usage. Dengue is becoming a major health problem in our region. Outbreaks of epidemics are associated with a significant increase in the requirement of blood components, especially platelets. Dengue can also reduce the blood donor pool in endemic regions. [15] Hence, clear and specific guidelines for the usage of platelets and FFP in dengue are essential to prevent inappropriate usage of these precious blood products, especially in areas with limited resources. [15] The pattern of usage of blood products reflects the relative frequency of various disease conditions. Infectious diseases are more prevalent in developing countries. Puducherry also happens to be a highly accident-prone region. [16] This explains the frequent usage of blood components for injuries in our study. Our center is not a specialized cancer care hospital, and this could also be related to the lower utilization of blood components for neoplasms. The pattern of blood utilization can also vary because of the age composition of the population. Africa, Latin America and South Asia have very young age structures with about half of the population under age 25. The proportion of the population above age 65 is much greater in Europe in comparison to Africa and South Asia. [17,18]
Figure 4 shows a comparison of age pyramids. Socioeconomic factors in the developing world could also hinder elderly people, especially women, from accessing effective health care facilities. Recent studies on patterns of blood product utilization are lacking in India. Our study has limitations because our data are derived from a single center and represent only a small proportion of the South Indian population. Well-designed national studies are essential to understand the variation in transfusion practices and formulate guidelines to improve transfusion practices in India. Nevertheless, the current study highlights the importance of understanding the epidemiology of blood transfusion locally for planning optimal blood-saving strategies.
Large-scale Log-determinant Computation through Stochastic Chebyshev Expansions

Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning. Log-determinant computation involves the Cholesky decomposition at a cost cubic in the number of variables, i.e., the matrix dimension, which makes it prohibitive for large-scale applications. We propose a linear-time randomized algorithm to approximate log-determinants for very large-scale positive definite and general non-singular matrices using a stochastic trace approximation, called the Hutchinson method, coupled with Chebyshev polynomial expansions, both of which rely on efficient matrix-vector multiplications. We establish rigorous additive and multiplicative approximation error bounds depending on the condition number of the input matrix. In our experiments, the proposed algorithm provides very high accuracy solutions orders of magnitude faster than the Cholesky decomposition and the Schur complement method, and enables us to compute log-determinants of matrices involving tens of millions of variables.

Introduction

Scalability of machine learning algorithms for extremely large data-sets and models has been increasingly the focus of attention for the machine learning community, with prominent examples such as first-order stochastic optimization methods and randomized linear algebraic computations. One of the important tasks from linear algebra that appears in a variety of machine learning problems is computing the log-determinant of a large positive definite matrix. For example, serving as the normalization constant for multivariate Gaussian models, log-determinants of covariance (and precision) matrices play an important role in inference, model selection and learning both the structure and the parameters of Gaussian graphical models and Gaussian processes [25,23,10]. Log-determinants also play an important role in a variety of Bayesian machine learning problems, including sampling and variational inference [17]. In addition, metric and kernel learning problems attempt to learn quadratic forms adapted to the data, and formulations involving Bregman divergences of log-determinants have become very popular [9,30]. Finally, log-determinant computation also appears in some discrete probabilistic models, e.g., tree mixture models [20,1] and Markov random fields [31]. In planar Markov random fields [26,16], inference and learning involve log-determinants of general non-singular matrices.

For a positive semi-definite matrix $B \in \mathbb{R}^{d \times d}$, numerical linear algebra experts recommend computing the log-determinant using the Cholesky decomposition. If the Cholesky decomposition is $B = LL^\top$, then $\log\det(B) = 2 \sum_i \log L_{ii}$. The computational complexity of the Cholesky decomposition is cubic with respect to the number of variables, i.e., $O(d^3)$. For large-scale applications involving more than tens of thousands of variables, this operation is not feasible. Our aim in this paper is to compute accurate approximate log-determinants for matrices of much larger size, involving tens of millions of variables.

Contribution. Our approach to compute accurate approximations of the log-determinant for a positive definite matrix uses a combination of stochastic trace estimators and Chebyshev polynomial expansions.
Using the Chebyshev polynomials, we first approximate the log-determinant by the trace of power series of the input matrix. We then use a stochastic trace-estimator, called the Hutchison method [14], to estimate the trace using multiplications between the input matrix and random vectors. The main assumption for our method is that the matrix-vector product can be computed efficiently. For example, the time-complexity of the proposed algorithm grows linearly with respect to the number of non-zero entries in the input matrix. We also extend our approach to general non-singular matrices to compute the absolute values of their log-determinants. We establish rigorous additive and multiplicative approximation error bounds for approximating the log-determinant under the proposed algorithm. Our theoretical results provide an analytic understanding on our Chebyshev-Hutchison method depending on sampling number, polynomial degree and the condition number (i.e., the ratio between the largest and smallest singular values) of the input matrix. In particular, they imply that if the condition number is O(1), then the algorithm provides ε-approximation guarantee (in multiplicative or additive) in linear time for any constant ε > 0. We first apply our algorithm to obtain a randomized linear-time approximation scheme for counting the number of spanning trees in a certain class of graphs where it could be used for efficient inference in tree mixture models [20,1]. We also apply our algorithm for finding maximum likelihood parameter estimates of Gaussian Markov random fields of size 5000 × 5000 (involving 25 million variables!), which is infeasible for the Cholesky decomposition. Our experiments show that our proposed algorithm is orders of magnitude faster than the Cholesky decomposition and Schur completion for sparse matrices and provides solutions with 99.9% accuracy in approximation. It can also solve problems of dimension tens of millions in a few minutes on our single commodity computer. Furthermore, the proposed algorithm is very easy to parallelize and hence has a potential to handle even a bigger size. In particular, the Schur method was used as a part of QUIC algorithm [13] for sparse inverse covariance estimation with over million variables, hence our algorithm could be used to further improve its speed and scale. Related work. Stochastic trace estimators have been studied in the literature in a number of applications. [6,18] have used a stochastic trace estimator to compute the diagonal of a matrix or of matrix inverse. Polynomial approximations to band-pass filters have been used to count the number of eigenvalues in certain intervals [11]. Stochastic approximations of score equations have been applied in [27] to learn large-scale Gaussian processes. The works closest to ours which have used stochastic trace estimators for Gaussian process parameter learning are [33] and [3] which instead use Taylor expansions and Cauchy integral formula, respectively. A recent improved analysis using Taylor expansions has also appeared in [8]. However, as reported in Section 5, our method using Chebyshev expansions provides much better accuracy in experiments than that using Taylor expansions, and [3] need Krylov-subspace linear system solver that is computationally expensive. [22] also use Chebyshev polynomials for log-determinant computation, but the method is deterministic and only applicable to polynomials of small degree. 
The novelty of our work is combining the Chebyshev approximation with Hutchinson trace estimators, which allows us to design a linear-time algorithm with rigorous approximation guarantees.

Organization. The structure of the paper is as follows. We introduce the necessary background in Section 2, and describe our algorithm with approximation guarantees in Section 3. Section 4 provides the proof of the approximation guarantee of our algorithm, and we report experimental results in Section 5.

Background

In this section, we describe the preliminaries for our approach to approximate the log-determinant of a positive definite matrix. Our approach combines the following two techniques: (a) designing a trace estimator for the log-determinant of a positive definite matrix via Chebyshev approximation [19] and (b) approximating the trace of a positive definite matrix via Monte Carlo methods, e.g., the Hutchinson method [14].

Chebyshev Approximation

The Chebyshev approximation technique is used to approximate an analytic function with certain orthonormal polynomials. We use $p_n(x)$ to denote the Chebyshev approximation of degree $n$ for a given function $f : [-1, 1] \to \mathbb{R}$:

$$p_n(x) = \sum_{i=0}^{n} c_i T_i(x), \qquad (1)$$

where the coefficient $c_i$ and the $i$-th Chebyshev polynomial $T_i(x)$ are defined as

$$c_i = \frac{2 - \delta_{i0}}{n+1} \sum_{k=0}^{n} f(x_k)\, T_i(x_k), \qquad T_{i+1}(x) = 2x\, T_i(x) - T_{i-1}(x), \qquad (2)$$

where $x_k = \cos\left(\pi (k + 1/2)/(n+1)\right)$ for $k = 0, 1, 2, \ldots, n$ and $T_0(x) = 1$, $T_1(x) = x$.

Chebyshev approximation for scalar functions can be naturally generalized to matrix functions. Using the Chebyshev approximation $p_n(x)$ for the function $f(x) = \log(1-x)$, we obtain the following approximation to the log-determinant of a positive definite matrix $B \in \mathbb{R}^{d \times d}$:

$$\log\det(B) = \sum_{i=1}^{d} \log(1 - \lambda_i) \approx \sum_{i=1}^{d} p_n(\lambda_i) = \mathrm{tr}\big(p_n(A)\big),$$

where $A = I - B$ has eigenvalues $0 \le \lambda_1, \ldots, \lambda_d \le 1$ and the last equality is from the fact that $\sum_{i=1}^{d} p(\lambda_i) = \mathrm{tr}(p(A))$ for any polynomial $p$. We remark that other polynomial approximations, e.g., Taylor, can also be used to approximate log-determinants. We focus on the Chebyshev approximation in this paper due to its superior empirical performance and rigorous error analysis.

Trace Approximation via Monte-Carlo Method

The main challenge in computing the log-determinant of a positive definite matrix in the previous section is calculating the trace of $T_j(A)$ efficiently without evaluating the entire matrix $T_j(A)$. We consider a Monte-Carlo approach for estimating the trace of a matrix. First, a random vector $z$ is drawn from some fixed distribution, such that the expectation of $z^\top A z$ is equal to the trace of $A$. By sampling $m$ such i.i.d. random vectors and averaging, we obtain an estimate of $\mathrm{tr}(A)$. It is known that the Hutchinson method, where the components of the random vectors $z$ are i.i.d. Rademacher random variables, i.e., $\Pr(+1) = \Pr(-1) = 1/2$, has the smallest variance among such Monte-Carlo methods [14,5]. It has been used extensively in many applications [4,14,2]. Formally, the Hutchinson trace estimator $\mathrm{tr}_m(A)$ is defined as and satisfies

$$\mathrm{tr}_m(A) = \frac{1}{m} \sum_{i=1}^{m} z_i^\top A z_i, \qquad \mathbb{E}\big[\mathrm{tr}_m(A)\big] = \mathrm{tr}(A).$$

Note that computing $z^\top A z$ requires only multiplications between a matrix and a vector, which is particularly appealing when evaluating $A$ itself is expensive, e.g., $A = B^k$ for some matrix $B$ and large $k$. Furthermore, for the case $A = T_j(X)$, one can compute $z^\top T_j(X)\, z$ more efficiently using the following recursion on the vectors $w_j = T_j(X) z$:

$$w_{j+1} = 2 X w_j - w_{j-1}, \qquad w_0 = z, \quad w_1 = X z,$$

which follows directly from (2).

Log-determinant Approximation Scheme

Now we are ready to present algorithms to approximate the absolute value of the log-determinant of an arbitrary non-singular square matrix $C$.
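Before the formal algorithm statements, the two background ingredients above can be combined into a short numerical sketch. The code below is a minimal NumPy illustration, not the authors' implementation: it builds the Chebyshev interpolation of log(1-x) on a sub-interval [delta, 1-delta] (using an affine map to [-1, 1]), then estimates tr(p_n(A)) with Rademacher probe vectors and the three-term recursion; the sampling number m and degree n in the example are illustrative choices.

```python
import numpy as np

def chebyshev_logdet(B, m=30, n=15, delta=0.05, seed=0):
    """Stochastic Chebyshev estimate of log det(B) for a symmetric positive definite B
    whose eigenvalues lie in [delta, 1 - delta]. A minimal sketch, not optimized."""
    d = B.shape[0]
    A = np.eye(d) - B                      # eigenvalues of A also lie in [delta, 1 - delta]
    a, b = delta, 1.0 - delta

    # Chebyshev interpolation of f(x) = log(1 - x) on [a, b], at Chebyshev points
    # of the first kind mapped from [-1, 1].
    k = np.arange(n + 1)
    t = np.cos(np.pi * (k + 0.5) / (n + 1))              # nodes in [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (a + b)                 # nodes in [a, b]
    fx = np.log(1.0 - x)
    T = np.cos(np.outer(np.arccos(t), np.arange(n + 1)))  # T[k, j] = T_j(t_k)
    c = 2.0 / (n + 1) * (T * fx[:, None]).sum(axis=0)
    c[0] *= 0.5

    # Affine map of A into [-1, 1] so the three-term recursion applies directly.
    def mapped_matvec(v):
        return (2.0 * (A @ v) - (a + b) * v) / (b - a)

    rng = np.random.default_rng(seed)
    estimate = 0.0
    for _ in range(m):
        z = rng.choice([-1.0, 1.0], size=d)               # Rademacher probe vector
        w_prev, w = z, mapped_matvec(z)                    # T_0 z and T_1 z
        acc = c[0] * (z @ w_prev) + c[1] * (z @ w)
        for j in range(2, n + 1):
            w_prev, w = w, 2.0 * mapped_matvec(w) - w_prev
            acc += c[j] * (z @ w)
        estimate += acc / m
    return estimate

# Quick check on a small matrix with eigenvalues safely inside (0, 1).
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((200, 200)))
B = (Q * rng.uniform(0.1, 0.9, 200)) @ Q.T
print(chebyshev_logdet(B, m=50, n=20, delta=0.1), np.linalg.slogdet(B)[1])
```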
Without loss of generality, we assume that singular values of C are in the interval [σ min , σ max ] for some σ min , σ max > 0, i.e., the condition number κ(C) is at most κ max := σ max /σ min . The proposed algorithms are not sensitive to tight knowledge of σ min , σ max , but some loose lower and upper bounds on them, respectively, suffice. We first present a log-determinant approximation scheme for positive definite matrices in Section 3.1 and that for general non-singular ones in Section 3.2 later. Algorithm for Positive Definite Matrices In this section, we describe our proposed algorithm for estimating the log-determinant of a positive definite matrix whose eigenvalues are less than one, i.e., σ max < 1. It is used as a subroutine for estimating the log-determinant of a general non-singular matrix in the next section. The formal description of the algorithm is given in what follows. Algorithm 1 Log-determinant approximation for positive definite matrices with σ max < 1 Input: positive definite matrix B ∈ R d×d with eigenvalues in [δ , 1 − δ] for some δ > 0, sampling number m and polynomial degree n Initialize: We establish the following theoretical guarantee of the above algorithm, where its proof is given in Section 4.3. Theorem 1 Given ε, ζ ∈ (0, 1), consider the following inputs for Algorithm 1: Then, it follows that The bound on polynomial degree n in the above theorem is relatively tight, e.g., it implies to choose n = 14 for δ = 0.1 and ε = 0.01. However, our bound on sampling number m is not, where we observe that m ≈ 30 is sufficient for high accuracy in our experiments. We also remark that the time-complexity of Algorithm 1 is O(mn B 0 ), where B 0 is the number of non-zero entries of B. This is because the algorithm requires only multiplications of matrices and vectors. In particular, if m, n = O(1), the complexity is linear with respect to the input size. Therefore, Theorem 1 implies that one can choose m, n = O(1) for ε-multiplicative approximation with probability 1 − ζ given constants ε, ζ > 0. Algorithm for General Non-Singular Matrices Now, we are ready to present our linear-time approximation scheme for the log-determinant of general non-singular matrix C, through generalizing the algorithm in the previous section. The idea is simple: run Algorithm 1 with normalization of positive definite matrix C T C. This is formally described in what follows. Algorithm 2 Log-determinant approximation for general non-singular matrices Input: matrix C ∈ R d×d with singular values are in the interval [σ min , σ max ] for some σ min , σ max > 0, sampling number m and polynomial degree n Algorithm 2 is motivated to design from the equality log | det C| = 1 2 log det C T C. Given non-singular matrix C, one need to choose appropriate σ max , σ min to run it. In most applications, σ max is easy to choose, e.g., one can choose or one can run the power iteration [15] to estimate a better bound. On the other hand, σ min is relatively not easy to obtain depending on problems. It is easy to obtain in the problem of counting spanning trees we studied in Section 3.3, and it is explicitly given as a parameter in many machine learning log-determinant applications [31]. In general, one can use the inverse power iteration [15] to estimate it. Furthermore, the smallest singular value is easy to compute for random matrices [29,28] and diagonal-dominant matrices [12,21]. 
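The reduction behind Algorithm 2 can also be sketched briefly. Since the exact scaling used inside Algorithm 2 is not reproduced in the text above, the normalization below (dividing C^T C by sigma_min^2 + sigma_max^2 so that its eigenvalues fall in [delta, 1-delta]) should be read as one plausible choice rather than the authors' exact recipe; the helper chebyshev_logdet refers to the sketch given earlier, and in a serious implementation one would avoid forming C^T C explicitly and instead pass a matrix-vector product v -> C.T @ (C @ v).

```python
import numpy as np

def logdet_abs_general(C, sigma_min, sigma_max, m=30, n=15):
    """Estimate log |det C| for non-singular C with singular values in
    [sigma_min, sigma_max], via log|det C| = 0.5 * log det(C^T C).
    The scaling below is one plausible normalization, not necessarily the paper's."""
    d = C.shape[0]
    scale = sigma_min**2 + sigma_max**2
    B = (C.T @ C) / scale                    # eigenvalues lie in [delta, 1 - delta]
    delta = sigma_min**2 / scale
    logdet_B = chebyshev_logdet(B, m=m, n=n, delta=delta)
    return 0.5 * (logdet_B + d * np.log(scale))

# Example with loose singular value bounds (here taken from an exact SVD for checking).
rng = np.random.default_rng(2)
C = rng.standard_normal((300, 300)) + 5 * np.eye(300)
s = np.linalg.svd(C, compute_uv=False)
print(logdet_abs_general(C, s.min(), s.max(), m=50, n=20), np.linalg.slogdet(C)[1])
```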
The time-complexity of Algorithm 2 is still O(mn C 0 ) instead of O(mn C T C 0 ) since Algorithm 1 requires multiplication of matrix C T C and vectors. We state the following additive error bound of the above algorithm. • m ≥ M ε, σmax σmin , ζ and n ≥ N ε, σmax σmin , where ε 2 log 1 + κ 2 2 log 2 ζ Then, it follows that Proof. The proof of Theorem 2 is quite straightforward using Theorem 1 for B with the facts that We remark that the condition number σ max /σ min decides the complexity of Algorithm 2. As one can expect, the approximation quality and algorithm complexity become worse for matrices with very large condition numbers, as the Chebyshev approximation for the function log x near the point 0 is more challenging and requires higher degree approximations. When σ max ≥ 1 and σ min ≤ 1, i.e. we have mixed signs for logs of the singular values, a multiplicative error bound (as stated in Theorem 1) can not be obtained since the log-determinant can be zero in the worst case. On the other hand, when σ max < 1 or σ min > 1, we further show that the above algorithm achieves an ε-multiplicative approximation guarantee, as stated in the following corollaries. Corollary 3 Given ε, ζ ∈ (0, 1), consider the following inputs for Algorithm 2: • C ∈ R d×d be a matrix such that singular values are in the interval [σ min , σ max ] for some σ max < 1. Corollary 4 Given ε, ζ ∈ (0, 1), consider the following inputs for Algorithm 2: • C ∈ R d×d be a matrix such that singular values are in the interval [σ min , σ max ] for some σ min > 1. The proofs of the above corollaries are given in the supplementary material due to the space limitation. Application to Counting Spanning Trees We apply Algorithm 2 to a concrete problem, where we study counting the number of spanning trees in a simple undirected graph G = (V, E) where there exists a vertex i * such that (i * , j) ∈ E for all j ∈ V \ {i * }. Counting spanning trees is one of classical well-studied counting problems, and also necessary in machine learning applications, e.g., tree mixture models [20,1]. We denote the maximum and average degrees of vertices in V \ {i * } by ∆ max and ∆ avg > 1, respectively. In addition, we let L(G) denote the Laplacian matrix of G. Then, from Kirchhoff's matrix-tree theorem, the number of spanning tree τ (G) is equal to where L(i * ) is the (|V | − 1) × (|V | − 1) sub matrix of L(G) that is obtained by eliminating the row and column corresponding to i * . Now, it is easy to check that eigenvalues of L(i * ) are in [1, 2∆ max − 1]. Under these observations, we derive the following corollary. Corollary 5 Given 0 < ε < 2 ∆avg−1 , ζ ∈ (0, 1), consider the following inputs for Algorithm 2: The proof of the above corollary is given in the supplementary material due to the space limitation. We remark that the running time of Algorithm 2 with inputs in the above theorem is O(nm∆ avg |V |). Therefore, for ε, ζ = Ω(1) and ∆ avg = O(1), i.e., G is sparse, one can choose n, m = O(1) so that the running time of Algorithm 2 is O(|V |). Proof of Theorem 1 In order to prove Theorem 1, we first introduce some necessary background and lemmas on error bounds of Chebyshev approximation and Hutchinson method we introduced in Section 2.1 and Section 2.2, respectively. Convergence Rate for Chebyshev Approximation Intuitively, one can expect that the approximated Chebyshev polynomial converges to its original function as degree n goes to ∞. Formally, the following error bound is known [7,32]. 
Theorem 6 Suppose f is analytic with |f (z)| ≤ M in the region bounded by the ellipse with foci ±1 and major and minor semiaxis lengths summing to K > 1. Let p n denote the interpolant of f of degree n in th Chebyshev points as defined in section 2.1, then for each n ≥ 0, To prove Theorem 1 and Theorem 2, we are in particular interested in For notational convenience, we use p n (x) to denote (p n • g −1 )(x) in what follows. We choose the ellipse region, denoted by E K , in the complex plane with foci ±1 and its semimajor axis length is where f •g is analytic on and inside. The length of semimajor axis of the ellipse is equal to Hence, the convergence rate K can be set to The constant M can be also obtained using the fact that |log z| = |log |z| + i arg (z)| ≤ (log |z|) 2 + π 2 for any z ∈ C as follows: Hence, for x ∈ [δ, 1 − δ], Under these observations, we establish the following lemma that is a 'matrix version' of Theorem 6. Lemma 7 Let B ∈ R d×d be a positive definite matrix whose eigenvalues are in [δ, 1 − δ] for δ ∈ (0, 1/2). Then, it holds that where we use Theorem 6. This completes the proof of Lemma 7. Approximation Error of Hutchinson Method In this section, we use the same notation, e.g., f, p n , used in the previous section and we analyze the Hutchinson's trace estimator tr m (·) defined in Section 2.2. To begin with, we state the following theorem that is proven in [24]. The theorem above provides a lower-bound on the sampling complexity of Hutchinson method, which is independent of a given matrix A. To prove Theorem 1, we need an error bound on tr m (p n (A)). However, in general we may not know whether or not p n (A) is positive definite or negative definite. We can guarantee that the eigenvalues of p n (A) will be negative using the following lemma. Proof of the Theorem 1 Now we are ready to prove Theorem 1. First, one can check that sampling number n in the condition of Theorem 1 satisfies 20 log 2( 1 δ − 1) Hence, from Lemma 9, it follows that p n (A) is negative definite where A = I −B and eigenvalues of B are in [δ, 1−δ]. Hence, we can apply Theorem 8 as for m ≥ 54ε −2 log 2 ζ . In addition, from Theorem 7, we have which implies that Combining (3), (4) and (5) leads to the conclusion of Theorem 1 as follows: where Γ = tr m (p n (A)). Experiments We now study our proposed algorithm on numerical experiments with simulated and real data. Performance Evaluation and Comparison We first investigate the empirical performance of our proposed algorithm on large sparse random matrices. We generate a random matrix C ∈ R d×d , where the number of non-zero entries per each row is around 10. We first select five nonzero off-diagonal entries in each row with values uniformly distributed in [−1, 1]. To make the matrix symmetric, we set the entries in transposed positions to the same values. Finally, to guarantee positive definiteness, we set its diagonal entries to absolute row-sums and add a small weight, 10 −3 . Figure 1 (a) shows the running time of Algorithm 2 from d = 10 3 to 3 × 10 7 , where we choose m = 10, n = 15, σ min = 10 −3 and σ max = C 1 . It scales roughly linearly over a large range of sizes. We use a machine with 3.40 Ghz Intel I7 processor with 24 GB RAM. It takes only 500 seconds for a matrix of size 3 × 10 7 with 3 × 10 8 non-zero entries. In Figure 1 (b), we study the relative accuracy compared to the exact log-determinant computation up-to size 3 × 10 4 . Relative errors are very small, below 0.1%, and appear to only improve for higher dimensions. 
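The sparse test matrices used in this setup are easy to reproduce in spirit. The sketch below is an illustrative SciPy construction following the description above (roughly five random off-diagonal entries per row with values in [-1, 1], symmetrized, diagonal set to the absolute row sum plus a small weight of 1e-3); it is not the authors' generator, and parameters are placeholders.

```python
import numpy as np
import scipy.sparse as sp

def random_test_matrix(d, nnz_per_row=5, seed=0):
    """Sparse, symmetric, strictly diagonally dominant (hence positive definite)
    test matrix in the spirit of the experimental setup described above."""
    rng = np.random.default_rng(seed)
    rows = np.repeat(np.arange(d), nnz_per_row)
    cols = rng.integers(0, d, size=d * nnz_per_row)
    vals = rng.uniform(-1.0, 1.0, size=d * nnz_per_row)
    M = sp.coo_matrix((vals, (rows, cols)), shape=(d, d)).tocsr()
    M = M + M.T                                              # symmetrize
    diag = np.asarray(abs(M).sum(axis=1)).ravel() + 1e-3     # absolute row sums + small weight
    return M + sp.diags(diag)

C = random_test_matrix(10_000)
print(C.shape, C.nnz)
```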
Under the same setup, we also compare the running time of our algorithm with other algorithm for computing determinants: Cholesky decomposition and Schur complement. The latter was used for sparse inverse covariance estimation with over a million variables [13] and we run the code implemented by the authors. The running time of the algorithms are reported in Figure 1 (c). The proposed algorithm is dramatically faster than both exact algorithms. We also compare the accuracy of our algorithm to a related stochastic algorithm that uses Taylor expansions [33]. For a fair comparison we use a large number of samples, n = 1000, for both algorithms to focus on the polynomial approximation errors. The results are reported in Figure 1 (d), showing that our algorithm using Chebyshev expansions is superior in accuracy compared to the one based on Taylor series. Maximum Likelihood Estimation for GMRF GMRF with 25 million variables for synthetic data. We now apply our proposed algorithm for maximum likelihood (ML) estimation in Gaussian Markov Random Fields (GMRF) [25]. GMRF is a multi-variate joint Gaussian distribution defined with respect to a graph. Each node of the graph corresponds to a random variable in the Gaussian distribution, where the graph captures the conditional independence relationships (Markov properties) among the random variables. The model has been extensively used in many applications in computer vision, spatial statistics, and other fields. The inverse covariance matrix J (also called information or precision matrix) is positive definite and sparse: J ij is non-zero only if the edge {i, j} is contained in the graph. We first consider a GMRF on a square grid of size 5000 × 5000 (with d = 25 million variables) with precision matrix J ∈ R d×d parameterized by ρ, i.e., each node has four neighbors with partial correlation ρ. We generate a sample x from the GMRF model (using Gibbs sampler) for parameter ρ = −0.22. The log-likelihood of the sample is: log p(x|ρ) = log det J(ρ) − x J(ρ)x + G, where J(ρ) is a matrix of dimension 25 × 10 6 and 10 8 non-zero entries, and G is a constant independent of ρ. We use Algorithm 2 to estimate the log-likelihood as a function of ρ, as reported in Figure 3. The estimated log-likelihood is maximized at the correct (hidden) value ρ = −0.22. GMRF with 6 million variables for Ozone data. We also consider GMRF parameter estimation from real spatial data with missing values. We use the data-set from [3] that provides satellite measurements of Ozone levels over the entire earth following the satellite tracks. We use a resolution of 0.1 degrees in lattitude and longitude, giving a spatial field of size 1681 × 3601, with over 6 million variables. The data-set includes 172 thousands measurements. To estimate the log-likelihood in presence of missing values, we use the Schur-complement formula for determinants. obtain ML estimates using Algorithm 2. Note that σ min (J) = α. We show the sparse measurements in Figure 2 (a) and the GMRF interpolation using fitted values of parameters in Figure 2 (b). Conclusion Tools from numerical linear algebra, e.g. determinants, matrix inversion and linear solvers, eigenvalue computation and other matrix decompositions, have been playing an important theoretical and computational role for machine learning applications. While most matrix computations admit polynomial-time algorithms, they are often infeasible for largescale or high-dimensional data-sets. 
In this paper, we design and analyze a high accuracy linear-time approximation algorithm for the logarithm of matrix determinants, where its exact computation requires cubic-time. Furthermore, it is very easy to parallelize since it requires only (separable) matrix-vector multiplications. We believe that the proposed algorithm will find numerous applications in machine learning problems. A Proof of Corollary 3 For given ε < 2 log(σ 2 max ) , set ε 0 = ε 2 log 1 σ 2 max . Since all eigenvalues of C T C are positive and less than 1, it follows that where λ i are i-th eigenvalues of C T C. Thus, We use ε 0 instead of ε from Theorem 2, then following holds if m and n satifies below condition. B Proof of Corollary 4 Similar to proof of Corollary 3, set ε 0 = ε 2 log σ 2 min . Since eigenvalues of C T C are greater than 1, This completes the proof of Corollary 5.
Single Polymer Dynamics for Molecular Rheology Single polymer dynamics offers a powerful approach to study molecular-level interactions and dynamic microstructure in materials. Direct visualization of single chain dynamics has uncovered new ideas regarding the rheology and non-equilibrium dynamics of macromolecules, including the importance of molecular individualism, dynamic heterogeneity, and molecular sub-populations that govern macroscale behavior. In recent years, the field of single polymer dynamics has been extended to increasingly complex materials, including architecturally complex polymers such as combs, bottlebrushes, and ring polymers and entangled solutions of long chain polymers in flow. Single molecule visualization, complemented by modeling and simulation techniques such as Brownian dynamics and Monte Carlo methods, allow for unparalleled access to the molecular-scale dynamics of polymeric materials. In this review, recent progress in the field of single polymer dynamics is examined by highlighting major developments and new physics to emerge from these techniques. The molecular properties of DNA as a model polymer are examined, including the role of flexibility, excluded volume interactions, and hydrodynamic interactions in governing behavior. Recent developments in studying polymer dynamics in time-dependent flows, new chemistries and new molecular topologies, and the role of intermolecular interactions in concentrated solutions are considered. Moreover, cutting-edge methods in simulation techniques are further reviewed as an ideal complementary method to single polymer experiments. Future work aimed at extending the field of single polymer dynamics to new materials promises to uncover original and unexpected information regarding the flow dynamics of polymeric systems. I. INTRODUCTION A grand challenge in the field of rheology is to understand how the emergent or macroscopic properties of complex materials arise from microscopic interactions. To this end, a tremendous amount of research has been focused on the development of molecular-level constitutive equations and kinetic theory for polymeric liquids [1][2][3][4][5][6]. In tandem, bulk-level experimental methods including mechanical rheometry and optical rheometry have been used to study the macroscopic response of polymer solutions and melts in flow [4,7,8]. Moreover, a substantial amount of our current understanding of macromolecular behavior has been provided by spectroscopic methods such as nuclear magnetic resonance (NMR) [9] and neutron spin echo spectroscopy [10], where the latter technique has been particularly useful in studying densely entangled and glassy systems. Together, these approaches have provided tremendous insight into the properties of polymer solutions and melts. An alternative and particularly powerful method for studying non-equilibrium properties of polymers involves direct observation of single molecules. In this review, we define the term molecular rheology as the use of experimental and computational molecular-based methods to directly observe and study the dynamics of polymers under equilibrium and nonequilibrium conditions. In the context of experimental molecular rheology, these methods generally rely on single molecule fluorescence microscopy (SMFM) to observe the dynamics of fluorescently labeled polymers such as DNA in flow. 
Single polymer dynamics offers the ability to directly observe the molecular conformations of polymer chains near equilibrium or in strong flows [11,12], thereby providing a window into viewing non-equilibrium chain conformations that ultimately give rise to bulk-level properties such as stress and viscosity. The modern field of single polymer dynamics began in earnest over 20 years ago with the development of new methods for directly visualizing single DNA molecules [13,14]. In the mid-1990's, advances in fluorescence imaging and low-light level detection were complemented by new methods for precise fabrication of microfluidic devices [15] and optical tweezing for manipulating single DNA [16]. Together, these efforts allowed for pioneering experiments on single DNA dynamics in well-defined flows and conditions, including DNA relaxation from high stretch [17] and direct observation of the tube-like motion of DNA in a concentrated polymer solution [18]. Fortuitously, around the same time, the development of cutting-edge methods in optical microscopy and DNA manipulation was complemented by advances in theoretical modeling of polymer elasticity [19] and coarse-grained polymer models for computer simulations [20,21]. The confluence of these advances in related fields, including computation, theory, and experiment, acted in synergy to usher in unprecedented and fundamentally new methods in molecular-scale analysis of polymer dynamics. Over the last few years, researchers have used these techniques to uncover fundamentally new information regarding polymer chain dynamics in non-equilibrium conditions, including the importance of distributions in polymer conformations, heterogeneous chain dynamics at the single polymer level, and molecular individualism [22]. Today, the vibrant field of single polymer dynamics continues to advance into new territory, extending the direct observation of polymer chain dynamics to architecturally complex polymers, densely entangled solutions, and chemically heterogeneous polymers. In this review article, the current state of the field of single polymer dynamics is explored with a particular emphasis on the physical properties of model polymers and new directions in the field. For discussions on related topics, I refer the reader to recent reviews on microfluidic and nanofluidic devices for performing single polymer studies [23,24], DNA dynamics under confinement [25], and the electrophoretic motion of DNA [26]. In this article, I focus specifically on the hydrodynamics and non-equilibrium behavior of single polymers in dilute, semi-dilute, and entangled solutions. The review article is organized as follows. In Section II, the physical properties of DNA as a model polymer are discussed. The framework of successive fine graining for modeling polymer chain dynamics is further discussed. In Section V, single chain dynamics in semi-dilute unentangled and entangled solutions is discussed, along with observation of elastic instabilities and shear banding in DNA solutions. In Section VI, recent work in extending single polymer dynamics to polymers with complex molecular architectures such as combs and bottlebrush polymers is discussed. Finally, the review article concludes in Section VII with an evaluation of recent progress in the field and perspectives for future work.
Although DNA is an interesting and biologically relevant polymer, it was selected for single polymer dynamics because of several inherent properties. First, DNA is water-soluble and can be studied in aqueous buffered solutions, which are generally compatible with microfluidic devices fabricated from poly(dimethylsiloxane) (PDMS) using standard methods in soft lithography. Moreover, a wide variety of fluorescent dyes has been developed for imaging applications in molecular and cellular biology, and these experiments are commonly carried out in aqueous solutions. From this view, single polymer studies of DNA have benefited from these efforts in developing bright and photostable fluorescent dyes and by further leveraging strategies for minimizing photobleaching in aqueous solutions. Second, DNA is a biological polymer and can be prepared as a perfectly monodisperse polymer samples using polymerase chain reaction (PCR) [27] or by extracting and purifying genomic DNA from viruses or microorganisms [28]. Monodisperse polymer samples greatly simplify data analysis and interpretation of physical properties. Third, DNA can be routinely prepared with extremely high molecular weights, thereby resulting in polymer contour lengths L larger than the diffraction limit of visible light (≈ 300 nm). For example, bacteriophage lambda DNA (48,502 bp) is a common DNA molecule used for single polymer dynamics with a natural (unlabeled) crystallographic contour length L = 16.3 µm. Such large contour lengths enable the direct observation of polymer chain conformation dynamics in flow using diffraction-limited fluorescence imaging. Moreover, λ-DNA is commercially available, which circumvents the need for individual polymer physics laboratories to prepare DNA in-house using biochemical or molecular biology techniques. Fourth, the physical properties of DNA are fairly well understood, which enables complementary quantitative modeling and simula-tion of experimental data. Finally, DNA can be often prepared in linear or ring topologies because many forms of genomic DNA occur in naturally circular form [28]. The ability to prepare DNA as a linear macromolecule or as a ring polymer allows for precise investigation of the effect of topology on dynamics [29]. A. Persistence length, effective width, and monomer aspect ratio DNA is a negatively charged semi-flexible polymer with a fairly large persistence length compared to most synthetic polymers [19]. In this review, I use the term semi-flexible to denote a polymer that is described by the wormlike chain or Kratky-Porod model [30]. As noted below, the elasticity of wormlike chains is qualitatively different compared to flexible polymers, especially in the low force regime. Natural B-form DNA (0.34 nm/bp, 10.5 bp/helix turn) has a persistence length of l p ≈ 50 nm in moderate ionic strength conditions consisting of at least 10 mM monovalent salt [31]. At low ionic strengths (< 1 mM monovalent salt), l p can increase to over 200 nm due to the unusually high linear charge density of DNA [19,32]. Nevertheless, most single polymer dynamics experiments are performed under reasonable ionic strength conditions with Debye lengths l D ≈ 1-2 nm, conditions under which DNA essentially behaves as a neutral polymer under moderate or high ionic strength conditions. The generally accepted value of the persistence length for unlabeled DNA is l p = 53 nm [33]. A further consideration is the effect of the fluorescent dye on the persistence length l p of DNA. 
A broad class of nucleic acid dyes such as the cyanine dimer or TOTO family of dyes (TOTO-1, YOYO-1) are known to intercalate along the backbone of DNA, which is thought to change local structure by slightly unwinding the DNA double helix. The precise effect of intercalating dyes on the persistence length of DNA has been widely debated, with some recent atomic force microscopy (AFM) experiments suggesting that the action of YOYO-1 does not appreciably change the persistence length of DNA upon labeling [34]. For the purposes of this review, the persistence length is taken as l_p = 53 nm and the Kuhn length b = 2l_p = 106 nm, such that λ-DNA contains approximately N = L/b ≈ 154 Kuhn steps. The effective width w of DNA can be envisioned as arising from electrostatic and steric interactions along the DNA backbone, and these interactions play a role in determining the static properties of DNA such as the radius of gyration (R_g) and excluded volume (EV) interactions. It should be emphasized that the effective width w is different from the hydrodynamic diameter d of DNA, the latter of which is generally smaller than w (such that d ≈ 2 nm) and is important in modeling hydrodynamic friction, chain dynamics, and hydrodynamic interactions (HI), as discussed below. In any event, the bare width of the double helix can be roughly estimated from bond sizes to yield an approximate width of 2 nm. However, DNA is a charged polymer, and w is also dependent on the ionic strength of the solution [35]. Calculations by Stigter that considered the second virial coefficient of stiff charged rods predict an effective width w ≈ 4-5 nm under conditions of ≈ 150 mM monovalent salt [36]. The monomer aspect ratio b/w provides a quantitative measure of monomer stiffness or anisotropy. Using the Kuhn length b = 106 nm and an effective width w = 4 nm, DNA has a monomer aspect ratio b/w ≈ 25 under moderate salt concentrations around 150 mM monovalent salt. On the other hand, most synthetic flexible polymers have much smaller monomer aspect ratios such that b/w ≈ O(1). As a comparison, single stranded DNA (ssDNA) has a persistence length l_p ≈ 1.5 nm under moderate salt conditions (150 mM Na+) [37] and a bare, non-electrostatic persistence length of 0.62 nm [38]. Limited experimental data exist on the effective width of ssDNA, but magnetic tweezing experiments on ssDNA elasticity suggest that w is relatively independent of salt for ssDNA [39]. A reasonable assumption is an effective width w ≈ 1.0 nm [35] for ssDNA, which yields a monomer aspect ratio b/w ≈ 2-3 for ssDNA.

B. Excluded volume interactions & solvent quality

In order to understand the non-equilibrium dynamics of single polymers such as double stranded DNA or single stranded DNA, it is important to consider key physical phenomena such as excluded volume (EV) interactions. Blob theories and scaling arguments are useful in revealing the underlying physics of polymer solutions and melts [40,41]. For ease of analysis, we often think about polymer chain behavior in the limits of a given property such as solvent quality (or temperature) and/or polymer concentration. For example, the average end-to-end distance R for a long flexible polymer chain scales as R ∼ N^0.5 in a theta solvent and R ∼ N^0.59 in an athermal solvent, where N is the number of Kuhn steps.
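To make these characteristic numbers concrete, the short Python sketch below collects the representative parameter values quoted above (contour length, persistence length, Kuhn length, and effective width for dsDNA and ssDNA) and computes the number of Kuhn steps and monomer aspect ratios; the specific values are simply the estimates discussed in the text, not new measurements.

# Back-of-the-envelope DNA parameters used throughout this review (values from the text).
L_lambda = 16.3e-6      # contour length of unlabeled lambda-DNA (m)
lp_ds = 53e-9           # dsDNA persistence length (m)
b_ds = 2 * lp_ds        # dsDNA Kuhn length (m), b = 2*l_p
w_ds = 4e-9             # dsDNA effective width at ~150 mM monovalent salt (m)

lp_ss = 1.5e-9          # ssDNA persistence length at moderate salt (m)
b_ss = 2 * lp_ss        # ssDNA Kuhn length (m)
w_ss = 1.0e-9           # assumed ssDNA effective width (m)

N_kuhn = L_lambda / b_ds                                    # number of Kuhn steps in lambda-DNA
print(f"lambda-DNA Kuhn steps: N = {N_kuhn:.0f}")           # ~154
print(f"dsDNA monomer aspect ratio: b/w = {b_ds / w_ds:.0f}")   # ~25
print(f"ssDNA monomer aspect ratio: b/w = {b_ss / w_ss:.0f}")   # ~3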
A theta solvent is defined such that two-body interactions between monomers are negligible, which occurs when the attractive interactions between monomer and solvent exactly cancel repulsive interactions between monomer-monomer pairs [41]. In a theta solvent, a polymer chain exhibits ideal chain conformations such that R ∼ N^0.5. Athermal solvents generally refer to the high-temperature limit, wherein monomer-monomer repulsions dominate and excluded volume interactions are governed by hard-core repulsions, such that the Mayer f-function has a contribution only from hard-core repulsions in calculating the excluded volume v [41]. In reality, polymer chains often exist in good solvent conditions, which occur in the transition region between a theta solvent and an athermal solvent. For many years, there was major confusion surrounding the description of DNA in aqueous solution due to the complex influence of solvent quality, chain flexibility, and polymer molecular weight on the static and dynamic scaling properties. In the following sections, we review recent progress in elucidating these phenomena for DNA as it pertains to single polymer dynamics.

1. Theta temperature T_θ, chain interaction parameter z, and hydrodynamic radius R_H

Double stranded DNA is a complex polymer to model due to the influence of chain flexibility, molecular weight, and solvent quality in determining scaling properties. As discussed above, blob theories are instructive in revealing the underlying physics of polymer chains [41], but many blob theories are derived by considering either the effects of polymer concentration or temperature, but not necessarily both in the same scaling relation. In 2012, Prakash and coworkers pointed out the need to consider the double cross-over behavior for static and dynamical scaling properties of polymer solutions in the semi-dilute solution regime, wherein polymer properties are given by power-law scaling relations as a function of scaled concentration and solvent quality [42]. The chain interaction parameter z effectively captures the influence of both temperature T and polymer molecular weight M on the behavior of polymer solutions in the region between theta solvents and athermal solvents:

z = k (1 − T_θ/T) √M    (1)

where k is a numerical prefactor that depends on chemistry. In theory, the chain interaction parameter z can be extremely useful in modeling DNA solutions, but in order to make this relation quantitative and practical, the prefactor k and the theta temperature T_θ need to be determined. In 2014, Prakash and coworkers performed bulk rheological experiments and light scattering experiments on a series of linear DNA molecules ranging in size from 2.9 kbp to 289 kbp [43]. Static light scattering experiments were used to determine the theta temperature T_θ of DNA in aqueous solutions containing monovalent salt, and it was found that T_θ = 14.7 ± 0.5 °C. At the theta temperature T = T_θ, the second virial coefficient A_2 is zero. Interestingly, these authors further showed that the second virial coefficient A_2 is a universal function of the chain interaction parameter z in the good solvent regime when suitably normalized [41,43]. Moreover, Prakash and coworkers showed that the polymer contribution to the zero shear viscosity η_p,0 obeys the expected power-law scaling with polymer concentration in the semi-dilute unentangled regime such that η_p,0 ∼ (c/c*)^2 at T = T_θ, where c* is the overlap concentration (Figure 1a).
Dynamic light scattering (DLS) experiments were further used to determine the hydrodynamic radii R_H for these monodisperse DNA polymers in dilute solution (c/c* = 0.1) [43]. First, the authors determined that the hydrodynamic radius R_H for DNA was independent of salt concentration for c_s > 10 mM monovalent salt, which ensures that charges along the DNA backbone are effectively screened and that putative polyelectrolyte effects are absent for these conditions. Second, the authors found the expected power law scaling of the hydrodynamic radius at T = T_θ such that R_H^θ ∼ M^0.5. At this point, the significance of the chain interaction parameter z should be noted; any equilibrium property for a polymer-solvent system can be given as a universal value when plotted using the same value of z in the crossover region between theta and athermal solvents [43]. Therefore, determination of the value of k is essential. In order to determine the value of k for DNA, Prakash and coworkers measured the swelling ratio α_H = R_H/R_H^θ from DLS experiments. It is known that the swelling ratio can be expressed in an expansion such that α_H = (1 + az + bz^2 + cz^3)^(m/2), where a, b, c, m are constants [45]. In brief, the value of k was determined by quantitatively matching the swelling ratio α_H between experiments and BD simulations that give rise to the same degree of swelling in good solvents [43]. Remarkably, the swelling ratio for DNA was found to collapse onto a universal master curve when plotted as a function of the chain interaction parameter z across a wide range of DNA molecular weights (Figure 1b). The numerical value of k was determined to be k = 0.0047 ± 0.0003 (g/mol)^(-1/2), thereby enabling the chain interaction parameter z to be determined as a function of molecular weight and temperature over a wide range of parameters relevant for single polymer experiments [43]. These results further speak to the universal scaling behavior of DNA as a model polymer relative to synthetic polymers.

2. Radius of gyration R_g and overlap concentration c*

The overlap concentration c* = M/[(4π/3) R_g^3 N_A] is a useful characteristic concentration scale for semi-dilute polymer solutions, where N_A is Avogadro's number and R_g is the radius of gyration. A scaled polymer concentration of c/c* = 1 corresponds to a bulk solution concentration of polymer that is equivalent to the concentration of monomer within a polymer coil of size R_g. In order to calculate c* for an arbitrary temperature and molecular weight DNA, the value of R_g must be determined for a particular size DNA and solution temperature. Prakash and coworkers determined the radius of gyration R_g for DNA over a wide range of M and T using the expression R_g = R_g^θ α_g(z), where R_g^θ = L/√(6N) is the radius of gyration under theta conditions and α_g(z) is the swelling ratio for the radius of gyration as a function of the chain interaction parameter z. With knowledge of z, the radius of gyration swelling ratio α_g(z) = (1 + a'z + b'z^2 + c'z^3)^(m'/2) can be determined at a given M and T, where the constants a', b', c', and m' have been determined using BD simulations using a delta function potential for the excluded volume interactions [45]. Therefore, with knowledge of the solvent quality z, the radius of gyration R_g and hence the overlap concentration c* can be determined at a given temperature and molecular weight for DNA [43].
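As a rough illustration of how these relations are used in practice, the sketch below evaluates the chain interaction parameter z for λ-DNA at room temperature using the measured values k = 0.0047 (g/mol)^(-1/2) and T_θ = 14.7 °C, and then estimates the overlap concentration c* from an assumed radius of gyration. The molecular weight conversion (≈ 650 g/mol per base pair) and the value R_g ≈ 0.7 µm are order-of-magnitude assumptions for illustration only.

import math

# Solvent quality (chain interaction parameter) for lambda-DNA at T = 22 C.
k = 0.0047                  # (g/mol)^(-1/2), from Prakash and coworkers [43]
T_theta = 273.15 + 14.7     # theta temperature (K)
T = 273.15 + 22.0           # solution temperature (K)
M = 48502 * 650.0           # approximate molar mass of lambda-DNA (g/mol), ~650 g/mol per bp (assumed)

z = k * (1.0 - T_theta / T) * math.sqrt(M)
print(f"chain interaction parameter z ~ {z:.2f}")   # order 0.5-1 at room temperature

# Overlap concentration c* = M / [(4*pi/3) * Rg^3 * N_A], with an assumed Rg for lambda-DNA.
N_A = 6.022e23              # Avogadro's number (1/mol)
Rg = 0.7e-4                 # assumed radius of gyration (cm), ~0.7 micrometers
c_star = M / ((4.0 * math.pi / 3.0) * Rg**3 * N_A)  # g/cm^3
print(f"overlap concentration c* ~ {c_star * 1e6:.0f} micrograms/mL")   # tens of micrograms/mL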
Clearly, systematic experiments on the bulk rheology and static and dynamic properties of DNA, combined with complementary BD simulations, have enabled an extremely useful quantitative understanding of the physical properties of DNA that can be leveraged for single polymer dynamics.

3. Excluded volume exponent ν and static chain properties

The average size of a polymer chain can be determined using static measurements such as light scattering (thereby leading to R_g) or by dynamic measurements based on diffusion (thereby leading to R_H) [40,47]. It has long been known that these two different measures of average polymer size exhibit different power-law scalings in the limit of large molecular weights [40], such that R_g ∼ N^0.59 and R_H ∼ N^0.57 under good solvent conditions. First, we consider the power-law scaling behavior of the static properties of DNA chains associated with the root-mean-square end-to-end distance:

R_E = ⟨(r_N − r_1)^2⟩^(1/2)

where the brackets ⟨·⟩ represent an average over an ensemble and the vectors r_N and r_1 represent the ends of a polymer chain. Because we are considering static properties, the power-law scaling behavior for R_E should be equivalent to the scaling behavior of R_g. In 2010, Clisby performed high precision calculations of static chain properties to show that the ratio R_E^2/R_g^2 exhibits a universal value R_E^2/R_g^2 ≈ 6.254 in the limit of large molecular weight for self-avoiding chains, together with a Flory excluded volume exponent of ν = 0.587597(7) [48]. In 2013, Dorfman and coworkers investigated the static and dynamic equilibrium properties of DNA and ssDNA using a Monte Carlo modeling approach based on the pruned-enriched Rosenbluth method (PERM) [35].

[Figure 2: Apparent excluded volume exponent ν for static chain properties calculated by the renormalization group theory of Chen and Noolandi [46], as reported in the recent work of Dorfman and coworkers [35]. Here, ν = 1 corresponds to rodlike behavior, ν = 0.5 to a Gaussian chain, and ν = 0.588 to a swollen chain. Results are shown for five different values of the monomer aspect ratio b/w (from top to bottom): 1.0, 4.5 (corresponding to ssDNA), 25 (corresponding to DNA), 316, 3160, and 0 (no excluded volume). Inset: PERM results for the excess free energy per Kuhn length due to EV interactions in a dilute solution of DNA. Reproduced with permission from Ref. [35].]

In this work, the authors used a discrete wormlike chain model (DWLC), which is a coarse-grained model for polymers that incorporates a series of inextensible bonds of length a linked with a bending potential. Excluded volume interactions are included with a hard-core repulsive potential, which essentially amounts to athermal solvent conditions and does not consider the effect of solvent quality. Hydrodynamic interactions were further included using an Oseen tensor and a bead hydrodynamic radius d. Taken together, the model parameters for the DWLC include the Kuhn step size b, an effective width w, a hydrodynamic diameter d, and a bond length a, where it was assumed that d = a [35]. Using this parametrized model for DNA, PERM calculations were used to investigate the power-law scaling behavior for the average end-to-end distance R for a set of parameters corresponding to DNA over a limited range of molecular weights. In order to expand the range of parameters and molecular weights under investigation, Dorfman and coworkers further used the renormalization group (RG) theory of Chen and Noolandi [46].
In this way, these authors determined an apparent excluded volume exponent ν, where ν ≡ d ln R_E/d ln L, as a function of polymer molecular weight N = L/b and the monomer aspect ratio b/w. Interestingly, these results provided a quantitative description of the effect of chain flexibility on the equilibrium structural properties of DNA (Figure 2). The results clearly show that the apparent excluded volume exponent ν is a sensitive function of the number of Kuhn steps N and the monomer aspect ratio b/w. For double stranded DNA with b/w ≈ 25, the RG theory predicts a value of ν ≈ 0.546 for λ-DNA. Clearly, the properties of λ-DNA (and most common molecular weight DNA molecules used for single molecule studies) appear to lie in the transition regime between an ideal Gaussian chain (without dominant EV interactions) and a fully swollen flexible chain (with dominant EV interactions). However, it should be emphasized that these results only considered hard-core repulsive interactions between monomers in the context of excluded volume interactions, which corresponds to the limit of athermal solvents in the long-chain limit. Therefore, these calculations do not consider the effect of arbitrary solvent quality on the static or dynamic properties of DNA or ssDNA in the crossover regime. In other words, these results show that the intermediate value of the apparent excluded volume exponent ν is dictated by the flexibility (or semi-flexibility) of the DNA polymer chain, but approaching the limit of a theta solvent would only serve to further decrease the value of ν for DNA.

4. Effective excluded volume exponent ν_eff and dynamic chain properties

In addition to static measures of average polymer size such as R_E and R_g, the hydrodynamic radius R_H is an additional measure of coil dimensions that can be determined by dynamic light scattering experiments [40,47,49,50]. As previously discussed, R_H and R_g exhibit different power-law scaling relations in the limit of high molecular weight, essentially because these different physical quantities represent distinct averages (over different moments) of fluctuating variables such as local distances between monomers along a polymer chain. A full discussion of this phenomenon is beyond the scope of this review article, though we provide a brief overview here in order to interpret single molecule diffusion experiments. In brief, Sunthar and Prakash [47] used an Edwards continuum model [51] to show that differences in the crossovers between the swelling of the hydrodynamic radius α_H and radius of gyration α_g arise due to dynamic correlations to the diffusivity. These differences are important because dynamic correlations are ignored when determining the hydrodynamic radius using the Kirkwood expression [47,50]:

1/R_H* = (1/N^2) Σ_{i≠j} ⟨1/r_ij⟩

where r_ij is the distance between two monomers i and j. In brief, the Kirkwood value of the hydrodynamic radius R_H* can be used to calculate a short-time approximation to the chain diffusivity, whereas the Stokes-Einstein relation provides the long-time value of the chain diffusivity:

D = k_B T/(6π η_s R_H)

where R_H is the hydrodynamic radius determined from long-time diffusion measurements and η_s is the solvent viscosity. The Stokes-Einstein value of the chain diffusivity is commonly determined from the mean-squared displacement of the polymer chain center-of-mass in single molecule experiments, as discussed below.
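To illustrate how the Kirkwood estimate is applied in practice, the sketch below generates an ideal (random-walk) bead chain and evaluates R_H* from the inverse inter-bead distances; this is purely a toy configuration average with assumed λ-DNA-like discretization, not a reproduction of the simulations discussed above.

import numpy as np

rng = np.random.default_rng(0)

def kirkwood_rh(positions):
    """Kirkwood estimate of the hydrodynamic radius: 1/R_H* = (1/N^2) sum_{i!=j} <1/r_ij>."""
    n = len(positions)
    inv_sum = 0.0
    for i in range(n):
        r = np.linalg.norm(positions[i] - positions[i + 1:], axis=1)
        inv_sum += 2.0 * np.sum(1.0 / r)      # count both (i, j) and (j, i)
    return n**2 / inv_sum

# Toy example: freely-jointed (ideal) chain with N Kuhn steps of length b.
N, b = 154, 106e-9                            # lambda-DNA-like discretization (m)
steps = rng.normal(size=(N, 3))
steps *= b / np.linalg.norm(steps, axis=1, keepdims=True)
beads = np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

rh_single = kirkwood_rh(beads)
print(f"Kirkwood R_H* for one configuration: {rh_single * 1e9:.0f} nm")
# In practice, an ensemble average over many chain configurations is required.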
In any event, Sunthar and Prakash showed that the swelling ratio determined for the Kirkwood expression for the hydrodynamic radius is nearly identical to the swelling ratio for the radius of gyration, which suggests that both of these pertain to static measures of chain size [47]. These results are consistent with RG theory results from Douglas and Freed for chains with EV interactions [49]. However, the long-time diffusivity (calculated using R_H) requires consideration of dynamic correlations to explain the power-law scaling and crossover behavior. In any event, the ratio R_g/R_H* is known to exhibit a universal value for self-avoiding walks in the limit of long chains, such that R_g/R_H* = 1.5803940(45) as determined by recent high-precision Monte Carlo simulations by Clisby and Dünweg [50]. Single molecule experiments have been used to directly measure chain diffusivity by fluorescently labeling DNA molecules and tracking their diffusive motion over time at thermal equilibrium. In these experiments, the mean-squared displacement of the polymer center-of-mass is determined as a function of time for an ensemble of molecules, and these data are used to extract a diffusion coefficient. Based on the discussion above, these experiments yield the long-time diffusion coefficient D from the Stokes-Einstein relation, thereby yielding the hydrodynamic radius R_H, which is distinct from the hydrodynamic radius R_H* calculated from the Kirkwood approximation. In 1996, the diffusion coefficients D for a series of variable molecular weight linear DNA molecules were determined using fluorescence microscopy [52], and it was found that the apparent excluded volume exponent ν_app = 0.611 ± 0.016, where D ∼ L^(−ν_app). However, uncertainty in the actual molecular weights of these DNA molecules was later found to increase the uncertainty in these results [53], which unfortunately added to the confusion surrounding the equilibrium properties of DNA. In 2006, these single molecule diffusion experiments were repeated on linear DNA, and it was found that ν_app = 0.571 ± 0.014 [53]. Recently, Prakash and coworkers applied the concepts of dynamical scaling in the crossover region between theta and athermal solvents [42] to bulk rheological experiments on semi-dilute unentangled DNA solutions [43]. In particular, it is known that the polymer contribution to the zero-shear viscosity η_p,0 should depend on both solvent quality and polymer concentration in the cross-over region in semi-dilute solutions, such that η_p,0/η_s = f(z, c/c*) [42]. Using scaling arguments, it can be shown that

η_p,0/η_s ∼ (c/c*)^(1/(3ν_eff − 1))

where ν_eff is an effective excluded volume exponent that depends on solvent quality in the cross-over region [42]. The longest relaxation time extracted from these measurements is in good agreement with the T4 data and within the error bounds of the BD simulations at z ≈ 1 [54]. As an aside, it is possible to define an alternative longest relaxation time based on zero-shear viscosity τ_η from bulk rheological experiments that can be directly compared to single molecule relaxation times [20,54,55]. In sum, these experiments and simulations show that physical properties such as longest relaxation time and zero-shear viscosity obey power laws in the cross-over regions between theta and athermal solvents. Taken together, this work has elucidated the role of the solvent quality and concentration on the equilibrium properties of DNA.

5. Thermal blobs

The thermal blob size ξ_T is the length scale at which the monomer interaction energy is comparable to thermal energy k_B T.
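As a quick consistency check on the concentration scaling quoted above, the snippet below evaluates the semi-dilute unentangled viscosity exponent 1/(3ν_eff − 1), assuming the standard blob-scaling form, at the theta-solvent and good-solvent limits; it recovers the quadratic scaling η_p,0 ∼ (c/c*)^2 at T = T_θ quoted earlier.

# Concentration-scaling exponent for the zero-shear viscosity in semi-dilute
# unentangled solutions: eta_p0/eta_s ~ (c/c*)^(1/(3*nu_eff - 1)).
def viscosity_exponent(nu_eff):
    return 1.0 / (3.0 * nu_eff - 1.0)

for nu_eff in (0.5, 0.546, 0.588):
    print(f"nu_eff = {nu_eff:.3f} -> exponent = {viscosity_exponent(nu_eff):.2f}")
# nu_eff = 0.5 (theta solvent) gives exponent 2, consistent with eta_p0 ~ (c/c*)^2 at T_theta.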
On length scales smaller than ξ_T, excluded volume interactions are weaker than thermal energy and the chains follow ideal statistics on these length scales. The thermal blob size is given by ξ_T ≡ cb^4/v(T), where c is a numerical constant of order unity and v(T) is the excluded volume, which is a function of temperature [41]. First, we consider the asymptotic limit of an athermal solvent, where only hard-core repulsions contribute to monomer interactions and the properties are independent of temperature. In an athermal solvent, the excluded volume for polymers with anisotropic monomers is given by v_a = b^2 w, such that the excluded volume is much larger than the monomer occupied volume v_o = bw^2 because b ≫ w [41]. In any case, the concept of a thermal blob generally refers to scaling arguments and is not generally taken as a quantitative property. For this reason, estimates of the thermal blob size can vary widely depending on how the prefactor c is considered [35,56]. Nevertheless, it is instructive to consider a specific estimate. Moving away from the asymptotic limit of athermal solvents, thermal blobs have also been considered in the cross-over regime of intermediate solvent quality [42]. It can be shown that the thermal blob size in the cross-over region is estimated by:

ξ_T ≈ (cb^4/v_a) (k√M/z)

where z is the chain interaction parameter given by Eq. (1). Moreover, the temperature dependence of the excluded volume is given by:

v(T) = v_a (1 − T_θ/T)

where v_a is the excluded volume in an athermal solvent [41]. Using this approach, Prakash and coworkers estimated the molecular weight contained in a thermal blob M_blob as a function of temperature (to within a numerical prefactor) for DNA in the vicinity of the theta temperature [43]. Moreover, these results suggest that the hydrodynamic radius R_H for DNA scales with a molecular weight power-law given by ideal chain statistics for molecular weights M < M_blob (such that R_H ∼ M^0.5), followed by the expected molecular weight power-law scaling for self-avoiding chains for M > M_blob (such that R_H ∼ M^0.59) [43]. Finally, it should be noted that an alternative theoretical framework can also be used to describe these crossover effects.

C. Chain elasticity

The entropic elasticity of DNA is well described by the Marko-Siggia interpolation formula for the wormlike chain:

f l_p/(k_B T) = x/L + 1/[4(1 − x/L)^2] − 1/4

where f is the applied force, x is the end-to-end extension of a DNA molecule, and k_B T is thermal energy. The Marko-Siggia formula generally provides an excellent fit to single molecule elasticity data describing the force-extension of DNA over a wide range of extensions [58], including the low force regime and up to fractional extensions x/L ≈ 0.97, whereupon the stretching force is large enough to disrupt base pairing and base stacking (≈ 300 k_B T/l_p). The development of a simple analytic expression for the entropic elasticity of DNA enabled the direct simulation of DNA dynamics in flow using coarse-grained bead-spring models and Brownian dynamics simulations [59,60]. It is apparent from this interpolation formula that the low force elasticity of DNA is linear. In fact, the low force linearity is consistent with flexible polymers or freely-jointed polymers described by Gaussian coil statistics, such that a Gaussian chain yields a linear entropic restoring force in the end-to-end extension [1,41].
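As a minimal illustration, the function below implements the Marko-Siggia interpolation formula and checks its low-force behavior numerically; the parameter values are the persistence length and contour length for λ-DNA quoted earlier, and the 3/2 slope at small extension is the expected linear (Hookean) limit of the wormlike chain.

def marko_siggia_force(x_over_L, lp=53e-9, kT=4.11e-21):
    """Wormlike chain (Marko-Siggia) interpolation: force (N) at fractional extension x/L."""
    r = x_over_L
    return (kT / lp) * (r + 1.0 / (4.0 * (1.0 - r) ** 2) - 0.25)

# Low-force limit: f*lp/kT -> (3/2)*(x/L), i.e., a linear entropic spring.
for r in (0.01, 0.05, 0.5, 0.9):
    f = marko_siggia_force(r)
    print(f"x/L = {r:4.2f}: f = {f * 1e12:7.3f} pN, (f*lp/kT)/(x/L) = {f * 53e-9 / 4.11e-21 / r:.2f}")
# The ratio approaches 1.5 as x/L -> 0 and diverges as x/L -> 1 (finite extensibility).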
In other words, an ideal chain described by random walk statistics or theta-solvent conditions yields a low-force linear elasticity:

f ≈ (3k_B T/(Nb^2)) x = (3k_B T/b)(x/L)

Several years ago, however, Pincus considered the effect of excluded volume (EV) interactions on the low force elasticity of flexible polymers [61], and his analytical results showed that real polymer chains with EV interactions exhibit a non-linear low force elasticity:

f ≈ (k_B T/b)(x/L)^(3/2)

where an EV exponent of ν = 3/5 corresponding to good solvents has been assumed in the derivation [41]. The key idea in the Pincus analysis is that the applied force generates a tensile screening length, known as a tensile blob or a Pincus blob ξ_P:

ξ_P ≈ k_B T/f

such that long-range EV interactions are screened for distances greater than ξ_P. In other words, a polymer chain under tension will break up into a series of tensile blobs of size ξ_P; within each tensile blob, chain conformation is described by a self-avoiding walk in good solvent conditions such that ξ_P = bg^(3/5), where g is the number of monomers in a tensile blob. For length scales larger than ξ_P, chains are extended into an array of tensile blobs. Using this framework, Pincus blobs form for applied forces greater than f ≈ k_B T/R_F, where the Pincus blob size is on the order of the coil dimension R_F. For applied tensions in this regime, single molecule pulling experiments on flexible polymers again showed a non-linear low force elasticity f ∼ (x/L)^(3/2), which is clearly different from the entropic force response of an ideal chain f ∼ x/L [62]. Moreover, the initial experiments on ssDNA elasticity were complemented with a detailed scaling analysis [39]. Recently, the elasticity of ssDNA was further elucidated by developing a new model incorporating intrapolymer electrostatic repulsion, which generates a salt-dependent internal tension [64]. This model was inspired by a mean-field approach, and it was shown that the internal tension can be related to the linear charge density along a charged polymer backbone. This work shows that mesoscopic polymer conformation emerges from microscopic structure. A schematic of the elasticity of a single polymer chain as a function of applied force is shown in Figure 3. In Regime I, a polymer chain is weakly perturbed from equilibrium at very low forces (f < k_B T/R_F), where the Flory radius of the chain is R_F = L^(3/5) l_p^(1/5) w^(1/5) [41]. Here, the thermal blob size ξ_T ≈ b^4/v is the length scale at which the monomer interaction energy is comparable to thermal energy k_B T. Upon increasing the applied force into Regime II, the chain begins to extend with a non-linear low force elasticity wherein x/L ∼ f^(2/3) due to excluded volume interactions. At higher forces, the self-avoidance effects weaken and the behavior transitions to a linear elasticity region (Regime III), where the polymer acts as an ideal chain. The transition between Regimes II and III occurs when the thermal blob size is on the order of the Pincus blob size, ξ_T = ξ_P. Regime IV corresponds to high forces and large fractional extensions x/L ≥ 1/3, wherein the finite extensibility of the chain plays a role in the elasticity. The physical properties of polymer chains with different chemistries dictate the dominance and transition between the different regions in the force-extension curve in Figure 3. Interestingly, the linear elastic behavior in Regime III was not observed for ssDNA [38], but it was observed for PEG [62].
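One step worth making explicit is where the Regime II/III boundary sits. Equating the Pincus and thermal blob sizes, and assuming an athermal solvent with v = b^2 w as above, gives

ξ_P = ξ_T  ⟹  k_B T/f ≈ b^4/v = b^2/w  ⟹  f ≈ k_B T w/b^2

which is the upper bound of the Pincus-blob force window used for DNA in the discussion that follows.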
ssDNA has isotropic or nearly spherical monomers such that the excluded volume v ≈ b^3 and the monomer aspect ratio b/w is near unity, which means that the thermal blob size is essentially equal to the Kuhn step size (ξ_T = b). This effectively shrinks Regime III to a vanishingly small size, and the non-linear low force elastic region in Regime II dominates the low-force elastic behavior. On the other hand, PEG has slightly more anisotropic monomers compared to ssDNA, with a PEG monomer aspect ratio b/w ≈ 5 [62], which results in a non-negligible Regime III for slightly less flexible polymers. How does this picture change for double stranded DNA? As previously discussed, DNA has a large persistence length and a correspondingly large monomer aspect ratio b/w ≈ 25. For polymers such as DNA, Regime III extends further into the low force region such that the non-linear elastic response in Regime II is essentially non-existent. In other words, due to the large monomer aspect ratio for DNA, the entire low-force elasticity regime is essentially linear. How do these concepts relate to linear and non-linear rheology for DNA? First, consider the limit of large forces, which corresponds to non-linear rheological experiments. For any polymer subjected to high stretching forces f > k_B T/ξ_T, Pincus blobs are smaller than thermal blobs, which suggests that EV interactions are screened at all scales and the elastic force relation is linear (Figure 3). However, DNA has a large Kuhn step size, and certainly no Pincus blobs will form when the Kuhn step size is larger than the Pincus blob size such that b > ξ_P. A rough estimate for the force at which the Pincus blob size equals the DNA Kuhn step size b is f ≈ 0.04 pN, which is a relatively small force. Therefore, over the practical force range for most non-linear rheological experiments, b > ξ_P for DNA, which necessarily precludes the formation of tensile blobs. In the limit of low forces (corresponding to linear rheological experiments), Pincus blobs only form between applied tensions k_B T/R_F < f < k_B T w/b^2 (assuming an athermal solvent), which is a vanishingly small window for most reasonable size DNA molecules used in single molecule experiments due to the large values of b/w and b. Clearly, the differences in physical properties between DNA and flexible polymers result in major differences in elasticity, which impacts the emergent rheological properties of these materials [56]. The notion that solvent quality can impact the elastic (force-extension) properties of polymer chains was generally known in the polymer physics community, but quantitative chain elasticity models capturing this effect were largely absent from the rheological community until recently. For example, the Warner force relation and the Padé approximant to the inverse Langevin function [1,65] both neglect the role of EV and the non-linear low-force elasticity for flexible polymers in a good solvent. In 2012, Underhill and coworkers postulated a new elastic (spring force) relation for flexible polymers that accounts for solvent quality ranging from theta solvents to good solvents [63]. This force extension relation smoothly interpolates between the non-linear low-force elasticity for flexible polymers in a good solvent (Regime II) and the linear (Regime III) and ultimately non-linear finite extensibility regions (Regime IV) of the force-extension diagram, as shown in Figure 3.
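The numerical estimates quoted in this discussion are straightforward to reproduce; the sketch below evaluates the force at which the Pincus blob size equals the DNA Kuhn step (≈ 0.04 pN) and the Pincus-blob force window k_B T/R_F < f < k_B T w/b^2, using the DNA parameters given earlier and an assumed Flory radius of order 1 µm for a λ-DNA-sized molecule.

kT = 4.11e-21    # thermal energy at room temperature (J)
b = 106e-9       # DNA Kuhn length (m)
w = 4e-9         # DNA effective width (m)
R_F = 1.0e-6     # assumed Flory radius for a lambda-DNA-sized molecule (m), order of magnitude

f_kuhn = kT / b          # force at which the Pincus blob equals the Kuhn step
f_lower = kT / R_F       # lower bound for Pincus blob formation
f_upper = kT * w / b**2  # upper bound (athermal solvent), where xi_P = xi_T

print(f"f (xi_P = b):       {f_kuhn * 1e12:.3f} pN")   # ~0.04 pN, as quoted in the text
print(f"f lower (kT/R_F):   {f_lower * 1e12:.4f} pN")
print(f"f upper (kT*w/b^2): {f_upper * 1e12:.4f} pN")
# The 'window' is inverted (the lower bound exceeds the upper bound), i.e., no Pincus regime for DNA.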
Interestingly, this force extension relation also captures a scale-dependent Regime III, such that polymers with larger aspect ratio monomers b/w tend to exhibit a larger Regime III, consistent with experimental data on single molecule pulling experiments on PEG [62]. The main advantage of this analytic form of the force-extension relation is the ability to use it in coarse-grained bead-spring Brownian dynamics simulations for flexible polymers, where the force-extension relation is used to represent the elasticity of chain sub-segments in a coarse-grained model. This force-extension relation was subsequently used to study the impact of solvent quality on the coil-stretch transition and conformation hysteresis for flexible polymers (without HI) in a good solvent [66]. Recently, the concept of EV-influenced elasticity was further extended to wormlike chains by Li, Schroeder, and Dorfman [67]. Here, a new analytic interpolation formula (called the EV-WLC model) was developed for wormlike chains such as DNA in the presence of excluded volume interactions. Again, this model was found to smoothly interpolate between the relevant regions of the force-extension diagram for wormlike chains; however, the parameters in the EV-WLC interpolation formula were determined by rigorous calculations using Monte Carlo/PERM simulations, rather than phenomenological estimation. In 2016, Saadat and Khomami developed a new force-extension relation for semi-flexible chains by incorporating a bending potential [68]. This force-extension model accurately describes correlations along the backbone of the chain, segmental length, and the elastic behavior of semi-flexible macromolecules in the limit of 1 Kuhn step per spring. Finally, it should be noted that the elastic behavior in the high-force region (Regime IV) qualitatively differs between ssDNA and either freely-jointed chains or wormlike chains. In the limit of high forces, the chain stretch scales as x ∼ (1 − f^(−α)), where α = 0.5 for wormlike chains and α = 1.0 for freely-jointed chains [69]. Moreover, it is known that there is a cross-over between freely-jointed and semi-flexible force-extension behavior at large polymer chain extensions [70]. However, single molecule pulling experiments on ssDNA showed a qualitatively different response, such that x ∼ ln f in the high-force regime [38]. It was found that this unusual behavior arises due to polyelectrolyte or charge effects for flexible polymers in the high-force regime, supported by scaling theory [71] and simulations of the stretching of flexible polyelectrolytes under high force [72,73].

III. DILUTE SOLUTION SINGLE CHAIN DYNAMICS: PRE-2007

It has long been appreciated that DNA can serve as a model polymer for understanding the rheology and dynamics of long chain macromolecules in solution due to monodispersity and molecular weight selectivity [74-76]. In the early to mid-1990's, advances in fluorescence imaging and single molecule manipulation enabled the first direct observations of single DNA dynamics in uniform flow [77], shear flow [78], and extensional flow [79,80]. This molecular-scale information on polymer chain dynamics can be further used to understand the molecular origins of bulk rheological properties. Single DNA dynamics was reviewed in a comprehensive article in 2005 [11], and the main content of this prior review article is not considered here. For the purposes of this review, I will explore a few interesting and perhaps under-appreciated topics in dilute solution single polymer dynamics that could inspire future investigation.
To begin, it is useful to consider a few of the major results together with intriguing and outstanding questions in the area of dilute solution single polymer dynamics.

Relaxation and stretching in uniform flow

The relaxation of single DNA molecules from high stretch was first observed using single molecule techniques in 1994 by Chu and coworkers [17]. Interestingly, these results showed that the longest relaxation time τ scaled with contour length L as τ ∼ L^(1.65±0.10), which suggests that the apparent excluded volume exponent ν_app ≈ 0.55 for DNA. These results agree with our modern understanding of excluded volume interactions and the effect of flexibility in the DNA backbone, as discussed in the prior section. However, subsequent experiments on the diffusion of single DNA molecules yielded an excluded volume exponent of ν_app = 0.611 ± 0.016 [52], which appeared to be in conflict with the longest polymer relaxation time data until these single polymer diffusion experiments were repeated in 2006 with more uniform molecular weight samples, which yielded a value of ν_app = 0.571 ± 0.014. Nevertheless, the disparity in these results led to confusion in the field regarding the influence or role of EV in DNA for many years, though this has now been largely resolved, as discussed above. Following relaxation experiments, Chu and coworkers performed the first single polymer experiments on the stretching dynamics of a tethered chain in a uniform flow [77], a planar extensional flow [79,80], and a simple shear flow [78]. In 1997, Larson showed that the experimental data on uniform flow stretching could be quantitatively described by a simple dumbbell model using the non-linear wormlike chain (Marko-Siggia) force relation and a conformation-dependent hydrodynamic drag [59]. These ideas were some of the first to analyze single polymer dynamics results in a quantitative manner by considering the role of intramolecular hydrodynamic interactions.

Dynamics in extensional flow: molecular individualism

Several interesting phenomena were observed from single polymer dynamics experiments in an extensional flow [79,80]. First, the coil-stretch transition was observed to be extremely sharp when considering only the subset of molecules that reached a steady extension at any value of the dimensionless flow strength called the Weissenberg number Wi = ε̇τ, where ε̇ is the strain rate. In fact, the sharpness of the coil-stretch transition was striking compared to prior bulk rheological measurements based on flow birefringence, which typically average over a large number of molecules that may (or may not) have reached a steady-state extension. Moreover, it was observed that single polymer chains generally adopt a rich set of molecular conformations during transient stretching in extensional flow such as dumbbell, hairpin, and kink shapes [80]. From these experiments emerged the notion of 'molecular individualism', wherein a single polymer molecule may adopt a series of different stretching pathways given the same initial conditions and the same dynamic stretching experiment [22]. These concepts began to show the true value of single polymer experiments in revealing unexpected and rich sets of molecular sub-populations, and further, how these molecular populations served to influence bulk properties such as stress and viscosity.
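The link between the measured relaxation-time scaling quoted above and the apparent excluded volume exponent follows from Zimm-like scaling, τ ∼ R^3 ∼ L^(3ν); the snippet below makes that arithmetic explicit and also shows how a Weissenberg number is formed from a strain rate and a relaxation time (the flow parameters are placeholder values for illustration only).

# Apparent EV exponent from the measured relaxation-time scaling tau ~ L^(3*nu_app).
scaling_exponent = 1.65            # tau ~ L^1.65 from relaxation of single DNA [17]
nu_app = scaling_exponent / 3.0
print(f"nu_app ~ {nu_app:.2f}")    # ~0.55

# Weissenberg number Wi = strain_rate * tau for a hypothetical extensional flow experiment.
tau = 4.0            # longest relaxation time (s), placeholder value
strain_rate = 0.5    # extensional strain rate (1/s), placeholder value
Wi = strain_rate * tau
print(f"Wi = {Wi:.1f}")  # the coil-stretch transition occurs near Wi ~ 0.5 in extensional flow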
In tandem, major progress was being made in the development of coarse-grained Brownian dynamics (BD) simulations of polymer chains [20,21], which provide a direct complement to single polymer experiments. Larson and coworkers performed BD simulations of DNA stretching in extensional flow using a multi-bead-spring polymer model with the Marko-Siggia force relation, albeit in the absence of hydrodynamic interactions and excluded volume interactions [81]. Nevertheless, these simulations provided good agreement with single molecule experiments, including the emergence of different molecular conformations such as folds, kinks, and dumbbells. These simulations were useful in revealing the origin of the heterogeneity in molecular conformations, which essentially arises from variability in the initial polymer conformation and emerges from a balance between conformational macromolecular diffusion and the imposed flow [81]. A related question is the impact of molecular individualism on bulk rheological properties of polymer solutions. On the one hand, it is clear that one needs to be extremely careful in implementing methods such as pre-averaging (for modeling hydrodynamic interactions) or making a priori assumptions regarding an underlying probability distribution for molecular properties. Molecular individualism necessarily broadens distributions in molecular behavior. Moreover, it is theoretically possible that a non-majority molecular conformation may dominate a bulk rheological property such as stress; future experiments that aim to couple bulk rheological measurements with single molecule observations can address these compelling questions.

Dynamics in shear flow and linear mixed flows

Experiments in extensional flow were followed by single polymer dynamics in steady shear flow [78], which provided direct experimental evidence for the relatively weaker stretching dynamics of polymers in a simple shear flow due to the influence of vorticity. Unlike single chain dynamics in extensional flow, polymers do not exhibit a steady-state extension in steady shear flow; rather, single polymer chains undergo repeated end-over-end tumbling events in shear flow [78]. Direct imaging in the flow-gradient plane of shear flow allowed for polymer 'thickness' in the gradient direction to be measured and interpreted in the context of shear viscosity in dilute polymer solutions [82]; these experiments were complemented by BD simulations with HI and EV [83]. Interestingly, the power spectral density of polymer chain extension fluctuations in steady shear suggests that the end-over-end tumbling events are aperiodic, with no preferred frequency for the tumbling cycle [78]. Similar conclusions were drawn from single polymer experiments for tethered chains in shear flow [84]. However, a characteristic periodic motion for single polymers in shear flow (untethered and tethered shear flow) was found by considering the coupling between the advection of the polymer chain in the flow direction and diffusion in the shear gradient direction (Figure 4) [85]. In other words, polymer chain motion appears to be non-periodic when only considering the chain stretch in the flow direction, but quantities such as the polymer orientation angle, which rely on coupled chain dynamics between the gradient direction and flow direction, reveal a characteristic periodic motion in flow.
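To give a concrete flavor of the coarse-grained Brownian dynamics approach discussed above, the sketch below integrates a free-draining Marko-Siggia dumbbell in a planar extensional flow using a simple explicit Euler-Maruyama scheme; the parameter values (drag coefficient, strain rate, time step) are illustrative assumptions only, and the sketch omits hydrodynamic interactions, excluded volume, and the semi-implicit updates used in production simulations.

import numpy as np

rng = np.random.default_rng(1)

# Dumbbell parameters (illustrative values, loosely based on lambda-DNA).
L = 16.3e-6          # contour length per spring (m)
lp = 53e-9           # persistence length (m)
kT = 4.11e-21        # thermal energy (J)
zeta = 1.0e-8        # bead drag coefficient (kg/s), assumed value
eps_dot = 2.0        # extensional strain rate (1/s)
dt = 1.0e-5          # time step (s)
n_steps = 200000

def spring_force(q):
    """Marko-Siggia spring force (N) acting along the connector vector q."""
    r = np.linalg.norm(q)
    frac = min(r / L, 0.99)                     # cap to avoid the singularity at full extension
    f = (kT / lp) * (frac + 1.0 / (4.0 * (1.0 - frac) ** 2) - 0.25)
    return -f * q / r

# Planar extensional flow: vx = eps_dot * x, vy = -eps_dot * y.
grad_v = np.array([[eps_dot, 0.0], [0.0, -eps_dot]])

q = np.array([1.0e-6, 1.0e-6])                  # initial connector vector (m)
for step in range(n_steps):
    drift = grad_v @ q + 2.0 * spring_force(q) / zeta
    noise = np.sqrt(4.0 * kT * dt / zeta) * rng.normal(size=2)
    q = q + drift * dt + noise

print(f"fractional extension |q|/L = {np.linalg.norm(q) / L:.2f}")
# At Wi = eps_dot * tau above ~0.5 the dumbbell stretches well beyond its equilibrium coil size.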
Here, it was found that the power spectral density of the polymer orientation angle θ exhibited a clear peak at a characteristic frequency, and scaling relations were further developed to describe the physics of the characteristic tumbling frequency and cyclic polymer motion in shear flow [85]. Subsequent experiments showed that chain stretching dynamics in shear flow are remarkably similar when time is scaled by the longest relaxation time in either dilute or semi-dilute solutions [87]. These results suggest that dynamics in the semi-dilute regime are qualitatively similar to polymer chain dynamics in steady and transient shear flow in the dilute regime. These results were interpreted by concluding that the increased polymer concentration merely serves to increase the background solution viscosity in semi-dilute solutions in shear flow rather than altering chain dynamics. Interestingly, however, recent experiments and BD simulations have shown that this is not the case for polymer dynamics in extensional flow [54,88], where chain dynamics change significantly in semi-dilute unentangled solutions, as discussed below. The dynamics of single polymers were further studied in linear mixed flows, wherein the degrees of rotational and extensional character are varied between purely extensional flow and purely rotational flow [89,90]. Interestingly, the coil-stretch transition was observed to be sharp for flows with dominant extensional character, and the steady-state polymer extension was observed to collapse onto a universal curve for extension-dominated flows when plotted against a rescaled Wi_eff = Wi√α, where α is the flow-type parameter. Here, the eigenvalue of the velocity gradient tensor sets the scale of the effective strain rate along the extensional axis [91]. In brief, Wi_eff is rescaled by the magnitude of the extensional rate along the extensional axis eigenvector, which yields a universal curve for steady polymer extension for extension-dominated flows. These dynamics were further corroborated by an analytical model based on the finitely extensible Rouse model [92]. Moreover, it was observed that polymer chains align along the extensional eigenvector in extension-dominant mixed flows, while occasionally experiencing a Brownian 'kick' that knocks the polymer chain along the compressional axis, followed by a compression and re-extension event [89,90]. In an additional study, single polymer experiments and BD simulations were used to probe the conformational dynamics of DNA in linear mixed flows with dominant rotational character [93]. Single polymer experiments were performed using a microfluidic four-roll mill [94], which allows for generation of flows with arbitrary flow type between pure shear (α = 0) and purely rotational flow (α = -1.0) for varying Wi. In rotation-dominated flows, it was observed that the polymer trajectory essentially follows an ellipsoid and the tumbling motion is 'vane-like' and approaches simple chain rotation without extension changes as α approaches -1.0.

Hydrodynamic interactions (HI)

It has long been appreciated that intramolecular hydrodynamic interactions (HI) affect the dynamic properties of polymer chains [95-97]. Over the last few decades, advances in computational power have allowed for the simulation of long-chain polymers with intramolecular HI, including efficient Brownian dynamics algorithms such as the Chebyshev approximation to Fixman's method implemented for DNA by de Pablo, Graham, and coworkers [100]. Brownian dynamics simulations allow for the direct calculation of polymer chain dynamics with dominant HI.
On the other hand, several approximate methods were developed to apply polymer kinetic theory to model the effect of HI on polymer chain dynamics [5], including the pre-averaging approximation [97], the consistent averaging approximation [101,102], and the Gaussian approximation [103]. The latter incorporates the effects of fluctuating HI (rather than constant HI at a pre-chosen chain conformation) and is perhaps the most realistic of the models [104]. Coarse-grained bead-spring simulations of DNA with HI and EV generally require specification of several model parameters, including the discretization level (number of springs N_s), Kuhn step size b_K, HI parameter (hydrodynamic bead radius) a, and EV parameter v [114]. Polymer contour length and Kuhn step size for DNA are fairly well defined, but one concern is that some degree of parameter matching was required in order to choose a and v for a given level of discretization. An alternative Brownian dynamics simulation with HI and EV was subsequently reported by Hsieh and Larson, which involved a systematic method for choosing the HI parameter a (or h*) by matching the longest relaxation time or diffusion coefficient and the hydrodynamic drag on the polymer at full extension [115]. This method again yielded good agreement for DNA dynamics compared to single polymer experiments. Moreover, this work made use of an efficient second order semi-implicit predictor-corrector scheme for the time stepping algorithm [116], which enables relatively low computational expense for systems with large numbers of beads. Nevertheless, all of these methods require selection of model parameters that generally depend on the level of discretization. The method of successive fine graining allows for parameter-free modeling of polymers with HI and EV, as discussed below. In the context of DNA, a key question arises: are hydrodynamic interactions important or necessary for modeling the dynamics of DNA? The answer is absolutely yes. Near equilibrium, we have seen that the power-law scaling of DNA diffusion constants with molecular weight is consistent with a non-draining coil [52,53]. PERM simulations show that the theoretical Zimm limit for full non-draining coils is reached only in the limit of very high molecular weight DNA (L ≈ 10^2-10^3 µm); however, DNA polymer coils are nearly fully non-draining for DNA molecules of size λ-DNA [35]. For these reasons, HI is clearly dominant at equilibrium for DNA molecules of size at least ≈ 40-50 kbp, which corresponds to polymers of size λ-DNA. Moving away from equilibrium and considering the role of HI on flow dynamics, it should be noted that DNA has a fairly large persistence length, which suggests that the increase in hydrodynamic drag (so-called conformation-dependent drag) might be fairly minor for DNA compared to flexible polymers. Indeed, slender body theory predicts that the increase in drag between the coiled state and the extended state is only a factor of ≈ 1.7 for λ-DNA [59,117]. For these reasons, the role of HI was difficult to answer and it was a major question in the field for many years. Nevertheless, this question has been suitably addressed by considering the role of conformation-dependent drag in strong flows. It was long ago predicted that this effect can give rise to polymer conformation hysteresis in extensional flow [118-120], though the early predictions led to a vigorous debate in the field for many years [121-125].
Several challenges existed which complicated a clear answer to the question of polymer conformation hysteresis, including difficulties in the analytical solution of polymer kinetic theories incorporating finite extensibility and hydrodynamic interactions, lack of suitable computational power to simulate the dynamics of long-chain polymers with HI, and lack of the ability to directly observe the dynamics of single polymer chains in flow. Polymer conformation hysteresis has been reviewed elsewhere [11], so here I only focus on the key aspects and recent considerations of the problem. In 2003, polymer conformation hysteresis was directly observed using single molecule techniques [126]. These experiments were complemented by Brownian dynamics simulations with intramolecular HI and EV [126,127], and good agreement was obtained between both methods. These single molecule experiments required several advances in order to obtain the results confirming hysteresis. First, the experiment required handling of extremely large DNA polymers in excess of 1 mm in contour length, which was required to achieve a large value of the drag ratio ζ_stretch/ζ_coil to induce conformation hysteresis. For λ-DNA (L = 21 µm for stained DNA), the ratio ζ_stretch/ζ_coil ≈ 1.6, which is fairly small, but for bacterial genomic DNA (L = 1.3 mm), the ratio increases to ζ_stretch/ζ_coil ≈ 5 [126]. Second, the observation of single polymers for large strains in extensional flow required the use of feedback control over the stagnation point, which effectively traps objects for long times in flow [126]. This method has been further optimized and automated in the development of the Stokes trap, which is a new method to manipulate multiple arbitrary particles or molecules using only fluid flow [128]. Moreover, these experiments showcased the power of combining single polymer experiments with Brownian dynamics simulations to probe physical phenomena in polymer solutions. The initial experiments on DNA were followed by BD simulations of polystyrene in dilute solution, which again predicted the emergence of hysteresis at a critical molecular weight [129]. It was further shown that conformation hysteresis can be induced by non-linear extensional flows and that hysteresis can be formally understood by considering Kramers theory and the process of activated transitions over an energy barrier separating coiled and extended states [130]. Moreover, the role of solvent quality on the coil-stretch transition was also studied using BD simulations [131]. In 2007, the first evidence of polymer conformation hysteresis using bulk rheology was reported. In recent years, some attention has been drawn to the underlying nature of the coil-stretch transition [135,136]. By making an analogy to a thermodynamic process (at true thermal equilibrium), a phase transition may be classified as a first-order or second-order transition [137]. A first-order transition (such as liquid vaporization) is generally described by a discontinuity in the order parameter across a transition, whereas a second-order transition is described by a continuous order parameter across the transition. In many respects, the classification is of academic interest as it pertains to the coil-stretch transition, but a practical aspect is the emergence of a potentially hysteretic stress-strain curve in polymer processing, which would have major implications for polymer rheology. In any event, hysteresis is a classic signature of a first-order transition.
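The drag ratio quoted above can be estimated from simple hydrodynamic arguments; the sketch below compares the Stokes drag on the coiled state (using an assumed hydrodynamic radius for stained λ-DNA) with a slender-body estimate of the drag on the fully extended state, recovering a ratio of order 1.5. Both the hydrodynamic radius and the effective filament diameter are rough, assumed values for illustration only.

import math

# Order-of-magnitude estimate of zeta_stretch / zeta_coil for stained lambda-DNA.
eta = 1.0e-3        # solvent viscosity (Pa*s), water-like
L = 21e-6           # stained lambda-DNA contour length (m)
R_H = 0.5e-6        # assumed hydrodynamic radius of the coiled state (m)
d = 2e-9            # assumed hydrodynamic diameter of the DNA filament (m)

zeta_coil = 6.0 * math.pi * eta * R_H                      # Stokes drag on the coil
zeta_stretch = 2.0 * math.pi * eta * L / math.log(L / d)   # slender-body drag on an extended filament

print(f"zeta_stretch / zeta_coil ~ {zeta_stretch / zeta_coil:.1f}")   # order 1.5, compare ~1.6 in the text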
A recent paper has claimed that the transition is second-order due to the evidence of 'critical slowing down' in polymer dynamics near the coil-stretch transition [136]. However, these results only consider DNA polymers up to the size of T4 genomic DNA (169 kbp), which is known to be much smaller than the critical molecular weight MW_crit required to observe polymer hysteresis. Moreover, an increase in chain fluctuations is expected as the polymer molecular weight approaches MW_crit, as the effective non-equilibrium free energy in flow will broaden and exhibit a deep and wide energy well in the vicinity of the coil-stretch transition for polymers near MW_crit [118,126]. In fact, these general observations were made in the original work on hysteresis [126], which showed that DNA polymers of size L ≈ 575 µm were sluggish to recoil to the collapsed state when initially prepared in the stretched state near Wi ≈ 0.5. Taken together, essentially all evidence points toward a first-order-like transition for polymers in extensional flow.

Dynamics of DNA polymers in post arrays

Significant attention has been given to the stretching dynamics of single DNA molecules in microfabricated or nanofabricated arrays of posts. In the context of electrophoretically driven motion, the dynamics of single DNA in post arrays was studied many years ago by Austin and coworkers [138]. Moreover, the dynamics of a single DNA molecule interacting with an isolated (insulating) post has been studied in detail by Randall and Doyle [139-141]. However, recent work has shown that for most practical molecular weight DNA molecules, physical properties lie in the transition region between purely theta solvent and athermal solvents [43]; moreover, in the asymptotic limit of good solvents where EV interactions are governed only by hard-core repulsive potentials, the apparent EV exponent was found to be ν = 0.546 due to the large monomer aspect ratio b/w for DNA [35]. Moreover, it was found that the diffusion coefficient approaches the full non-draining Zimm value only in the limit of extremely large molecular weight DNA molecules (L ≈ 1 mm) [35]. These predictions are consistent with the results from the hysteresis experiments on long DNA molecules (L ≈ 1.3 mm) in that extremely long DNA polymers are required to achieve full non-draining behavior. Nevertheless, despite these differences, DNA can be described by concepts of universality in polymer physics within the context of dynamical scaling. However, DNA differs fundamentally from truly flexible polymer chains in several respects. First, flexible polymers such as ssDNA and PEG exhibit a low-force non-linear elasticity in the limit of low forces due to monomer-monomer excluded volume interactions. As discussed in Section II C above, the non-linear low force elastic regime is essentially absent for most reasonable size DNA molecules used in single polymer experiments. Moreover, the extensibility (ratio of contour length to equilibrium size) of DNA is substantially less than that of flexible polymer chains. For example, a double stranded DNA molecule has a far smaller ratio of contour length to equilibrium coil size than a flexible synthetic polymer of comparable molecular weight. Finally, it should be noted that the elastic force relations of other biological and synthetic flexible polymers have been studied using single molecule force microscopy, which has been reviewed elsewhere [147].
Dynamics in time-dependent, oscillatory flows

The vast majority of single polymer dynamics experiments has focused on the steady-state or transient response of polymers when exposed to a step strain rate in shear or extensional flow [11]. However, there is a need to understand single polymer dynamics in time-dependent, oscillatory flows, such as the large amplitude oscillatory extensional (LAOE) flows studied in Refs. [152,153]. Physically, the effective Wi_eff can be motivated by considering the amount of fluid strain applied in a half-cycle, which is ε_(T/2) = Wi_0/(πDe) [153]. In addition to LAOE, the dynamics of single polymers in large amplitude oscillatory shear flow (LAOS) was studied by Brownian dynamics simulation in 2009 [154]. Interestingly, it was found that single chain dynamics can essentially be described as experiencing a quasi-steady shear flow over a wide range of conditions. These simulations provide an intriguing link to prior work on the characteristic tumbling frequency of polymers in steady shear flow, which was determined using a combination of single molecule experiments and BD simulations [83,85]. The dynamics of single polymers in LAOS has not yet been studied experimentally, though this would provide key insights into connecting microscale dynamics and bulk rheological phenomena in LAOS such as strain softening and strain hardening.

Non-equilibrium work relations for polymer dynamics

In 2013, it was shown that equilibrium properties such as polymer chain elasticity can be determined from far-from-equilibrium information such as polymer stretching dynamics in flow [155]. This demonstration represents a fundamentally new direction in the field for determining near-equilibrium properties and thermodynamic quantities such as free energy, and it was made possible by applying recent methods in non-equilibrium statistical mechanics to the field of single polymer dynamics. In particular, the Jarzynski equality allows for determination of a free energy change ΔF between two states of a system by sampling the work distribution for transitioning the system from state 1 to state 2 [156]:

e^(−βΔF) = ⟨e^(−βw)⟩ = ∫ p(w) e^(−βw) dw

where w is the work done on a system connecting states 1 and 2, β = 1/k_B T is the inverse Boltzmann temperature, and p(w) is the probability distribution associated with the work. Using this framework, repeated measurements of the work performed on a molecular system upon transitioning between states 1 and 2 enable determination of a free energy change. The Jarzynski equality is intrinsically amenable to single molecule experiments, assuming that the work w can be determined for a process. In early experiments, optical tweezers were used to transition a single RNA strand between two states described by a specified molecular extension x, thereby enabling determination of the free energy of an RNA hairpin, wherein work is simply defined as force applied over a distance [157]. However, calculation of the work done by a flowing fluid in stretching a polymer between two states of molecular extension is fundamentally different due to dissipation, and therefore required a different (and careful) definition of work for polymer dynamics [155,158]. Using this approach, Schroeder and coworkers published a series of papers showing that polymer chain elasticity (force-extension relations) and relaxation times can be determined for single polymers by sampling non-equilibrium properties such as stretching dynamics in flow (Figure 7) [155,159,160].
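As a toy illustration of how the Jarzynski relation is used in practice, the snippet below draws synthetic work values from a Gaussian distribution and forms the exponential-average estimate of ΔF; for Gaussian work statistics the exact answer is ⟨w⟩ − βσ^2/2, which the estimator approaches with enough samples. The numbers are arbitrary and are not from any experiment.

import numpy as np

rng = np.random.default_rng(7)
beta = 1.0                      # 1/(k_B T), working in units of k_B T

# Synthetic work measurements (in k_B T) for a repeated state-1 -> state-2 protocol.
mean_w, sigma_w, n_samples = 5.0, 1.5, 20000
w = rng.normal(mean_w, sigma_w, size=n_samples)

# Jarzynski estimator: exp(-beta*dF) = < exp(-beta*w) >.
dF_jarzynski = -np.log(np.mean(np.exp(-beta * w))) / beta
dF_gaussian = mean_w - beta * sigma_w**2 / 2.0      # exact result for Gaussian work statistics

print(f"Jarzynski estimate:  dF = {dF_jarzynski:.2f} kT")
print(f"Gaussian prediction: dF = {dF_gaussian:.2f} kT")
print(f"Mean work (>= dF by the second law): <w> = {w.mean():.2f} kT")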
First, the stored elastic energy (or alternatively, the elastic force as a function of extension) was determined by systematically stepping single polymers between defined states of molecular extension in extensional flow [160]. Using Brownian dynamics simulations of free-draining bead-spring polymer chains, the entropic elasticity (force-extension relation) was determined for wormlike chains described by the Marko-Siggia relation and for polymer chains described by the inverse Langevin force relation. The general framework was also applied to prior single polymer experimental data on large concatemers of 7-λ DNA, which further validates the results. In related work, the method was extended to multi-bead-spring chains with hydrodynamic interactions (HI) and to polymer dynamics in shear flow, which contains vorticity [155]. In all cases, this approach also allows for determination of the housekeeping power, which is the rate of work required to maintain the polymer at an average constant extension in a given flow field. Finally, it was shown that the external control parameter could be taken to be flow strength (Wi), rather than polymer extension, which is generally more applicable to experiments [159]. Here, the polymer is transitioned between non-equilibrium steady states (at constant Wi), which required a new definition of the work done through the process. Nevertheless, and quite remarkably, this method allowed for determination of an effective non-equilibrium free energy in flow, a Helmholtz free energy, and a non-equilibrium entropy for single polymers in flow [159]. Finally, the general framework of non-equilibrium work relations was further applied to polymer dynamics using an analytical path integral approach by Cherayil and coworkers, which further extends the applicability of the method [161,162]. Taken together, these approaches provide access to near-equilibrium quantities and thermodynamic potentials (e.g., the Helmholtz free energy) under highly non-equilibrium conditions. These concepts may be useful in designing optimized processing methods for polymers with consideration of flow energies, for example in designing a process to minimize dissipation or heat (as lost work), or to maximize stored elastic energy, while minimizing energy input.

Dynamics of polymer globules

The dynamics of polymer globules has been studied intensively over the last decade. A key motivation for this work is to understand the dynamics of von Willebrand factor (vWF), which is a large multimeric glycoprotein found in blood plasma that plays a key role in blood clotting. It is known that shear flow induces unfolding and subsequent adhesion of vWF [163]. In 2006, the dynamics of single polymer globules in shear flow was studied using a coarse-grained bead-spring model with the stiff-spring approximation, which essentially amounts to a bead-rod model [164]. Poor-solvent conditions were simulated by introducing an attractive potential between beads using a Lennard-Jones potential. It was found that below a critical shear rate γ̇*, the polymer chain remains collapsed, but for shear rates γ̇ > γ̇*, single polymer chains undergo repeated collapse/unfolding cycles. Importantly, hydrodynamic interactions were necessary to capture the proper dynamics during the collapse transitions [164,165]. Simulations of polymer globules were later extended to extensional flow [166,167] and linear mixed flows [168]. In related work, the role of internal friction in collapsed globules was examined [169], followed by further studies on the effects of hydrodynamically induced lift forces on tethered polymers in shear flow [170,171].
Simulations were further modified to incorporate the effects of local chemistry by modeling 'stickers', or regions of the polymer chain that result in adhesion. In 2011, Sing and Alexander-Katz incorporated Monte Carlo-based self-association stickers into a BD simulation, which was used to study coil-globule transitions and the transition between Rouse chain dynamics and self-association dynamics [172]. This approach was further extended to study the dynamics of self-associating polymer chains in shear flow [173]. In 2014, Larson and coworkers developed a systematic method for coarse-graining bead-rod chains with attractive monomer interactions and variable bending stiffness, which serves as a suitable model for semi-flexible biopolymers such as cellulose [174]. Interestingly, this work revealed an intriguing range of collapsed polymer structures including tori, helices, and folded bundles for different ratios of the bead diameter to the persistence length. Recently, the dynamic formation of globules in shear flow was studied using bead-spring BD simulations by Underhill and coworkers [175].

Successive fine graining

In the early 2000s, Prakash and coworkers embarked on a systematic study of the influence of excluded volume (EV) interactions on the dynamics of polymers [176][177][178][179]. EV interactions between beads within a dilute solution of Hookean dumbbells [176] and Rouse chains were modeled using a narrow Gaussian potential [177]. Using this approach, the linear viscoelastic properties [177] and the dynamics under shear flow for Rouse chains were modeled in the presence of EV [178]. In continuing work, the coupled and additive influence of HI and EV on the dynamics of Hookean dumbbells was considered using a regularized Oseen-Burgers tensor for the HI [179]. These publications were followed by the work of Graham and coworkers for simulating DNA dynamics by implementing the Chebyshev approximation for Fixman's method [100], and soon after by the work of Larson and coworkers [115] and Shaqfeh and coworkers [127] for related coarse-grained multi-bead-spring models with HI and EV. However, despite the impressive advances reported in these publications, parameter selection and the dependence of model parameters on the level of coarse graining continued to be an issue and arguably amounted to finding the best fit of parameters to match experimental data. To address this issue, Prakash and coworkers developed a new method called successive fine graining that provides a systematic way of choosing model parameters [180,181]. Importantly, in the limit of high molecular weight polymer chains, the predictions from Brownian dynamics simulations become independent of model parameters [181]. In essence, the method relies on representing the polymer chain as a long but finite macromolecule using a coarse-grained bead-spring model. In a series of simulations, the polymer chain is successively fine-grained by increasing the number of beads. Extrapolating the results obtained using BD simulations to the large-N limit enables determination of equilibrium or non-equilibrium properties that are essentially independent of the model parameters. The method was first developed for Hookean chains with HI [108], where the general methodology of extrapolation was developed, followed by extending the approach to finitely extensible polymers in extensional flow [181].
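The extrapolation step at the heart of successive fine graining can be sketched as follows. This is a schematic example only: the property values are hypothetical placeholders, and the use of 1/sqrt(N) as the extrapolation variable is an assumption made here for illustration rather than the exact prescription of the cited works.

```python
import numpy as np

# Hypothetical values of a target property (e.g., a steady extensional viscosity)
# computed from BD simulations at successively finer chain discretizations.
N_beads = np.array([10, 20, 40, 80, 160])
prop = np.array([1.42, 1.55, 1.64, 1.70, 1.74])  # placeholder data

# Successive fine graining: regress the property against 1/sqrt(N) and take the
# intercept as the N -> infinity, parameter-independent prediction.
x = 1.0 / np.sqrt(N_beads)
slope, intercept = np.polyfit(x, prop, 1)
print(f"Extrapolated (N -> infinity) value: {intercept:.3f}")
```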
The method was validated by comparison to dilute solution experiments in terms of the elongational viscosity of a polystyrene solution measured using filament stretching rheometry [182]. Remarkably, the method of SFG was further shown to generate quantitatively accurate predictions of DNA stretching dynamics in semidilute unentangled solutions in extensional flow [88]. In related work, computationally efficient Brownian dynamics algorithms for chains with fluctuating HI were developed, with a particular emphasis on comparing Krylov subspace and Chebyshev-based techniques [183]. This work was followed by a simulation-based study of the extensional rheology of high molecular weight polystyrene in dilute solutions [184]. Here, Saadat and Khomami developed a high-fidelity Brownian dynamics approach to simulate high molecular weight polymers using the Krylov framework and a semi-implicit predictor-corrector scheme. Moreover, Moghani and Khomami further developed computationally efficient BD simulations for high molecular weight chains with HI and EV using bead-rod (instead of bead-spring) models [185]. In recent work, these concepts and models have been extended to polyelectrolyte chains [186]. In 2017, a new method for simulating the dynamics of single polymer chains with HI in dilute solutions was developed [187]. The conformational averaging (CA) method essentially treats the intramolecular HI as a mean field, which provides an extremely efficient approximate method for determining hydrodynamic interactions. An iterative scheme is used to establish self-consistency between a hydrodynamic matrix that is averaged over the simulation and the hydrodynamic matrix used to run the simulation. Results from this method were compared to standard BD simulations and polymer theory, showing that the method quantitatively captures both equilibrium and steady-state dynamics in extensional flow after only a few iterations. The use of an averaged hydrodynamic matrix allows the computationally expensive Brownian noise calculation to be performed infrequently, so that it is no longer the bottleneck of the simulation. The method has so far only been applied to dilute solution systems (linear chains and ring polymers), though it could be extended to provide an extremely efficient method for studying semi-dilute polymer solutions in flow.

V. SEMI-DILUTE UNENTANGLED AND ENTANGLED DYNAMICS

Recent work in the field of single polymer dynamics has pushed into semi-dilute unentangled and entangled solutions. In these experiments, the general approach is to fluorescently label a small amount of tracer or probe molecules in a background of unlabeled polymers. In this way, the influence of intermolecular interactions on the dynamics of a single chain can be directly observed in non-dilute polymer solutions. In tandem with single polymer experiments, significant advances were made in the modeling and simulation of semi-dilute unentangled polymer solutions, which is extremely challenging due to many-body effects and the roles of HI and EV. These recent single molecule experiments and simulations have begun to probe the precise roles of HI in the semi-dilute regime in order to quantitatively understand DNA dynamics in flow [54,88]. As discussed below, the effect of HI is important and necessary for understanding the dynamics of semi-dilute polymer solutions.

Semi-dilute unentangled solution dynamics

The dynamics of polymer chains in semi-dilute unentangled solutions is a particularly challenging problem in the field.
Semi-dilute unentangled polymer solutions are characterized by a concentration c that is larger than the overlap concentration c* but less than the entanglement concentration c_e, such that c* < c < c_e. Semi-dilute solutions often exhibit large fluctuations in concentration, which precludes a straightforward treatment using a mean-field approach. Moreover, the role of intra- and intermolecular HI may be significant, further complicating modeling and simulations due to many-body interactions. The near-equilibrium properties of semi-dilute polymer solutions are governed by an interplay between polymer concentration and solvent quality (Figure 8) [41,42]. Recently, it was appreciated that polymer behavior can be described using a dynamic double crossover in scaling properties with concentration and solvent quality in the semi-dilute regime [42]. In this framework, two parameters are commonly used to describe the equilibrium properties of semi-dilute solutions. First, the critical overlap concentration c* ≈ M/(N_A R_g^3) is used as a characteristic polymer concentration in semi-dilute solutions, where M is the polymer molecular weight, N_A is Avogadro's number, and R_g is the radius of gyration. Using the overlap concentration, a scaled polymer concentration of c/c* = 1 corresponds to a bulk solution concentration of polymer that is roughly equivalent to the concentration of monomer within a polymer coil of size R_g. In addition, solvent quality can be characterized by the chain interaction parameter z, which is a function of the polymer molecular weight M and the temperature T relative to the theta temperature T_θ. In 2006, Smith and coworkers developed a series of well-defined, monodisperse DNA constructs for single polymer dynamics [28]. In particular, methods to prepare linear or circular double stranded DNA ranging in molecular weight between 2.7 kbp and 289 kbp were reported, and it was shown that the DNA constructs could be propagated in bacteria and prepared using standard methods in bacterial cell culture and DNA purification. Importantly, the authors made these constructs publicly available, which has enabled many other researchers in the field to study DNA dynamics using these materials. Using these DNA constructs, Prakash and coworkers determined the zero-shear viscosity η_p,0 of semi-dilute linear DNA solutions in 2014 [43], with results showing that DNA polymer solutions generally obey universal scaling relations [188]. Moreover, the theta temperature for DNA in aqueous solutions was determined to be T_θ = 14.7 °C by static light scattering, and the solvent quality and radius of gyration were determined using dynamic light scattering and rigorous quantitative matching to BD simulations [43]. Using these results, the overlap concentration for λ-DNA was found to be c* = 44 µg/mL, and the chain interaction parameter z ≈ 1.0 for T = 22 °C. These advances greatly improved our understanding of the fundamental physical properties of DNA. In 2017, the dynamics of single DNA in semi-dilute unentangled solutions in extensional flow was studied using a combination of single molecule experiments [54] and BD simulations [88] (Figures 8 and 9). Single molecule experiments were used to investigate the dynamics of single DNA in these solutions [86]; however, the results from DNA stretching in extensional flow did not show the same level of universality [54].
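As a quick order-of-magnitude check on the overlap concentration quoted above (a sketch with assumed inputs, not a calculation from the cited work): taking M ≈ 3.2 × 10^7 g/mol for λ-DNA (48.5 kbp at roughly 650 g/mol per base pair), an assumed R_g ≈ 0.7 µm, and the common sphere-volume convention c* = 3M/(4π N_A R_g^3) gives a value of the same order as the experimentally determined 44 µg/mL; the prefactor-free estimate c* ≈ M/(N_A R_g^3) used in the text gives a number a few times larger.

```python
import math

N_A = 6.022e23          # Avogadro's number, 1/mol
M = 48502 * 650.0       # assumed molar mass of lambda-DNA, g/mol (~3.2e7)
Rg_cm = 0.7e-4          # assumed radius of gyration: 0.7 micron, in cm

# Sphere-volume convention for the overlap concentration, in g/cm^3.
c_star = 3.0 * M / (4.0 * math.pi * N_A * Rg_cm**3)
print(f"c* ~ {c_star * 1e6:.0f} micrograms/mL")  # ~36 ug/mL vs. the 44 ug/mL quoted above
```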
In terms of new efforts in modeling semi-dilute polymer solutions, several mesoscopic techniques have been developed in recent years to study non-equilibrium dynamics [190][191][192][193]. Prakash and coworkers developed an optimized BD algorithm for semi-dilute polymer solutions in the presence of HI and EV [192]. This algorithm was used in conjunction with the method of successive fine graining (SFG) to provide parameter-free predictions of the dynamics of DNA in semi-dilute solutions in extensional flow [88], thereby providing a direct complement to single molecule experiments [54]. Remarkably, BD simulation results obtained using the SFG method were in quantitative agreement with single polymer experiments. Taken together, these results show that HI is important and necessary to quantitatively capture the dynamic behavior of DNA in semi-dilute solutions. Interestingly, an analytical model has been developed that incorporates the effects of polymer self-concentration and conformation-dependent drag in the semi-dilute regime [134], and it was recently shown that this model captures hysteresis in the coil-stretch transition as observed in semi-dilute solutions using filament stretching rheometry [133].

Semi-dilute entangled solution dynamics

Semi-dilute entangled solutions are generally defined to lie in a concentration regime c_e < c < c**, where c** is the polymer concentration at which the concentration blob size ξ_c equals the thermal blob size ξ. Recent single molecule experiments on the longest polymer relaxation times [54] and bulk rheological experiments on intrinsic viscosity [43,188] show that the critical entanglement concentration for λ-DNA occurs at c_e ≈ 4 c*, which is consistent with the range of onset of the entangled regime for different polymer chemistries [41]. In 1994, Chu and coworkers used single polymer dynamics to directly observe the tube-like motion of single fluorescently labeled DNA molecules in a background of unlabeled entangled DNA [18]. In these experiments, one end of a concatemer of λ-DNA ranging in contour length from 16 to 100 µm was linked to a micron-sized bead, and an optical trap was used to pull the bead through the solution of entangled λ-DNA at a concentration of 0.6 mg/mL (c ≈ 4 c_e). In these experiments, the polymer chain was observed to relax along its stretched contour, which provides evidence for polymer reptation in entangled solutions. Some degree of concern was expressed that the motion of the large micron-sized bead through the polymer solution might disrupt the local entanglement network, thereby resulting in modified polymer relaxation behavior. However, the relaxation of the local network (the equilibration time of a thermally diffusing chain in a tube) was much faster than the reptation time of the long polymer chain. In 1995, the diffusion of single DNA molecules ranging in size from 4.3 to 23 kbp was observed in background solutions of concentrated λ-DNA at 0.6 mg/mL [194]. Results showed how the center-of-mass diffusion coefficient scaled with molecular weight in the entangled regime. The bulk viscosity of DNA solutions was considered many years ago by Zimm using purified genomic DNA from bacteriophage T2 and T7 [97]. In the late 1990s, the linear viscoelastic properties of concentrated DNA solutions were studied by Wirtz and coworkers using calf thymus DNA (polydisperse, with an average molecular size of 13 kbp) [197].
These results showed that for DNA concentrations greater than the entanglement concentration c_e ≈ 2 mg/mL (for calf thymus DNA), a plateau was observed in the storage modulus, with G_p ≈ 6.1 dyn/cm² at c = c_e. These experiments were followed by bulk rheological experiments on the non-linear viscoelasticity of entangled DNA solutions in shear flow [198], with results showing a plateau in shear stress over a decade in shear rate for concentrated solutions of T4 DNA. In 2007, Robertson and Smith directly measured the intermolecular forces experienced by a single polymer chain in entangled DNA solutions using optical tweezers (Figure 10) [196]. In this experiment, a single DNA chain (25.3 kbp) was linked to two micron-sized beads, and a dual optical trap was used to confine both beads and to induce transverse displacement of the DNA-bead tether through the entangled polymer solution (a 1 mg/mL solution of 115 kbp linear DNA, such that c ≈ 40 c* for 115 kbp DNA) [196]. These results enabled an estimate of the tube radius of 0.8 µm, which was close to the value predicted from Doi-Edwards theory [3] and from simulations by Larson and coworkers [199].

Elastic instabilities in semi-dilute DNA solutions

It has long been known that elastic polymer solutions can give rise to instabilities and secondary flows [4,201,202]. The onset of secondary flows in DNA solutions has been studied using a wide array of microfluidic geometries [24]. In particular, the small length scales in microfluidic devices and the associated viscous-dominated flow conditions allow flow phenomena to be studied in the limit of low Reynolds number (Re ≪ 1) and high Wi, thereby allowing access to the highly elastic regime defined by the elasticity number El ≡ Wi/Re [203]. In 2008, elastic secondary flows of semi-dilute DNA solutions were studied in microfluidic devices containing abrupt 90° microbends [204]. Although not strictly a single polymer visualization experiment, particle tracking velocimetry (PTV) can be applied to DNA solutions in flow, thereby revealing the onset of secondary flows and instabilities due to elasticity. These experiments revealed that a vortex flow developed in the upstream corner of the right-angle bend and tended to grow in size with increasing Wi. In related work, the flow of semi-dilute unentangled λ-DNA solutions (0.5 < c/c* < 3) and lightly entangled λ-DNA solutions (c = 10 c*) was studied in a gradual microfluidic contraction flow with combined on-chip pressure measurements [205]. Here, it was observed that large, stable vortices form about the centerline and upstream of the channel entrance. Direct visualization of single DNA conformation and stretching, combined with flow visualization measurements, was performed on semi-dilute unentangled and entangled solutions of DNA in a 4:1 planar micro-contraction flow [206]. These experiments showed the ability to image single DNA polymers in non-canonical flow fields other than simple shear or extension. Recently, this approach has been used to study the necking and pinch-off dynamics of liquid droplets containing semi-dilute polymer solutions of polyacrylamide near the overlap concentration [207]. Single fluorescently labeled DNA molecules were suspended in the semi-dilute polymer droplets, thereby enabling visualization of a DNA 'tracer' polymer in this flow geometry. It was found that individual polymer molecules suddenly stretch from a coiled conformation at the onset of necking.
The extensional flow inside the neck is strong enough to deform and stretch polymer chains; the distribution of polymer conformations was found to be quite broad, but the distribution remains stationary in time during the necking process. In addition, this approach was extended to visualize the dynamics of single DNA molecules in a microfluidic-based porous media flow [208]. A common feature in these experiments appears to be a broad and heterogeneous distribution of polymer chain conformations and stretching dynamics in flow, features that can only be revealed using single molecule imaging. Additional experiments by Wang and coworkers in 2010 explored the effect of a gradual ramp up in shear rate, rather than an abrupt step change in shear rate [212]. Subsequent experiments examined the role of non-adsorbing boundaries [214]. Finally, recent methods in optical coherence tomography and velocimetry measurements have further confirmed the existence of shear banded flow profiles in highly entangled DNA solutions [215]. In terms of computational modeling of shear banding, there has been significant effort by many different research groups directed at this problem. Here, we focus only on a few molecular-based computational methods that have been used to investigate shear banding. In 2015, Mohagheghi and Khomami used dissipative particle dynamics (DPD) simulations to uncover the molecular processes leading to shear banding in highly entangled polymer melts [216]. The mechanism is complicated, but in essence the stress overshoot in shear flow drives locally inhomogeneous chain deformation and spatially inhomogeneous chain disentanglement. In turn, the localized jump in the entanglement density along the velocity gradient direction results in a considerable jump in normal stress and viscosity, which ultimately leads to shear banding. This work was followed by two companion articles by the same authors that investigated flow-microstructure coupling in entangled polymer melts, which ultimately gives rise to shear banding [217,218]. Finally, these authors further elucidated a set of molecular-based criteria for shear banding in 2016 [219].

VI. ARCHITECTURALLY COMPLEX DNA: COMBS, BOTTLEBRUSHES, RINGS

The dynamics of architecturally complex polymers is an extremely important problem in the field of rheology. A main goal is to identify how molecular branching and non-linear molecular topologies affect non-equilibrium dynamics and relaxation processes in entangled solutions and melts [220]. In entangled comb polymer solutions or comb polymer melts, branch points are known to substantially slow down the overall relaxation processes within the material. Branching results in a spectrum of relaxation times that can be attributed to molecular topology, including the branch segments, branch points, and the motion of the long chain backbone [221]. In concentrated and entangled solutions, these complex dynamics can be exceedingly complicated to discern using bulk techniques. Single molecule experiments enable the direct imaging of these processes, thereby allowing for a molecular-scale understanding of bulk rheological behavior. Early work by Archer and coworkers focused on the synthesis and direct single molecule imaging of star-shaped DNA generated by hybridizing short oligonucleotides to form a small star-branched junction, onto which long DNA strands of λ-DNA were hybridized and ligated [222][223][224]. This method was also extended to create pom-pom polymers by connecting two stars with a λ-DNA crossbar.
Conformational dynamics were mainly studied under the influence of electric fields, and the electrophoretic mobility of star DNA polymers was measured in solutions of polyacrylamide [222,223], polyethylene oxide [224], and agarose and polyacrylamide gels [222]. This early work represents one of only a small number of studies on single branched polymers. In the following section, I summarize recent efforts in the field of single polymer dynamics to understand the role of molecular topology on non-equilibrium flow phenomena. Here, the focus is on branching and ring polymers and not on intramolecular topological interactions such as knots, though there has been recent interest in single molecule studies of knot dynamics [225][226][227][228][229]. For a lengthy discussion of knot dynamics as probed by single molecule techniques, I refer the reader to a recent review published elsewhere [12].

Comb-shaped DNA

Schroeder and coworkers recently developed new strategies to synthesize DNA-based polymers with comb-shaped architectures that are suitable for single molecule imaging (Figure 11) [27,230]. Here, a hybrid enzymatic-synthetic approach was used to synthesize long branched DNA for single molecule studies of comb polymers [27]. Using PCR for the synthesis of the branches and backbones in separate reactions, precise control was maintained over branch length (1-10 kbp) and backbone length (10-30 kbp). However, a graft-onto reaction scheme was used to covalently link branches to DNA backbones, thereby resulting in average control over the degree of branching by tuning reaction stoichiometry. Overall, this method was made possible by the inclusion of chemically modified PCR primers (containing terminal azide moieties) and non-natural nucleotides (containing dibenzocyclooctyne, or DBCO, for copper-free click chemistry) during PCR. Moreover, side branches of DNA were synthesized to contain internal fluorescent dyes, thereby enabling simultaneous two-color imaging of branches (red) and backbones (green) during flow dynamics. Using this approach, the conformational relaxation of surface-tethered comb DNA was studied using single molecule imaging [27]. Relaxation was observed to proceed such that branches explored various conformational breathing modes while the backbone relaxed. At long times, the conformational relaxation of the backbone dominated the process, and these timescales were quantified by tracking single polymer extension during the relaxation process. The longest relaxation time was found to increase with an increasing number of branches. Interestingly, the role of branch position was also studied, and a strong dependence on the location of the branch relative to the surface tether was found. Branches far from the tether slowed relaxation, whereas branches near the tether resulted in faster overall relaxation processes compared to a linear polymer. It was postulated that branches near the tether point may accelerate the relaxation process by inducing cooperative hydrodynamic flows. Taken together, these results clearly show that relaxation processes depend on molecular topology.

Bottlebrush polymers

The general methods developed by Schroeder and coworkers for synthesizing comb-shaped polymers based on double stranded DNA [230] were recently extended to synthesize bottlebrush polymers based on ssDNA [231].
The bottlebrush polymers consist of a ssDNA main chain backbone with poly(ethylene glycol) (PEG) side chains. First, ssDNA was synthesized using rolling circle replication (RCR), following a similar approach to that used for single molecule studies of linear unbranched ssDNA [145]. The RCR reaction is performed with a fraction of DBCO-modified dUTPs, which replace thymine in the main chain, thereby serving as grafting points for side branches via the copper-free click chemistry reaction described above. The PEG side chains (10 kDa) are relatively monodisperse (PDI = 1.04-1.06). Grafting density was controlled in an average sense by tuning the ratio of DBCO-dUTP to natural dTTP nucleotides in the reaction, with a 1:4 ratio generating a sparsely grafted comb polymer (one side chain per 35 bases) and a 4:1 ratio generating a bottlebrush polymer (one side chain per 8.75 bases). Finally, the ssDNA is terminally labeled with thiol and biotin moieties, enabling polymer immobilization onto a glass surface and a streptavidin-coated magnetic bead. Using this approach, a magnetic tweezing experimental setup was used to directly measure the elasticity (force-extension relation) of single bottlebrush polymers. It was found that chain stiffening due to side branches was only significant on long length scales, with the main chain retaining flexibility on short length scales. From these experiments, an estimate of the internal tension generated by side-chain repulsion was determined. Taken together, these experiments represent the first measurements of the elasticity of bottlebrush polymers.

Ring polymers

Ring polymers represent a fascinating topic in polymer physics. Due to the constraint of ring closure, it is thought that ring polymers exhibit qualitatively different dynamics in dilute solutions, concentrated solutions, and melts compared to linear chains [232][233][234]. In recent years, there has been a renewed interest in the community in experimentally studying ring polymers using bulk rheology and molecular modeling techniques. However, it can be extremely difficult to prepare pure solutions and melts of ring polymers due to challenges in synthesis and purification to ensure high degrees of purity [233]. Single molecule methods based on DNA offer an alternative approach to prepare highly pure solutions of ring polymers and to further probe their dynamics using single molecule techniques. For example, plasmids and many types of genomic DNA are naturally propagated in circular form in bacteria. Moreover, biochemical treatment based on endonucleases can specifically digest linear chains while leaving circular DNA chains intact. Using this general approach, DNA presents an advantageous polymeric system to study ring polymer dynamics using bulk rheology, microrheology, and single molecule imaging methods. In recent years, single molecule imaging has been used to study the center-of-mass diffusion of ring DNA and linear DNA in dilute solutions [53] and in concentrated solutions [29,235]. Importantly, these experiments began to probe the effect of polymer chain topology on the long-time diffusion dynamics of single chains. In dilute solution, it was observed that ring polymers generally follow a power-law scaling of the diffusion constant with molecular weight similar to linear DNA, where D ∼ L^(−0.589) [53].
In entangled solutions, a series of single molecule experiments was performed by varying the tracer chain topology (linear or circular) in a background solution of entangled, unlabeled polymer (linear or circular) [29,235]. The general trend was that the diffusion of circular chains in a circular background (C-C) was observed to be the fastest of the four topological combinations. For a more lengthy discussion of single molecule diffusion experiments, I refer the reader to a recent review article on the topic [12]. Moving beyond near-equilibrium polymer diffusion, recent work has focused on the dynamics of circular DNA in dilute solution extensional flow [239,240]. First, the longest relaxation time τ of single polymers was measured as a function of molecular weight for 25, 45, and 114.8 kbp circular DNA [239]. It was found that ring polymers relax faster than linear chains of the same molecular weight, which can be understood from the differences in the mode structure. Ring boundary conditions do not permit the lowest mode that exists in linear chain Rouse motion; instead, the lowest ring mode has half the wavelength λ of the lowest linear mode, such that λ_1,linear = 2 λ_1,ring [240]. Therefore, the lowest ring mode relaxes more quickly, in principle by a factor of 4 for the free-draining polymer case, because Rouse mode relaxation times scale as the square of the mode wavelength, τ_1 ∼ λ_1^2 [3]. In particular, it was found that τ_linear/τ_ring ≈ 2 from single molecule experiments [239] and τ_linear/τ_ring ≈ 4.0 from free-draining BD simulations [240]. When HI and EV are included in the BD simulations, it is observed that τ_linear/τ_ring ≈ 1.1 [240], which is consistent with prior work using lattice Boltzmann simulations [241]. The power-law scaling of the longest relaxation time τ as a function of molecular weight was also considered for both ring and linear polymers. Single molecule experiments revealed that the longest relaxation time of ring polymers scaled as τ_ring ∼ L^(1.58±0.06) over the range of 25, 45, and 114.8 kbp [239,240]. Moreover, complementary Brownian dynamics simulations showed that τ_linear ∼ L^(2.02±0.15) and τ_ring ∼ L^(1.97±0.02) for the free-draining case, whereas τ_linear ∼ L^(1.53±0.05) and τ_ring ∼ L^(1.56±0.04) for the HI case. In other words, BD simulations suggest that linear and ring polymers exhibit similar power-law scalings with molecular weight for the free-draining and HI-only cases. However, BD simulations with HI and EV showed that τ_linear ∼ L^(1.93±0.09) and τ_ring ∼ L^(1.65±0.04), which suggests that the inclusion of excluded volume interactions fundamentally changes the nature of chain relaxation for ring polymers. However, it should be noted that excluded volume interactions were included using a Lennard-Jones potential, and deviations from the expected relation of τ ∼ N^(1.8) for the case of linear polymers could arise due to the remaining attractive portion of the L-J pair potential or due to errors in the high-N data points, which require long time averages [240]. In any event, results from single molecule experiments on ring DNA relaxation are consistent with results from BD simulations with HI and EV to within the error. From this perspective, single molecule experiments appear to provide an ideal method for probing dynamic behavior at the transition between physical regimes, for example as a function of polymer concentration or molecular weight, the latter being tied to the effects of intramolecular HI in dilute and semi-dilute solutions and to polymer conformational hysteresis [127,133,134].
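Scaling exponents such as τ_ring ∼ L^1.58 above are typically extracted from a log-log fit across molecular weights. The sketch below shows that procedure using the three contour lengths studied and hypothetical placeholder relaxation times (the measured values are in the cited works):

```python
import numpy as np

# Molecular weights studied (kbp) and hypothetical relaxation times (seconds).
L_kbp = np.array([25.0, 45.0, 114.8])
tau_s = np.array([0.21, 0.55, 2.4])  # placeholder values, not measured data

# Power-law fit tau ~ L^x via linear regression in log-log coordinates.
x, log_prefactor = np.polyfit(np.log(L_kbp), np.log(tau_s), 1)
print(f"fitted scaling exponent x = {x:.2f}")
```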
Numerous additional examples of dynamic heterogeneity in polymer chain dynamics in flow can be cited, ranging from polymer stretching in porous media [208] to chain collisions with single microfabricated posts [141]. Taken together, these results showcase the importance of distributions in molecular behavior and of molecular sub-populations in determining solution properties. In addition to revealing the importance of dynamic heterogeneity in polymer dynamics, single molecule methods are being used to directly observe the dynamics of topologically complex polymers. In recent work, the dynamics of comb polymers were observed at the single molecule level for the first time [12], with results showing that polymer chain topology (branch density, branch molecular weight, and position of branch points) directly determines polymer relaxation times following cessation of flow. These experiments are currently being extended to non-dilute solutions, which will be essential for comparison with molecular constitutive equations for comb polymer architectures that have so far been compared only to bulk rheological experiments [243]. Moreover, recent single molecule experiments on bottlebrush polymers have revealed the importance of an internal scale-dependent tension that impacts chain elasticity [231], which fundamentally changes the force-extension behavior away from that of linear unbranched polymers. This work follows single molecule studies probing the role of excluded volume interactions in generating a non-linear low-force elasticity for linear polymers [38], which subsequently inspired the development of several new force-extension relations for polymer chains that depend on solvent quality [63,67]. To this end, single molecule experiments have directly informed on the elasticity of single polymers, information that can be used in coarse-grained simulations of polymer stretching in flow. In the realm of ring polymers, single molecule studies have revealed an intriguing and previously unexpected 'ring-opening' chain conformation in dilute solution extensional flows that can be attributed to intramolecular HI [239,240]. Despite recent progress, however, single molecule studies have only scratched the surface in addressing the broad range of polymer chemistries, topologies, solution concentrations, and non-equilibrium processing conditions for complex materials. Indeed, much work remains to be performed, and the coming years promise to yield exciting new forays into the dynamics of increasingly complex polymeric systems using single molecule techniques. Even in the realm of dilute solution dynamics, several questions remain unanswered. For example, the modal structure of single polymers is not yet fully resolved from an experimental perspective. Early single molecule studies on partially stretched DNA showed that the motion of the DNA polymer chain backbone could be decomposed into a set of normal modes [244]; however, these results suggest that hydrodynamic interactions do not play an appreciable role in extended chain dynamics for DNA molecules of the size of λ-DNA (48.5 kbp). Nevertheless, for increasingly flexible polymer chains with dominant intramolecular HI, we expect non-linear coupling interactions to invalidate the linearized approximations for ideal polymer chains [40]. Repeating these experiments on flexible polymer chains such as single stranded DNA [145] may yield different findings.
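For readers unfamiliar with the normal-mode analysis referenced above, the sketch below shows one common way to project a discretized chain conformation onto Rouse-like cosine modes; it is a generic illustration using a randomly generated conformation, not the specific analysis of Ref. [244].

```python
import numpy as np

def rouse_mode_amplitudes(positions, n_modes=5):
    """Project bead positions (N x d array) onto discrete Rouse (cosine) modes:
    X_p = (1/N) * sum_n r_n * cos(p*pi*(n + 1/2)/N), for p = 1..n_modes."""
    N = positions.shape[0]
    n = np.arange(N)
    modes = []
    for p in range(1, n_modes + 1):
        weights = np.cos(p * np.pi * (n + 0.5) / N)
        modes.append(positions.T @ weights / N)  # one d-dimensional vector per mode
    return np.array(modes)

# Generic illustration: a random-walk chain of 100 beads in 3D.
rng = np.random.default_rng(0)
chain = np.cumsum(rng.normal(size=(100, 3)), axis=0)
print(rouse_mode_amplitudes(chain).shape)  # (5, 3): amplitude vector per mode
```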
The field of molecular rheology would benefit from efforts to combine measurements of bulk stress with high-resolution molecular scale imaging. For example, simultaneous measurement of stress and viscosity, coupled with direct imaging of single polymer chain dynamics, would yield invaluable information regarding how molecular-scale interactions give rise to macroscopic material properties. Indeed, recent work has begun to combine shear rheometry with direct single molecule imaging, for example by mounting a shear rheometer with a transparent lower surface onto an inverted fluorescence microscope [212], thereby enabling simultaneous measurements of stress and non-equilibrium polymer conformations in flow. Moreover, increasingly creative experimental setups are enabling direct imaging of single polymer dynamics in more complex flow fields, such as polymer chains spooling around rotating nanowires, as recently reported by Leslie and coworkers in 2017 [245]. In other cases, single molecule techniques have inspired new methods in microrheology. In 2017, particle tracking in viscoelastic solutions was extended to extensional flow [246], which enabled determination of extensional viscosity using microfluidics. This approach essentially amounts to passive non-linear microrheology, enabled by precise methods in particle trapping [128], and represents a new direction in the field of microrheology.
2017-12-10T16:19:22.000Z
2017-12-10T00:00:00.000
{ "year": 2017, "sha1": "182d210e29126cf1be2cb4ab9c6326d5bbde27cf", "oa_license": null, "oa_url": "https://sor.scitation.org/doi/pdf/10.1122/1.5013246", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "182d210e29126cf1be2cb4ab9c6326d5bbde27cf", "s2fieldsofstudy": [ "Materials Science", "Physics" ], "extfieldsofstudy": [ "Physics" ] }
251646978
pes2o/s2orc
v3-fos-license
Elective surgeries during and after the COVID-19 pandemic: Case burden and physician shortage concerns

The COVID-19 pandemic had a significant impact on several aspects of global healthcare systems, particularly surgical services. New guidelines, resource scarcity, and an ever-increasing demand for care have posed challenges to healthcare professionals, resulting in the cancellation of many surgeries, with short- and long-term consequences for surgical care and patient outcomes. As the pandemic subsides and the healthcare system attempts to reestablish a sense of normalcy, surgical recommendations and advisories will shift. These changes, combined with a growing case backlog (postponed surgeries + regularly scheduled surgeries) and a physician shortage, can have serious consequences for physician health and, as a result, for surgical care. Several initiatives are already being implemented by governments to ensure a smooth transition as surgeries resume. Newer and more efficient steps aimed at providing adequate surgical care while preventing physician burnout, on the other hand, necessitate a collaborative effort from governments, national medical boards, institutions, and healthcare professionals. This perspective aims to highlight alterations in surgical recommendations over the course of the pandemic and how these changes continue to influence surgical care and patient outcomes as the pandemic begins to soften its grip.

Introduction

Described as a pandemic by the World Health Organization in March 2020, COVID-19 has inadvertently put enormous strain on the healthcare system, affecting both patient care and physicians, especially in countries with vulnerable health systems [1,2]. With a significant increase in demand for medical help amidst high costs for protective equipment and shortages of supplies, the pandemic has starkly highlighted the socioeconomic and ergonomic vulnerabilities of the healthcare industry [2]. In response, several public health policies were formulated, and governments issued numerous recommendations to mitigate spread [3]. Recommendations encompassed various aspects of healthcare, including prevention through use of personal protective equipment (PPE, such as KN95 masks, goggles, and gowns), social distancing, contact tracing, and regular testing for healthcare workers; management guidelines (supportive care); and investment and resource allocation in healthcare infrastructure [2,3]. Like physicians in other medical specialties, surgeons were required to modify their expectations and responsibilities in order to ensure resource redistribution and meet care demands. The prioritization of specific surgeries, as well as the need for immediate but appropriate guidelines, became critical. While most surgeries are necessary, interpreting the meaning of "elective surgeries" and ensuring patient safety was no easy task. Governments issued recommendations on elective procedures, but with new state orders and societal recommendations, surgeons were left with little to no guidance, resulting in a general decline in physician and patient well-being [4]. During the early stages of the pandemic, the American College of Surgeons (ACS) advised postponing non-urgent surgeries. Surgeries were classified into different tiers based on their urgency, ranging from tier 1a surgeries, such as carpal tunnel release, to tier 3b high-acuity surgeries [5].
Millions of procedures were canceled, resulting in a variety of potential long-term and short-term effects on patient care, as indicated in Fig. 1. The long-term effects included the risk of uncertain loss of function and adverse prognosis as a collateral effect of the pandemic [6], while short-term effects included deterioration in patients' conditions, increased disability, and reduced ability to work [7]. Sims et al. reported that the cancellation of elective surgeries had a more severe impact on women and Black patients, highlighting the importance of addressing healthcare disparities to ensure equitable care [8]. Amidst efforts to maximize patient care, physician wellness was perhaps another pivotal aspect brought to light during the pandemic, as it influences productivity and care provision. According to a study conducted by Farr et al., primary care and surgery residents reported a significant reduction in dating frequency and socialization due to the pandemic [9]. It is clear that the pandemic had a significant impact on almost every aspect of healthcare delivery and surgical care. As the world adjusts to the current pandemic in order to re-establish a new sense of normalcy, it is critical to address the pandemic's impact on surgical care. This viewpoint emphasizes the pandemic's impact on surgical recommendations, with a focus on surgical care and physician wellness.

The general impacts of COVID-19 on elective surgeries

The COVIDSurg Collaborative study estimated that during a 12-week period of peak COVID-19 disruption, 28.4 million surgeries would be canceled or postponed worldwide [7]. However, this estimate could be far from reality, as the pandemic has persisted for more than two years since the study. The best estimate of the global 12-week cancellation rate for elective surgeries was 72.3%, which is consistent with the finding of another systematic review in which 74.4% of the articles recommended postponing all, or at least selected, elective surgeries [7,10]. A study predicted that the number of canceled surgical procedures in England and Wales during the COVID-19 pandemic would be at least 2.4 million by the end of 2021, representing more than 6 months of normal surgical activity [11]. Meanwhile, in Brazil, the elective surgical backlog had surpassed 900,000 cases by December 2020 [12]. According to the COVIDSurg Collaborative study, 37.7% of cancer surgeries were affected [7]. Other than oncological surgeries, other specialties have also been affected. In the United States, up to 71.8% of live donor kidney transplants were completely suspended, as were over two-thirds of live donor liver transplants [13]. More than a 50% drop in cardiac surgical volume was observed at the 2 busiest cardiac surgical programs in Maryland after full restrictions were implemented by the ACS in response to the pandemic [14]. Another study projected a cumulative backlog of more than 1 million joint and spinal surgical cases in the United States by May 2022 [15]. In urology, there was an 89.6% reduction in elective surgeries in Brazil during April 2020 [16]. A multidisciplinary oncologic review of time to surgery showed that delaying surgery for more than one month negatively impacts survival in breast cancer, T1 pancreatic cancer, Stage I melanoma, ovarian cancer, and pediatric osteosarcoma [17]. Decreased survival was also associated with surgical delays of more than 3 months in patients with hepatocellular cancer and of 40 days in colorectal cancer [18].
A 3-month delay to surgery across all stage 1-3 cancers has been projected to cause more than 4,700 attributable deaths per year in England, while a 6-month delay could cause more than 10,000 attributable deaths per year [19]. A French study observed a mean postponement of 43 days for scheduled pancreatic adenocarcinoma surgeries, with a major switch from upfront surgery to waiting for neoadjuvant chemotherapy in patients with resectable disease [20]. More frequent discoveries of metastases during surgery (13% compared to 2.2% before the pandemic) were also reported [19]. Delaying elective surgeries even for benign conditions can be risky, with a significant impact on patients' health [20]. In a Spanish study of pending cases for elective invasive cardiac procedures during the pandemic, mortality at 45 days due to cancellation was 1.7%, whereas 8.3% of the patients had to undergo an urgent procedure due to clinical destabilization [21]. Besides, surgical delays carry the risk of progression to more advanced disease, which often requires more intense and costly treatment [22]. Orthopedic surgeries accounted for one of the largest proportions of delayed elective surgeries. One study found that 1 in 4 patients reported a substantial physical and/or mental impact due to the cancellation of elective total joint arthroplasty during the pandemic [23]. Other than the discomfort and inconvenience of rescheduling surgery, there is a risk of muscle wasting due to immobility, decreased quality of life, and depression with increased susceptibility to substance use disorders [24,25]. Psychological burden, including severe restrictions in private life, has been reported among patients who had to postpone sexual or reproductive health surgeries [26]. As the pandemic continues, the consequences reported are likely the tip of the iceberg; more research is needed to uncover the true impact, especially in low- and middle-income countries.

Expected changes in surgery recommendations post-pandemic and their effect on patient caseload

As most elective surgeries were postponed during the COVID-19 pandemic, the healthcare system must prepare for a notable demand for surgical care as the pandemic subsides. According to the COVIDSurg Collaborative study, it would take a median of 45 weeks to clear the backlog of surgeries accumulated during 12 weeks of peak pandemic disruption if countries increased their normal surgical volume by 20% after the pandemic [9]. At the 2 busiest cardiac surgical programs in Maryland, clearing the surgical backlog would require a monthly operating volume of 216%-263% of baseline, or 1-8 months depending on post-pandemic operational capacity [14]. To clear the backlog of delayed total knee arthroplasty surgeries, it would take the US health system 16 months [24]. Financially, one study found that the cost of clearing the post-pandemic waiting list (more than 2.3 million overdue or canceled surgical procedures) in the NHS in England is €5.3 (3.1-8.0) billion, excluding the additional costs of delivering surgical services under strict infection control procedures (including PPE, preoperative screening, and extra bed-days in hospital), which may exceed €500 million [2,6]. This could lead to catastrophic expenditure, particularly in low- and middle-income countries, further increasing the financial burden of surgery.
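A rough back-of-the-envelope check on the 45-week figure (a sketch with simplifying assumptions, not the COVIDSurg model): if elective volume is cancelled at the quoted 72.3% rate for 12 weeks and then cleared using an extra 20% of normal weekly capacity, the implied clearance time is about 43 weeks, broadly consistent with the reported median.

```python
# Simple backlog-clearance estimate (all quantities in units of one normal week
# of surgical volume). Assumptions: 72.3% of elective volume cancelled for 12
# weeks, then cleared with capacity equal to 120% of the normal volume.
cancellation_rate = 0.723
disruption_weeks = 12
extra_capacity = 0.20  # surplus capacity available for the backlog, per week

backlog = cancellation_rate * disruption_weeks   # ~8.7 normal weeks of work
weeks_to_clear = backlog / extra_capacity        # ~43 weeks
print(round(weeks_to_clear, 1))
```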
As the COVID-19 pandemic slowly subsides, there is a need for new evidence-based guidelines that provide cost-effective solutions to prevent disease transmission and cross-infection without excessively increasing treatment costs [27]. Multiple organizations, such as the ACS, the American Society of Anesthesiologists (ASA), the Association of periOperative Registered Nurses (AORN), the American Hospital Association (AHA), and the Royal College of Surgeons, have provided guidelines for resuming elective surgeries [28,29]. Recommendations include that institutions account for the possibility of repeat waves of SARS-CoV-2 infection when planning post-pandemic surgical recovery, in order to identify strategies to safely resume elective surgeries and gradually increase case volume. Besides, a prioritization strategy with a standardized scoring system should be established to help with the re-ordering of previously canceled and postponed elective surgeries, as well as prioritization across specialties (cancer, organ transplants, cardiac, trauma) [28][29][30][31]. In addition, institutions should ensure an appropriate number of intensive care unit (ICU) and non-ICU beds, personal protective equipment (PPE), medical-surgical supplies, and trained staff, sufficient for the number of elective surgeries to be performed without compromising the standard of care [28,32]. COVID-19 testing policies are likely to be implemented to protect staff and patient safety [28]. As the COVID-19 pandemic hit, telemedicine emerged quickly because it allows physicians to provide care while maintaining safe distancing. Although there are still issues to address, telemedicine is likely to play a role in the preoperative and postoperative phases of care beyond the pandemic [33].

Impact of case burden on patient care and physician well-being

Physician well-being is the cornerstone of every well-functioning health system because it improves the quality of patient care, increases patient satisfaction, and decreases medical errors. During this global pandemic, healthcare workers have been under a huge workload. This overwhelming burden of cases could lead to caregiver burnout. The burnout syndrome, involving mental exhaustion, loss of motivation, depersonalization or cynicism, and reduced professional efficiency, has been empirically described and shown to be applicable to physicians [34,35]. The major causes of burnout are long work hours, sleep deprivation, fatigue, exhaustion, and the risk of infection [36]. Furthermore, unique pandemic-related pressures further deteriorate physician well-being [6,31]. A systematic review and meta-analysis of 13 studies of mental health during the COVID-19 pandemic published up to April 17, 2020, of which 12 were from China and 1 from Singapore, reported a pooled prevalence of 23.2% for anxiety, 22.8% for depression, and 38.9% for insomnia [37]. Another study on the mental health of healthcare professionals in China found intense psychological experiences, traumatization, and various mental health disorders among healthcare workers during the pandemic [38]. Because of these issues, many physicians have expressed a desire to quit medicine [39], which could negatively impact the quality of care provision amid a prevailing physician shortage and an increased case burden. This could have serious consequences for patient care and could strain the healthcare system further [40].
The issue of physician shortage, with special emphasis on surgical specialties

Surgical residents, fellows, and early-career surgeons face unique challenges during this global pandemic. During the 2003 SARS outbreak in southeast Asia, residents reported that their education was compromised as teaching sessions, grand rounds, and elective surgeries were canceled [41]. Similar policies have been adopted in this pandemic, reducing resident-educator contact, with resident didactics such as grand rounds either canceled or made virtual [42]. Clinical volume is also affected, as most elective surgeries and outpatient clinics have been postponed. Moral distress is a condition in which the individual knows what is morally right but is unable to act accordingly because of institutional or other constraints [43]. Residents are prone to moral distress because they are often responsible for implementing care plans without having the authority to develop them [44,45]. The pandemic amplifies a number of these challenges [41]. Studies from various humanitarian organizations have reported that moral distress increases dropout rates and sick leave among disaster responders; this trend is also seen in the case of medical residents. Physician shortage has become rampant worldwide, including in developed countries [46]. The US is already facing a growing shortage of physicians, and COVID-19 has exacerbated this. It is estimated that from 2019 to 2034 there will be a shortage of 21,000 to 77,100 specialty physicians [47]. The NRMP data show 39,205 residency spots, of which only 5,538 are surgical spots (categorical + preliminary), highly disproportionate to the expected healthcare demand [48]. To address the shortage, countries have implemented independent approaches. The Italian government allowed newly graduated medical doctors and final-year residents to work in the COVID-19 workforce to face the shortage of doctors [49]. In the US, several medical schools allowed final-year students to graduate early or return to the hospital [50]. Various other institutions have shifted surgical specialists (surgeons, anesthesiologists, and dermatologists) to work in COVID-19 units [47].

Efforts and recommendations

As we approach the third year of the pandemic, the world has begun the process of returning to a more familiar semblance of life as it was prior to COVID-19. With the pandemic softening its grip on various industries, elective surgeries need not be delayed. It is important to note that elective surgeries, as per the American College of Surgeons, are "essential surgeries" [8,51,52]. Many nonemergent surgeries, such as those for cancer care, may have dire consequences if delayed considerably. One paper predicted a rise in deaths from colon cancer by nearly 10,000 due to a delay of over 4 months [53]. Additionally, any delay of an essential surgery poses an increased risk of acute exacerbations, infections, and surgical complications. A Google Trends analysis sought to identify shifting public perspectives on the desire to pursue elective surgeries. While search volume has yet to return to pre-pandemic levels, the investigators' analysis indicates a steady rise in public interest that foreshadows an increased demand for previously deferred care [54]. The question therefore becomes how to facilitate this incoming demand. Firstly, Sastre et al. (Jan 2022) published a study suggesting that elective surgeries could be safely resumed without concern for increased COVID-19 risk, even in high-incidence regions [55].
One option is to develop screening tools that stratify patients on the waiting list by priority according to health status [56]. Another study showed that strict adherence to protocols allowed elective surgeries to be completed with fewer complications, readmissions, and deaths compared with 2019 [57]. Amutharasan et al. (2022) emphasized the importance of self-isolation prior to surgery, the designation of a surgical team member for COVID-19 swabs, and the reinstatement of in-person pre-operative assessments to help build patient trust and guide care [58]. Based on these findings, our recommendation would begin by urging the resumption of elective surgeries. Considering that most surgical services would have significant waiting lists, these groups should perform a priority stratification of their waiting list based on risk factors such as the urgency of the procedure, patient health status, and duration of delay since the initial surgical consult. As mentioned earlier, studies have indicated the safety of surgeries despite COVID-19, with no documented increase in the risk of postoperative COVID-19 infection. Still, we would suggest appointing a few members of the patient care team (e.g., nurses, scrub technicians, office administrators) to be responsible for performing COVID-19 nasal swabs on patients before the operation, confirming negative results prior to arrival, and ensuring strict adherence to sanitation protocols pre- and post-operation. Finally, we suggest providing pamphlets or information sheets informing patients of safe post-surgery practices to avoid COVID-19 infection (e.g., isolation, continued mask use, sanitation). Further research and policy on the above recommendations will be vital to meet surgical care objectives while avoiding physician burnout. Conclusion and outlook The pandemic has been detrimental to surgical care. The extent of the impact becomes clearer daily, as increases in costs and resource demands from delayed surgical procedures exist alongside normally scheduled procedures. Lengthy waiting lists will persist, but stratification of patients and procedures may improve care provision. Furthermore, the pandemic may encourage research on health policies and guidelines to cope with increasing post-pandemic surgical demands. Author contribution Contribution to the work's conception and design: all authors, under the supervision of Aashna Mehta, Wireko Andrew Awuah, and Mohammad Mehedi Hasan. All authors worked together to draft the work and revise it critically, with the help of Aashna Mehta, Wireko Andrew Awuah, and Mohammad Mehedi Hasan. The final version of the manuscript was read and approved by all of the authors. Registration of research studies 1. Name of the registry: NA 2. Unique identifying number or registration ID: NA 3. Hyperlink to your specific registration (must be publicly accessible and will be checked): NA Consent NA. Data availability statement No data available. Conflicts of interest NA.
2022-08-19T13:10:34.513Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "24d60f9ec1477ba5e3671d0bea8f25789d8ed762", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.amsu.2022.104395", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a56b5a6211ec45b4ecdf5d0a8159d3d51007056a", "s2fieldsofstudy": [ "Medicine", "Political Science" ], "extfieldsofstudy": [ "Medicine" ] }
225011129
pes2o/s2orc
v3-fos-license
Does the content of financial literacy education resources vary based on who made or paid for them? In the decade since the global financial crisis, an increasing number of jurisdictions have added mandatory financial literacy education to school curricula. Governments recognize that this increases the burden on teachers, who may also lack the confidence to teach financial literacy. One response is to encourage the use of resources produced or sponsored by the financial services industry. The concern is that these resources may promote the industry’s interest in maximizing profits and minimizing regulation over students’ interest in becoming empowered financial consumers. As a first step in investigating this concern, we compared resources from the Canadian Financial Literacy Database produced or sponsored by the financial services industry with those produced by government, non-profit organizations and individuals. We focused on online resources intended for use by elementary teachers and students to determine whether the key themes and messages conveyed vary based on who made or paid for the resource. We found that key themes are consistent across resources, regardless of industry affiliation, but that resources produced or sponsored by the financial services industry are more likely to exhibit a moralistic tone. Introduction The decade since the global financial crisis has seen an increase in the number of mandated financial literacy curricula (Batty et al., 2015). The addition of financial literacy education to mandatory school curricula increases the burden on teachers, who frequently lack the training, background, and confidence to teach in this area (Henning and Lucey, 2017). One often proposed response to alleviate this burden is to leverage the resources of financial services companies, including banks, credit unions, and payment networks (e.g., VISA) to provide financial literacy education resources (Batty et al., 2015; Organization for Economic Cooperation and Development [OECD], 2013; Task Force on Financial Literacy, 2010). The financial services industry is answering this call and producing or sponsoring financial literacy education materials intended for use by parents and teachers (McCormick, 2009). The concern raised by using resources produced or sponsored by the financial services industry is that even when they are authored by experts or teachers, they will promote the industry's interests in maximizing profits and minimizing regulation over students' interests in becoming empowered financial consumers (Collins et al., 2017). Determining whether this conflict of interest should preclude educators from relying on such resources is difficult. One goal of financial literacy education is to encourage the use of asset-building financial products, such as savings accounts, which also helps to boost industry profits (Cole et al., 2013). Another goal is to help consumers choose from the increasingly wide array of complex financial products available (Financial Consumer Agency of Canada [FCAC], 2015; Ontario Ministry of Education Working Group on Financial Literacy, 2010), but focusing on educating consumers fails to hold the industry accountable for creating such a complex market in the first place. As a first step in investigating this concern, our study examines whether the content of financial literacy education resources varies based on who makes or pays for them. 
We compared three categories of resources: resources produced by the financial services industry; resources sponsored by the financial services industry; and resources neither produced nor sponsored by the financial services industry. To our knowledge, this study is the first to compare the content of financial literacy education resources based on the producer or sponsor of the resource. We focused on online resources aimed at elementary students and teachers. We selected online resources because of their predominance as a source of informal professional development for teachers (Beach, 2017). We focused on resources aimed at the elementary level due to the consensus in the literature that financial literacy education must start early if it is to affect individuals' financial behavior in the long term (Batty et al., 2015;FCAC, 2015;John, 1999;McCormick, 2009). The scope of the study is limited to resources aimed at elementary-school-aged children and elementary school teachers found in the Canadian Financial Literacy Database ("Database"). In their review of financial education programs aimed at children and adolescents, Amagir et al. (2018) found that the concepts covered in these programs do not vary by country, making our findings applicable and relevant outside of Canada. The online Database is curated and hosted by the Financial Consumer Agency of Canada, the federal agency tasked with financial consumer protection with respect to federally-regulated financial institutions. The Database was created on the recommendation of the federal Task Force on Financial Literacy to serve, among other purposes, as "a hub for Canadian teachers" providing "high-quality, unbiased information from a range of expert sources" (2010: 8, 62). Although the Database Terms of Use state that resources will not be included if they "promote the sale of a particular product or service or favor a particular product or service over others" (FCAC, 2017), this does not necessarily preclude materials that promote the interests of the financial services industry generally. To compare resources currently included in the Database by producer or sponsor, we conducted a document analysis, combining elements of content analysis and thematic analysis (Bowen, 2009). We found that key themes (e.g., spending habits, money gives choice) were consistent across resources regardless of industry affiliation. One difference we found is that resources produced or sponsored by the financial services industry are more likely to exhibit a moralistic tone, by either explicitly or implicitly judging certain financial behaviors as right or wrong, and to place an even greater emphasis on individual responsibility for one's own financial circumstances than resources produced by individuals, government, or non-profit organizations. This lends some support to the theory that these resources prioritize the industry's interest in shifting the regulatory burden of ensuring the safety of financial products from the industry to the consumer (Williams, 2007). The rest of the article is organized as follows. We first provide an overview of the relevant literature. We then describe our methodology, including the steps involved in the analysis. This is followed by a description and discussion of our findings. We conclude with study limitations and future directions. 
Literature review The definition of financial literacy used by the Canadian government is "having the knowledge, skills and confidence to make responsible financial decisions" (Task Force on Financial Literacy, 2010: 10). Other governments (Exec. Order No. 13646, 2013) and international organizations (OECD, 2012) use similar definitions. This definition combines the traditional understanding of financial literacy as knowledge of financial concepts with "financial capability" or "the skills to apply this knowledge" (Amagir et al., 2018: 57). The Ministry of Education of the Canadian province of Ontario also uses a similar definition, but adds that financial literacy education should also encourage students to develop "a compassionate awareness of the world around them" (Ontario Ministry of Education, 2016: 3). Traditionally, the delivery of financial literacy education curricula was left to non-profit or charitable organizations. In the wake of the global financial crisis, however, governments identified financial literacy as a necessary life skill for individuals and decided that a financially literate population would improve financial stability (Amagir et al., 2018;OECD International Network on Financial Education [OECD/INFE], 2012). Governments' current interest in financial literacy education is unlikely to wane, since increasing citizens' financial literacy is a policy goal that appeals to parties across the political spectrum (Willis, 2008). The evidentiary basis for this policy position is questionable. Evidence on the effectiveness of financial literacy education in not only improving students' knowledge and understanding of financial concepts, but also improving their financial capability is limited and mixed at best (Amagir et al., 2018;Batty et al., 2015;Hamilton et al., 2012;Mandell and Klein, 2009;Miller et al., 2015;Willis, 2009). Blue and Pinto point to the problems and dangers of focusing on financial literacy training over improving employment opportunities in and for marginalized communities (Blue and Pinto, 2017). There appears to be a consensus in the literature, however, that if financial literacy education is to be effective, then it needs to start early, in the elementary school years (Amagir et al., 2018;Batty et al., 2015;FCAC, 2015;John, 1999;McCormick, 2009). Evidence from other areas, such as nutrition, suggests that effects on attitudes of elementary students will carry through to adulthood (Batty et al., 2015). Furthermore, children are faced with having to make financial choices, even from a young age (McCormick, 2009). The challenge is that elementary school teachers often report feeling unqualified to teach financial literacy (Henning and Lucey, 2017;Ontario Ministry of Education Working Group on Financial Literacy, 2010;PACFC, 2013). As governments' attention to financial literacy education in schools has increased, so has their engagement with industry stakeholders in designing financial literacy curricula. At least one Canadian province explained its decision to create a new financial literacy curriculum as a response to "requests from the education sector and industry stakeholders" (Government of Saskatchewan, 2018: 1). Teachers are not necessarily opposed to engaging with industry on this issue: in one study of pre-service teachers and teacher educators, respondents supported collaborating with "members of the local financial services industry" to teach financial literacy in school (Henning and Lucey, 2017: 167-168). 
Some scholars have expressed the concern that financial literacy education serves the interests of the financial services industry, rather than consumers. Financial literacy education driven by the industry may also "reinforce and reify conventional, neoliberal approaches, attitudes, and ideologies toward debt, credit, finance, and money" (Haiven, 2017: 349). It also shifts the focus away from the increasing complexity of the financial marketplace, caused by the industry, which creates the pressing need for financial literacy in the first place (FCAC, 2015;PACFC, 2013;Sawatzki, 2017;Willis, 2009). This gives the industry an incentive to support financial literacy education strategies to try to avoid other forms of financial products regulation. This creates the potential for conflict of interest when financial services companies produce or sponsor financial literacy education resources for use in the classroom, in contrast to, for example, resources found on company websites, where the self-interested motive is clear. To some extent, this conflict is inherent in government financial literacy strategies because these strategies emphasize responsible money management by the individual or household over responsible lending and sales practices by the financial industry (see Waldron, 2011;Williams, 2007). Furthermore, one goal of financial literacy education is take-up of financial services, such as a savings account, which also may increase industry profits (Batty et al., 2015;Cole et al., 2013). So although these companies are in a conflict of interest when they produce or sponsor financial literacy education resources, the fact that industry-affiliated resources emphasize individual responsibility or promote the use of bank accounts does not necessarily mean that they are worse for students than resources with no industry affiliation. This is why a comparison between industry-and non-industry-affiliated resources is necessary. Although, as noted above, this potential conflict of interest has been identified by other authors, we are not aware of any previous empirical studies that seek to examine whether financial literacy education resources provided by the financial services industry further the industry's interests in maximizing profits and avoiding new regulation over students' interest in becoming empowered financial consumers. As a first step in answering this question, we compared the key themes and messages of financial literacy education resources produced or sponsored by the financial services industry with those produced or sponsored by non-industry-affiliated individuals, governments and non-profit organizations to determine whether content varied based on industry affiliation. Our methodology is described in detail in the next section. Research design This study employed qualitative methods and descriptive statistics to examine and compare the key themes and messages in three categories of financial literacy education resources: those produced by a financial services company, those sponsored by a financial services company, and those neither produced nor sponsored by a financial services company. Resources in this third category included those produced by individuals, governments, or non-profit organizations. We used a document analysis: "a systematic procedure for reviewing or evaluating documents", including electronic documents (Bowen, 2009: 27). Through an iterative process of skimming, reading, and interpreting, we combined elements of content analysis and thematic analysis. 
The content analysis offered a process of organizing information from the Database into categories. This was followed by a thematic analysis, a process of careful reading and focused re-reading of the data (Bowen, 2009). As a result of the thematic analysis, key themes and key words were generated from the data (Thomas, 2006). We then coded resources for key themes and recorded the frequency of key themes and key words. Each step is further described below. Context of the study The financial literacy education resources reviewed for this study are those included in the Canadian Financial Literacy Database (https://www.canada.ca/en/financial-consumer-agency/ services/financial-literacy-database.html). The Database was created on the recommendation of the federal Task Force on Financial Literacy and is maintained by the Financial Consumer Agency of Canada ("FCAC") and housed on its website. Individuals or companies with an FCAC account may submit resources for inclusion in the Database. The FCAC in its sole discretion determines which submitted resources will be included based on the following criteria: the resource contributes to the financial literacy of Canadians, is available in one or both official languages, is free or for a "reasonable fee", and "does not promote the sale of a particular product or service or favor a particular product or service over others". Data analysis Identifying appropriate resources. At the time of analysis, the Database included separate categories of resources for "Educators", "Parents", "Students", and "Youth/youth at risk". The scope for this study was narrowed to resources targeting elementary students and teachers. Resources in French or those that required payment to access were omitted. As a result, twelve resources, five targeted at elementary students and seven targeted at elementary school teachers, were included in our analysis (see Table 1). These twelve resources were divided into three categories: resources produced by the financial services industry, resources sponsored by the financial services industry, and resources which are neither produced nor sponsored by the financial services industry (i.e., no industry affiliation). This last category includes resources produced or sponsored by government agencies, professional teachers' organizations, and individuals. Several of the twelve resources contained multiple "units". Units included separate web pages, videos, games, lesson plans, and teacher guides and were identified in order to further examine and code as sub-components of each resource. We included the "Home Page" and "About Us" web pages from each of the twelve resources, as these would likely be initially viewed by site visitors when entering the websites. To maintain our focus on resources included in the Database, external links were excluded. All other units were included for 11 of the resources. For the twelfth resource ("Charly & Max, Get Involved!"), we selected a sample of 8 videos out of a total of 51. Quizzes following the videos were excluded from the dataset. The process of identifying units occurred through ongoing conversations between the research team and resulted in 141 total units. Most of the 141 units were PDF or text documents; the rest were videos and flash games. PDF or text documents were downloaded for coding. Videos were transcribed for coding. To code for key themes in flash games, a written description of the game was produced and then coded. 
To code for key words in flash games, one researcher played the games once and recorded each time a key word appeared in the game. All written documents were uploaded into NVivo (2012), a qualitative data analysis software program. NVivo was chosen for this study because it aids in the organization and management of large datasets. Identifying and coding for key themes. An open coding process was used to identify recurring and relevant ideas expressed in the units included in the scope of the study. Twelve units from the Database were selected for this initial open coding. Two researchers separately reviewed these units and recorded "Plot or Activity", "Key Messages", "Terminology", and "Notes" for each. A third researcher identified fifteen key themes from these notes. Two researchers then coded three units to identify examples of each key theme. Three researchers then revised this list down to twelve key themes and agreed on definitions and examples of each. Two members of the research team coded the units for key themes. These researchers brought two different disciplinary perspectives to this process, law and education. Thirteen of the units (approximately 10%) were randomly selected from the total 141 units. These thirteen units were chosen across the different resources and included a variety of media (text, video, games). Each researcher separately coded these units for the twelve key themes. The researchers then met to compare results. Coding methods were revised so that each resource would be coded for a maximum of three key themes based on the most dominant messages in the unit. The same two researchers then separately coded all 141 units. When the researchers met to compare results again, there was a 50% agreement rate. After discussion, the researchers removed two key themes: "Taking Personal Responsibility" and "Curiosity and Exploration". The researchers decided that these two themes represented pedagogical descriptor-frameworks rather than tangible financial literacy topics. After these two key themes were removed, the two researchers compared disparity in codes and decided that if both identified common themes for a unit, the unit would be coded in the common theme and any theme coded for by only one researcher would be removed. The researchers also decided that if a unit was now empty-coded for one researcher (after removing the disparate theme), the unit would automatically be coded with the other researcher's key themes. This process resulted in an 80% agreement rate between the two researchers. A final discussion among the research team led to the decision to separate the theme "Smart Shopper" into four sub-categories: "Reading Advertisements Critically", "Getting Value for Money", "Contracts", and "Frauds and Scams". This decision was made because each sub-category represented distinct and separate ideas. One researcher revisited the units coded for "Smart Shopper" and re-coded every unit as a sub-category. The final thirteen key themes and their definitions are listed in Table 2. Identifying and coding for key words. One member of the research team identified a list of 68 relevant key words targeting financial literacy terminology. These key words were developed based on the initial notes made on the same sample of twelve units discussed above. We also decided to use the Ontario Financial Literacy Scope and Sequence curriculum document for Grades 4-8 as a source for keywords (Ontario Ministry of Education, 2016). 
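The inter-coder agreement rates reported above for the key theme coding (50% on the first pass, 80% after reconciliation) reduce to a per-unit comparison of the two coders' theme sets. The sketch below is a minimal illustration in Python with hypothetical unit identifiers and codings; because the exact agreement criterion is not spelled out in the text, counting a unit as agreed whenever the two coders share at least one theme is an assumption made for the example only.

```python
# Minimal sketch of a per-unit agreement rate between two coders.
# Each coder assigns up to three key themes per unit; a unit counts as an
# agreement if the two coders share at least one theme (a simplifying
# assumption; the study does not spell out the exact agreement criterion).

coder_a = {
    "unit_01": {"Spending Habits", "Money Gives Choice"},
    "unit_02": {"Banks Are Safe"},
    "unit_03": {"Hard Work Ethic", "Giving Back"},
    "unit_04": {"Understanding Financial Products"},
}
coder_b = {
    "unit_01": {"Spending Habits"},
    "unit_02": {"Getting Value for Money"},
    "unit_03": {"Giving Back"},
    "unit_04": {"Understanding Financial Products", "Good Debt versus Bad Debt"},
}

def agreement_rate(a: dict, b: dict) -> float:
    """Fraction of units on which the two coders share at least one theme."""
    units = a.keys() & b.keys()
    agreed = sum(1 for u in units if a[u] & b[u])
    return agreed / len(units)

print(f"Agreement rate: {agreement_rate(coder_a, coder_b):.0%}")  # 75% for this toy data
```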
The decision to use the Ontario scope and sequence document was based on one research team member's familiarity with it; that member had formerly taught in Ontario's elementary education system. Additionally, Ontario is Canada's most populous province and the province in which the research team's institution is located. A future study could use additional curricular documents from a variety of provinces and states as sources for keywords. For the purposes of our exploratory methods, we deemed this scope and sequence document to be appropriate for offering keywords in addition to those identified based on our initial notes. Exact repetitions of key words were removed (e.g., combining "bank" and "banks", but not "credit" and "credit cards") and words with a common theme or prefix were expanded into a general word (e.g., using "self" to capture "self-awareness" and "self-monitoring"). In the end, the research team identified 36 key words to use to code the Database units. Upon reviewing the key words in relation to the literature, the research team noted three major categories into which the key words could be sorted. These categories included: "financial" (16 key words), "normative" (21 key words), and/or "curriculum" (25 key words). "Financial" key words describe financial products or concepts, such as "credit", "fee(s)", and "interest". "Normative" key words describe or relate to financial or other behaviors that are viewed as socially positive, for instance "charity(ies)", "habit(s)", and "manage*". Key words that were categorized as normative represented opportunities for value judgments on different financial products or choices. Normative key words are especially pertinent to financial literacy education, given the focus on changing behavior. Finally, "curriculum" key words are drawn from the Financial Literacy Resource Guide for the Ontario Curriculum Grades 4-8, and included "negotiate*", "problem solving", and "self*". Words which were categorized as all three included "budget*", "debt(s)", "income(s)", and "save*". The complete list of keywords and how they were categorized can be found in Table 3. One member of the research team coded all 141 units for the 36 key words. Units were coded using the "text search" function in NVivo. Most key word searches included stem variations of the key word (e.g., "borrow" included "borrows", "borrowing" and "borrowed"). Some key words were limited to the exact or very similar form in cases when the research team agreed that adding stem variations would change the meaning of the key word (e.g., "bank" versus "banking"). The researcher was careful to only record results for words that were used appropriately in a financial context (e.g., "interest" was only recorded when it referred to financial interest, and "question" was only recorded when students were encouraged to question an idea or teaching). Key themes As illustrated in Figure 1, the key themes we identified appeared consistently across resources, regardless of industry affiliation, suggesting that content does not vary based on who made or paid for the resource. Both the key themes identified and the consistency are generally in keeping with the dominant, conventional approach to financial literacy, which focuses on individual choice, rather than systemic constraints on those choices (Blue and Pinto, 2017). Eight of the thirteen themes were coded for in all three categories of resources. 
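The cross-category comparison summarized in Figure 1 amounts to tallying, for each resource category, how often each key theme was coded. The Python sketch below illustrates that tally; the category labels and theme names follow the study, but the unit codings and resulting counts are hypothetical placeholders rather than the study's data.

```python
from collections import Counter, defaultdict

# Hypothetical unit codings: (resource category, coded key theme).
# In the study, up to three themes were coded per unit across 141 units.
codings = [
    ("produced by industry", "Spending Habits"),
    ("produced by industry", "Banks Are Safe"),
    ("sponsored by industry", "Spending Habits"),
    ("sponsored by industry", "Giving Back"),
    ("no industry affiliation", "Spending Habits"),
    ("no industry affiliation", "Money Gives Choice"),
    ("no industry affiliation", "Smart Shopper - Frauds and Scams"),
]

theme_counts = defaultdict(Counter)
for category, theme in codings:
    theme_counts[category][theme] += 1

# Report each theme's share of codings within its resource category.
for category, counts in theme_counts.items():
    total = sum(counts.values())
    for theme, n in counts.most_common():
        print(f"{category:25s} {theme:35s} {n} ({n / total:.0%} of codings)")
```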
The top two most frequently coded key themes, "Spending Habits" and "Money Gives Choice", were the same in all three categories. "Getting Value for Money" was the third-most frequently coded theme in resources sponsored by the financial services industry and neither produced nor sponsored by the financial services industry. Although "Understanding Financial Products" was the third-most frequently coded theme in resources produced by the industry category, it is also the fourth most frequently coded theme in both other categories. In terms of differences of theme concentration among the categories, resources produced by the financial services industry had the highest percentage of the themes "Banks Are Safe", "Hard Work Ethic", "Smart Shopper -Advertising", "Understanding Financial Products", and "Delayed Gratification". The high percentage of "Banks Are Safe" is the result of one of the four resources in this category being an in-school deposit account program, which had three units coded for this theme. Resources sponsored by the financial services industry had the highest percentages of "Getting Rich Through Entrepreneurship" and "Giving Back". Resources neither produced nor sponsored by the financial services industry had the highest percentages of the themes "Good Debt versus Bad Debt", "Smart Shopper -Getting Value for Money" and "Spending Habits", as well as the only instances of "Smart Shopper -Contracts" and "Smart Shopper -Frauds and Scams". A closer examination revealed some differences in how these key themes were reflected among the different categories. Although non-industry affiliated resources replicated the dominant, conventional themes of financial literacy education, the discussion of these themes makes a bit more space for students to develop "a compassionate awareness" of how circumstances beyond an individual's control might affect their ability to save and budget. For instance, resources produced or sponsored by the financial services industry tended to be more prescriptive than resources neither produced nor sponsored by the industry, which took a more inquiry-based approach. In one unit of a resource produced by the financial services industry, the following statement was coded for the theme "Banks Are Safe": "Maybe an account would be a good idea after all. . .my money would be safer!" (Charly & Max, Get Involved!, "High Security"). Similarly, in a unit of "School Caisse" students are encouraged to complete a fill-in-the-blank activity sheet which then reads "By choosing to deposit my money in the school caisse, I am learning about the world of savings", implying that a deposit account with a financial institution is the only, or at least the best, way to save. By contrast, a unit neither produced nor sponsored by the financial services industry was less prescriptive, with a lesson plan prompting a discussion on the questions "How would you save money; where would you save it; what banks or credit unions are available in your community; why would you save your money in a bank/credit union and not a piggy bank?" (Inspire Financial Learning, "Bank It!"). Units coded for the theme "Spending Habits", which explored the distinction between needs and wants, also reflected this difference in approaches. A unit produced by the financial services industry involved the protagonist purchasing cakes and toys at a store instead of bread and milk, only to realize that she cannot eat a cake sandwich (Charly and Max, Get Involved!, "Priority Sandwich"). 
The distinction between needs and wants is clear and non-negotiable. This approach falsely presumes the same decision-making opportunities are available for all families and students (Blue and Pinto, 2017). In a unit in a resource neither produced nor sponsored by the financial services industry, students are encouraged to categorize needs and wants while creating a "piggy bank" covered in pictures and words that "represent their values and saving goals" (Youth Money Management Presentation, Teacher E-Book). Similarly, in another lesson plan neither produced nor sponsored by the industry, students are first asked to categorize recent family purchases as needs or wants, and then given different scenarios, such as being stranded on a boat or living in different climate environments, and an opportunity to reconsider how their needs and wants would change depending on the situation (Inspire Financial Learning, "Deserted (Needs vs Wants)"). Students are encouraged to consider that their perceptions of needs and wants might differ, and that these might shift depending on their situation. This approach better accounts for the barriers students of varying socio-economic backgrounds might face in trying to build savings or make a budget (Hamilton et al., 2012). Resources produced by the financial services industry also had the highest percentage of "Hard Work Ethic". The flash game Money Metropolis was coded with this theme. The game involves the player trying to "save" enough money for a goal they choose at the beginning of the game -usually fun activities, like going to the zoo, or big purchases, like a pet -by doing odd jobs or chores for which they are paid if they complete the task successfully. There is no other way to make money toward achieving the goal and winning the game other than by working through these jobs or chores. Even if the player chooses the least expensive goal, they will likely have to complete ten rounds of jobs or chores to afford their goal. Similarly, resources sponsored by the financial services industry coded with "Hard Work Ethic" also emphasize a direct and singular connection between success and individual work ethic. In "Getting & Earning Money, Grades 4-6" (Building Futures for Manitoba), students are encouraged to read "The Little Red Hen" and "The Three Little Pigs" in order to identify the characters who are "lazy" and "uncooperative". Students are then directed to identify high-earning professions and discuss how a "strong work ethic" is the primary trait for high-income earners. Meanwhile, in resources neither produced nor sponsored by the financial services industry, units coded for this theme describe the hard work ethic of certain communities or collectives of people in hard times and characterize a hard work ethic as only one of many necessary factors for success. For instance, in M is for Money teaching guides, the author draws on her personal experience as a child of an Italian immigrant family. The author writes: I learned the value of money at a very young age as I watched my parents work hard to make a new life in Canada [. . .] My parents talked to me about how difficult it was to earn a living, and when faced with competing priorities for their hard-earned money, hard decisions had to be made. The author recognizes the hard work ethic of others in her community who contributed to her success, as well as the financial insecurity people can face even when they work hard. 
Industry-affiliated resources were also more likely to exhibit a moralistic tone by either explicitly or implicitly judging certain financial behaviors as right or wrong. For example, "The Game Plan", a single-unit resource produced by an Indigenous non-profit organization and sponsored by a financial services company, was one of three units coded for "Good Debt vs Bad Debt". The comic book's protagonist falls into "bad debt" through reckless credit card spending on personal items, such as lunches out and sports equipment, while another character thrives because she relies on "good debt", such as a loan for expanding her small business. The moral of the story is that individuals who get caught in bad debt cycles do so because they indulge in spending on "wants", rather than by reason of systemic discrimination and colonial structures facing Indigenous individuals and communities in Canada (Blue and Pinto, 2017). This is consistent with the "blame the victim" subtext in financial literacy education materials (Blue and Pinto, 2017;McCormick, 2009;Willis, 2008). The other two units coded for this key theme were neither produced nor sponsored by the financial services industry. One, produced by an individual, had students make a budget for a shopping list so they have enough money for the things they want to buy, and to make a simple loan agreement for a friend who wants to borrow money for lunch (M is for Money, "Teaching Guide for Books 4, 5, and 6"). The other, sponsored by the Ontario Teachers' Federation, the professional association of Ontario's public school teachers, helped students distinguish between building credit and going into debt, but avoided applying moralistic labels, with the stated intention to "equi[p] teachers, students, parents and guardians with unbiased, independent and effective tools and strategies to help them navigate the complex world of finances and also support[t] them in making responsible financial choices" (Inspire Financial Learning, home page). While all three groups had relatively similar percentages of the theme "Money Gives Choice", in three out of five units coded for this key theme in resources produced by the financial services industry, this theme is connected to choice over one's apparel or appearance. In the flash game Peter Pig's Money Counter, players use the money they win playing the game to buy clothing and accessories for Peter Pig. Similarly, in Money Metropolis, players could choose to spend their earnings on buying new clothing, rather than saving it all for their chosen savings goal. As noted above, there is a concern that financial literacy education resources produced by the industry will reinforce "neoliberal approaches, attitudes, and ideologies" about money and spending (Haiven, 2007: 349), specifically consumerism. In non-industry affiliated resources, "Money Gives Choice" was more commonly connected to different ways to treat money, such as saving, spending, donating, or investing it, and how to make that decision. For instance, in "The Money Dilemma" lesson plan, students are encouraged to discuss what to do with a ten-dollar bill they find -whether to spend it or save it, as well as how one earns ten dollars. Although this unit encourages more discussion regarding the uses of money, it does not necessarily challenge currently dominant ideas about money. 
Perhaps surprisingly, resources produced by the financial services industry had the highest percentage of units dealing with "Advertising", which focused on helping students to critically assess advertising techniques, and make choices based on their own best interests, rather than being influenced by advertising or peer pressure. This theme was differentiated from "Getting Value for Money", in which the lessons focused more on the needs of the buyer, and ascertaining the value of different products to the buyer, rather than the intentions of the seller in trying to persuade consumers to buy their products through advertising. That said, resources neither produced nor sponsored by the industry are the only group with units coded with "Contracts" and "Frauds and Scams", both of which teach students about consumer protection. In "Lessons for Life -First Mobile Phone", a lesson plan from Make It Count, students are encouraged to discuss the basic terms of a phone contract as well as the concept of a contract. For instance, students are asked, "What are the different types of mobile phone plans?" and "What does it mean to sign a contract? What promises are you making by signing one? What happens if you don't hold up your end of the bargain?" The lesson helps students to see how they can save more money by choosing a better phone plan, and to understand the basic legal obligations they undertake in signing a contract. In another lesson plan by Make It Count, "Lessons for Life -Frauds and Scams", the only unit in the study coded for "Smart Shopper -Frauds and Scams", students are taught to recognize and avoid different forms of frauds and scams. Students are prompted to research in groups different kinds of frauds and scams, such as online scams, ATM scams, identity theft and investment scams. Key words Similar to our findings with respect to the key themes, key words appeared consistently across the three categories of resources, indicating that the content of financial literacy education resources does not vary based on who made or paid for them. However, differences in the frequency of the three categories of key words -financial, normative and curriculum -supports the contextual analysis of key themes that resources produced or sponsored by the industry are more likely to take a prescriptive approach and/or exhibit a moralistic tone. As shown in Figure 2, the most frequently coded key words across all three categories were "save*" (n = 613, 11% of all words identified), "need*" (limited to the context of needs versus wants; n = 380, 7%), and "budget*" (n = 361, 6%). The frequency of these terms reflects the prevalence of financial literacy topics common across resource categories, namely, how to save, how to distinguish needs from wants, and how to create reasonable budgets. "Save*" was the most frequent key word across all three categories of resources. After "save*", the most frequent key words identified for resources produced by the industry were "savings account(s)" (n = 71, 13% of all words identified in this category) and "goal(s)" (n = 59, 11%), whereas for resources sponsored by the industry they were "credit" (n = 238, 8%) and decision(s) (n = 197, 7%), and for neither produced nor sponsored by the industry they were "bank(s)" (n = 200, 9%) and "budget*" (n = 192, 8%). Among the most frequent key words coded in all three categories, therefore, are words related to the use of financial products (i.e., "savings accounts", "credit" and "banks"). 
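A minimal sketch of the key word tally behind these figures is given below. It approximates NVivo's stem-aware "text search" with regular expressions and then reports each word's share of all identified words, as in the percentages quoted above. The sample text and the three-word key word list are illustrative stand-ins for the 36 key words and 141 units; only the counts-to-percentages arithmetic mirrors the reporting (the grand total of coded instances is not stated in the text, but "save*" at n = 613 and about 11% implies roughly 5,500-5,600 instances).

```python
import re
from collections import Counter

# Key words with regex patterns approximating NVivo's stem-aware text search.
# "bank" is deliberately limited to its exact form, mirroring the study's choice
# not to let stem variations (such as "banking") change a key word's meaning.
KEYWORDS = {
    "save*":   r"\bsav(?:e|es|ed|ing|ings)\b",
    "budget*": r"\bbudget(?:s|ed|ing)?\b",
    "bank(s)": r"\bbanks?\b",
}

sample_unit = (
    "Students learn to budget their allowance, save for a goal, and decide "
    "whether a bank or a piggy bank is the best place to keep their savings."
)

counts = Counter({
    word: len(re.findall(pattern, sample_unit, flags=re.IGNORECASE))
    for word, pattern in KEYWORDS.items()
})

total = sum(counts.values())
for word, n in counts.most_common():
    print(f"{word:8s} n = {n}  ({n / total:.0%} of all identified words)")
# With the real data, "save*" (n = 613) at about 11% implies a grand total of
# roughly 5,500-5,600 coded key word instances across the 141 units.
```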
In resources produced by the industry, nine key words were not identified at all: "bank account(s)", "career(s)", "charity(ies)", "credit", "critical", "debt(s)", "fee(s)", "negotiate*", and "self". The omission of some of these key words, such as "critical", "negotiate*" and "self" may suggest industry-affiliated resources focus less on helping students to develop a critical awareness of financial norms, services and products. The absence of "credit" and "debt(s)" is interesting, suggesting perhaps the industry resources reviewed chose not to discuss more complex money management relationships. The word "financial institution(s)" was identified more than "bank(s)"; this is likely due to the fact that financial institution is the legal term for the regulated entity and so likely reflective of how the industry describes itself in day-to-day corporate communications. The key words "financial institution(s)" and "bank(s)" appeared a combined total of 52 times, and thus together were the fourth-most frequently coded key word. In resources sponsored by the industry, the words "self", "negotiate*", "interest", "job", and "career" appear most frequently as compared to the other resource groups. Furthermore, 80% of entries for the word "credit" were found in resources sponsored by the industry; 75% of entries for "decision(s)" were also found in this resource group. The marked difference in key word results between resources created by the industry and resources sponsored by the industry may reflect the influence from having a non-industry partner involved in the creation of the resources. In resources neither produced nor sponsored by the industry, the words "bank" and "financial institution" appear most frequently compared to the other resource groups. As noted above, one of the difficulties in examining the question of conflicts of interest in financial literacy education resources is that one of the goals of financial literacy education generally is to increase the uptake of certain mainstream financial products, such as savings accounts. In terms of categories of key words, "financial" words appear most frequently in resources neither produced nor sponsored by the financial services industry, whereas the highest concentration of "normative" key words was found in resources produced by the industry. This finding may reflect the differences in tone of messaging between non-industry resources as compared to industry resources noted above with respect to key themes, and suggests that even if the substantive topics are similar across resources, the terms and methods which are employed to discuss these topics is likely to vary based on who made or paid for the resource. Study limitations Our study focuses on the content, or the "what" of financial literacy education resources, and whether this content varies based on who made or paid for them. We do not address here more fundamental challenges to financial literacy education and its role in reinforcing pre-existing economic inequalities. We also do not examine the extent to which these resources have been made out of date by technological innovations and new financial products, such as cryptocurrencies and mobile phone payment and savings apps. With respect to our findings regarding content, one limitation of the study is that only resources aimed at elementary school aged children and their teachers were analyzed. 
Key themes might differ at the secondary or adult education levels, revealing greater differences in content among categories of resources. A second limitation is that we analyzed only resources included in the Canadian Financial Literacy Database. While this provided a wide sample of the types of online resources available to teachers, it is not necessarily a representative sample of resources, and therefore our findings need to be interpreted with caution. Furthermore, online resources are not the only ways that financial services companies contribute to curricula. One area of future research would be to look beyond online resources to content being delivered in classrooms, for example, by guest speakers from the industry (Henderson, 2020). Conclusion Our results indicate that the content of financial literacy education resources does not vary based on who made or paid for them. Frequency and concentration of key themes and key words were consistent across the three categories of resources. "Spending habits" was the most frequent theme across all groups. While "Understanding Financial Products" was coded more frequently in units of resources produced by the industry, it was also frequently coded for in units from the other two groups. "Save" and its variants was the most frequently identified key word in all three groups. While "savings account" was the second most frequently coded-for key word in resources produced by the industry, "banks" was the most frequently coded-for key word in resources neither produced nor sponsored by the industry. Since banks are only one type of financial institution available to financial consumers in most jurisdictions, it seems likely that teachers who want to inform their students of other institutional options when it comes to saving and investing will need to rely on their own financial knowledge. In Canada, while national or regional credit unions, such as Desjardins, have the resources to produce or sponsor financial literacy materials for inclusion in the CFLD, small, local credit unions, which are another savings option, likely do not. Hopefully, resources created or sponsored by government departments or agencies can step in to fill this gap. Financial literacy resources should also recognize and discuss community or collective savings practices, which take place independent of financial institutions, such as saving circles. Again, for now, teachers interested in diversifying their lessons in this way will have to search even further afield for materials. One difference in frequency of key theme was Frauds and Scams. This key theme was coded for only in resources neither produced nor sponsored by the industry. This very low frequency even in this category may be a result of educators thinking this is too advanced for elementary school students, rather than as a result of who made or paid for the resource. Avoiding fraud is a central goal of Canada's financial literacy strategy, however (FCAC, 2015). The omission of materials addressing this theme from industry-affiliated resources places the onus on teachers who rely on these resources to look elsewhere for credible information regarding frauds and scams to bring into their financial literacy lesson planning. In Canada, resources on frauds and scams are available from provincial securities commissions, which are regulatory agencies, but they are generally aimed at adults, rather than children. 
The Government of Ontario has called on the Ontario Securities Commission to collaborate with the Ontario Ministry of Education "to enhance the financial literacy curriculum" (Government of Ontario, 2019: 229). This collaboration may increase the materials available on avoiding frauds and scams in this jurisdiction. Although key themes and words are consistent across categories, a closer analysis of the context in which the key themes were found indicates that resources produced or sponsored by the financial services industry are more likely to take a prescriptive approach and exhibit a moralistic tone than resources neither produced nor sponsored by the industry. This is evidenced in differences in how the key themes are reflected in the units reviewed, as explained in the examples above. Also, "financial" key words were more likely to appear in resources that are neither produced nor sponsored by the financial services industry than the other two categories, whereas "normative" key words were least likely to be coded for in this category. Discussions of financial literacy education in initial teacher education programs as well as teacher professional development should consider the industry affiliation of resources and be alive to the differences in tone. National and sub-national financial literacy strategies in several jurisdictions encourage ministries of education, school boards and teachers to rely on financial literacy education resources produced or sponsored by the financial services industry (OECD, 2013). One concern with this approach is that these resources may be biased in favor of the industry's interests over those of students. This study is a first step to attempt to measure empirically this potential conflict of interest by comparing key themes and key words in industry-affiliated and non-industry affiliated resources. Our results suggest that the content does not vary based on who made or paid for the resource, although industry-affiliated resources are more likely to exhibit a moralistic tone and to place greater emphasis on individual responsibility over systemic economic inequalities, which may not be appropriate, particularly for students whose families' socio-economic circumstances may make it harder for them to engage in the financial behaviors promoted by these resources, such as accumulating savings and avoiding debt. More study of this question is desirable, given the ongoing government emphasis on incorporating financial literacy education into mandatory school curricula. Perhaps more importantly, both the key themes identified and consistency across resources, regardless of who made or paid for them, highlight the lack of diversity in existing financial literacy education materials and the dominance of the conventional approach, with its focus on individual responsibility. School boards that want to take an approach to financial literacy education that better accounts for the socioeconomic and cultural diversity of their students would be advised to develop their own financial literacy education materials. Declaration of conflicting interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The Social Sciences and Humanities Research Council of Canada
2020-10-19T18:06:55.767Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "2bf6212eaec737f42e0b6911bf07244398d1917c", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2047173420961031", "oa_status": "HYBRID", "pdf_src": "Sage", "pdf_hash": "a544d4546f91a6e6fdf30f8d43507bfdf40aaabe", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Business" ] }
7746597
pes2o/s2orc
v3-fos-license
NADPH-diaphorase positive cardiac neurons in the atria of mice. A morphoquantitative study Background The present study was conducted to determine the location, morphology, and distribution of NADPH-diaphorase positive neurons in the cardiac nerve plexus of the atria of mice (ASn). This plexus lies over the muscular layer of the atria, dorsal to the muscle itself, in the connective tissue of the subepicardium. NADPH-diaphorase staining was performed on whole-mount preparations of the mouse atria. For descriptive purposes, all data are presented as means ± SEM. Results The majority of the NADPH-diaphorase positive neurons were observed in the ganglia of the plexus. A few single neurons were also observed. The number of NADPH-d positive neurons was 57 ± 4 (ranging from 39 to 79 neurons). The ganglion neurons were located in 3 distinct groups: (1) in the region situated cranial to the pulmonary veins, (2) caudally to the pulmonary veins, and (3) in the atrial groove. The largest group of neurons was located cranially to the pulmonary veins (66.7%). Three morphological types of NADPH-diaphorase neurons could be distinguished on the basis of their shape: unipolar cells, bipolar cells, and cells with three processes (multipolar cells). The unipolar neurons predominated (78.9%), whereas the multipolar neurons were encountered less frequently (5.3%). The sizes (area of maximal cell profile) of the neurons ranged from about 90 μm² to about 220 μm². Morphometrically, the three types of neurons were similar and there were no significant differences in their sizes. The total number of cardiac neurons (obtained by staining the neurons with the NADH-diaphorase method) was 530 ± 23. Therefore, the NADPH-diaphorase positive neurons of the heart represent 10% of the number of cardiac neurons stained by NADH. Conclusion The obtained data have shown that the NADPH-d positive neurons in the cardiac plexus of the atria of mice are morphologically different, and therefore, it is possible that the function of the neurons may also be different. Background The heart of mammals is innervated by nerve fibers from the vagus and the sympathetic chain. In addition, a group of neurons situated at the dorsal surface of the atria contributes to the innervation of the heart. These neurons play an important role in the regulation of cardiac output during rest, exercise, and cardiovascular disease [1]. Recent evidence suggests that nitric oxide (NO) acts as a neurotransmitter in both the central [2,3] and peripheral nervous system [4][5][6]. Nitric oxide is synthesized by the enzyme nitric oxide synthase (NOS) in a reaction that converts the amino acid arginine to citrulline. It has become apparent that histochemical techniques to demonstrate NADPH-diaphorase enzymatic activity detect the presence of NOS in neurons [7]. Therefore, NADPH-d histochemistry is used to label NOS-containing neurons in the nervous system [8][9][10][11]. Previous studies have demonstrated the presence of NADPH-diaphorase positive neurons in the cardiac ganglia of the guinea-pig [12,13] and rat [7]. All cardiac neurons contained immunoreactivity to choline acetyltransferase, and two smaller subpopulations were immunoreactive to calbindin or NOS [14]. It was also demonstrated that 14.4% of cardiac nerve cell bodies of the mouse heart contain NOS [15]. The method of dye injection has made it possible to establish that the morphology of neurons in autonomic ganglia bears a relation to their electrophysiological and histochemical properties [16][17][18][19]. 
In addition, physiological studies on the NADPH-d positive neurons of the cardiac plexus in mammals have been hampered because nothing is known about the topography of these neurons [20]. Therefore, the present investigation was carried out to examine the location, morphology, distribution, number, and sizes of the NADPH-diaphorase positive cardiac neurons in the atria of the mouse in whole-mount preparations. Our data show the existence of three types of neurons, based on their morphology, and that these cells were located in three topographical groups. NADPH-diaphorase positive cardiac neurons The location of the NADPH-diaphorase positive neurons is shown in the diagram of Fig. 1, in which they are represented by dots. In general, 3 groups of neurons were recognized. The first group was in the region situated cranial to the pulmonary veins, the second group was located caudally to the pulmonary veins, and the third group was located in the interatrial groove. The number of NADPH-diaphorase positive neurons was obtained in laminae from 5 mice. This number ranged from 39 to 79 neurons. The average number of neurons was 57 ± 4. The largest group of NADPH-diaphorase positive neurons was located cranially to the pulmonary veins (66.7%). The neurons located caudally to the pulmonary veins formed 26.3% of the stained neurons. The neurons located in the interatrial groove were the fewest in number (7.0%). The great majority of NADPH-diaphorase positive neurons were found in ganglia of the cardiac plexus. Each ganglion usually contained 1-7 NADPH-diaphorase positive neurons (Fig. 2H). Single NADPH-diaphorase positive neurons were occasionally found (Fig. 2B, G). There was a small variation in the intensity of the staining of neurons. Three types of neurons could be distinguished on the basis of their shapes: cells with one labeled process (unipolar cells) (Fig. 2A, B, E, F, H); cells with two short processes (bipolar cells) (Fig. 2A, C, G); and cells with three short or long processes (multipolar cells) (Fig. 2C, D). The contours of unipolar cell bodies were of an irregular round or oval shape. The long processes of these neurons extended from the hillock over a few microscopic fields. In the great majority of them, the process was traced only within the ganglion. The contours of the bipolar neuron bodies were oval in shape. The processes of these neurons were traced only within the ganglion, with no axon distinguishable. The contours of the multipolar neurons, as well as of the bipolar ones, were oval in shape. One multipolar neuron had three short processes, and one had two short processes and one long process. As to the location of the three types of neurons, cranially to the pulmonary veins there were 32 unipolar neurons, 5 bipolar neurons, and 1 multipolar neuron (n = 38). Caudally to the pulmonary veins there were 11 unipolar neurons, 3 bipolar neurons, and 1 multipolar neuron (n = 15). Finally, in the interatrial groove there were 2 unipolar neurons, 1 bipolar neuron, and 1 multipolar neuron (n = 4) (Table 1). The unipolar neurons were clearly predominant in the three regions. The size of each type of NADPH-diaphorase positive neuron was assessed (as cell profile area) in the whole-mount preparations used for cell counts. Sizes of the three types of neurons stained by the NADPH-diaphorase method ranged from about 90 µm² to about 220 µm². 
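The comparison described in the next paragraph, a one-way ANOVA of cell profile areas across the three morphological types, can be illustrated with a minimal Python sketch. The individual profile-area values below are hypothetical, chosen only to fall within the reported 90-220 µm² range; they are not the study's measurements, and only the form of the test mirrors the analysis described.

```python
# Minimal sketch of a one-way ANOVA on cell profile areas (µm²) for the three
# morphological types. The values are hypothetical placeholders within the
# reported 90-220 µm² range, not the study's data.
from scipy import stats

unipolar   = [105, 132, 148, 160, 175, 190, 210, 120, 155, 98]
bipolar    = [110, 140, 165, 180, 200, 125]
multipolar = [115, 150, 185]

f_stat, p_value = stats.f_oneway(unipolar, bipolar, multipolar)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 would be consistent with the reported finding of no
# significant size difference among the three neuron types.
```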
The mean size of each type of neuron was obtained and the data from the three groups were tested for significance using one-way ANOVA (Fig. 3). The results showed that there were no significant differences between the sizes of the three types of neurons. NADH diaphorase positive cardiac neurons Examination of the NADH-stained preparations revealed that the mean number of neurons in the atrial preparations was 530 ± 23 (n = 5). Comparing the number of NADPH-diaphorase positive neurons to the NADH positive cardiac neurons, we concluded that the percentage of NADPH-diaphorase positive to cardiac neurons in this study was 10%. Discussion Although the NADPH-positive intracardiac neurons of many animals have been investigated, the structural organization and topography of these neurons in mice have not been described. Therefore, in the present study we have attempted to describe exactly the location, the morphology, the number and the sizes of NADPH-diaphorase positive cardiac neurons in the atria of mice. At present, many studies are carried out using the NADPH-diaphorase reaction with nitro blue tetrazolium [7,8,[21][22][23][24][25][26]. Applying the NADPH-diaphorase technique and immunohistochemistry with NOS antiserum, the occurrence of NOS in nerve fibers and neurons of the rat and guinea-pig heart was examined [12]. According to these authors, NOS immunoreactivity and the NADPH-diaphorase technique showed identical results. However, some authors [27] encountered a significant difference in the number of NADPH-d positive and NOS immunoreactive neurons in the urinary bladder of the guinea pig. These authors suggest that the localization of one enzyme does not necessarily reflect the presence of the other. When comparing the results achieved by NOS immunocytochemistry and those by NADPH-d histochemistry, the distribution patterns of positive nerve cells and fibers were apparently similar. A combination of both methods revealed complete colocalization of NOS and NADPH-d in neuronal perikarya and fibres [9]. (Figure 3: Relation between mean, standard error of the mean and deviation values of the sizes (µm²) of the three types of NADPH-positive cardiac neurons (unipolar, bipolar and multipolar) of the mouse atria.) The use of whole-mount preparations presents the intracardiac plexus as a monolayer of nerve cells, and it has therefore been possible, in parallel with neuron location and counts, to make measurements of neuron somata of unsectioned nerve cells. However, the two techniques used in this investigation do not allow the visualization of terminals in the intrinsic cardiac ganglia around intrinsic neurons. Our data on the location and distribution of NADPH-positive cardiac neurons showed the presence of three major groups of ganglion cells, one cranially to the pulmonary veins, one caudally to the pulmonary veins and one dorsal to the interatrial groove. The right atrial free wall, atrial appendages and trunks of the great vessels were devoid of cardiac NADPH-diaphorase positive neurons. However, in the guinea-pig heart, the largest populations of NOS-immunoreactive neurons were distributed in the right atrium, septum and left atrium [13,28]. The distribution of NADPH-diaphorase positive neurons in specific ganglia that we observed in this study indicates that they selectively project to and thus control specific cardiac neurons. 
NADPH-diaphorase cardiac ganglia of the mouse contained three morphological types of neurons. Neurons in one group were unipolar, the second set of neurons were bipolar and the third set of neurons were multipolar. The unipolar neurons predominated (78.9% of the stained neurons), whereas the bipolar and multipolar were encountered less frequently (15.8% and 5.3% of the sampled neurons, respectively). The three types of NADPH-positive neurons were encountered in the three regions of the mouse atria. These results suggest that, at least for the mouse heart, the morphology of neurons is not conditioned by their location. However, the importance of location has been shown in a few physiological studies, in which it was demonstrated that the electrophysiological properties of the intracardiac neurons are related to their different locations in the canine heart [29]. The types of neurons observed in the present work are similar to those lying in the nerve plexus of the cardiac hilum of the guinea pig [20,30]. The relatively great abundance of unipolar neurons we have encountered is also similar to the results obtained for the neurons of the nerve plexus of the cardiac hilum of rats and guinea pigs [20]. The existence of cardiac neurons with no dendritic arborization in the guinea pig has been confirmed by labeling them intracellularly with neurobiotin [31]. It is possible that some of the neurons observed in the present work provide postganglionic innervation to the heart, or possibly some may function as interneurons to ganglion cells in other cardiac ganglia or in the same ganglion. In fact, some studies of the NADPH-diaphorase cardiac neurons indicated that a number of neurons in the rat heart and in the guinea-pig heart are NOS-immunoreactive, and labeled neuronal processes are associated with cardiac ganglia, nodal tissues, the coronary vasculature and the myocardium [25,32]. Therefore, it is very likely that NO acts as a neuronal messenger of cardiac nerves innervating vascular smooth muscle, intrinsic neurons, nodal tissues and the myocardium [12]. The results of several studies strongly support the hypothesis that NO liberated from neuronal sources has an important facilitatory action on the vagal control of the heart [33][34][35]. The results of another study showed that NO plays a stimulatory role in mediating vagal neurotransmission and vagal modulation of sympathetic effects, and an inhibitory role in mediating sympathetic neurotransmission [36]. In addition, prior pharmacological studies in guinea pigs suggest that NO facilitates the negative chronotropic effects of vagal stimulation [34,[37][38][39][40]. In a recent paper [15], it was investigated whether the enhanced cardiac vagal responsiveness elicited by exercise training is dependent on neuronal nitric oxide synthase. The results suggest that the changes in vagal responsiveness resulted from presynaptic facilitation of neurotransmission, and that NOS appears to be a key protein in generating the cardiac vagal gain of function elicited by exercise training. It was suggested that the mechanism of action of NO may be facilitation of acetylcholine release, either at the preganglionic-postganglionic or at the postganglionic-muscle synapse [34,40]. Finally, examining the cardiac effects induced by electrical stimulation of discrete loci within the ganglionated plexus located on the canine atria, it was concluded that atrial ganglia contain not only efferent parasympathetic and efferent sympathetic neurons but also afferent neurons [41]. 
These atrial neurons received afferent inputs from sensory neurites in both ventricles that were responsive to local mechanical stimuli and to the nitric oxide donor nitroprusside [42]. Therefore, it is possible that some of these NADPH-d neurons are pain-conducting afferent neurons. The number of NADPH-diaphorase positive neurons in the cardiac plexus of mice averages 57. This number is relatively small when compared with the number of NADH-diaphorase positive neurons in the atria (530). In other words, according to the present results, the NADPH-diaphorase positive neurons represent only 10% of the number of NADH positive cardiac neurons. However, a previous study [15] demonstrated the presence of 14.4% of NO-positive neurons in the mouse atria. On the other hand, the percentage of NADPH-diaphorase positive neurons to myenteric neurons in the rat stomach was 20% [43]. The difference suggests the possibility that the local physiological requirements of the organs may demand different amounts of NO activity [43]. In terms of neuron size, the NADPH-diaphorase positive cardiac neurons do not form a uniform population. They range in the area of their profile from about 90 µm² to about 220 µm². The morphometric analysis of the NADPH-positive neuron types of the cardiac plexus did not show statistically significant differences in somata parameters. A lack of morphometric differences between distinct neuron types is known from various studies of the cardiac plexus [20,44]. Such a pattern in the morphometry of neurons confirms that morphometry of the somata mostly does not serve as a diagnostic feature in neuron classifications [20]. Neuronal sizes vary less in the cardiac than in the enteric ganglia [45] and resemble in this respect the ganglia of the mouse trachea [46]. It is not clear at present whether the variation in size within a population of neurons reflects differences in the extent of their territory of innervation [44]. That the volume of the tissue innervated influences the size of the intrinsic neurons is confirmed by experiments on intestinal hypertrophy. A massive enlargement of the musculature on the oral side of a partial obstruction is accompanied by a marked neuronal hypertrophy, with the average area of the cell profile increasing by about 100% [45]. Conclusion In summary, the present results show that the cardiac ganglia present in the atria of mice are composed of about 10% NADPH-d positive neurons. Morphologically, these neurons are of three types: unipolar, bipolar and multipolar. The majority of them have their processes confined within the ganglion and may possibly be intraganglionically active neurons or interneurons. The existence of different types of neurons in the cardiac plexus of mice is an indication that they possibly have different functions. These neurons might mediate complex integration and cardiac reflexes. Animals Ten young, male, isogenic mice (ASn), weighing 15-18 g, were used. Mice were housed singly in cages and maintained under standard conditions at 21°C, with a 12 h light-dark cycle and water and food ad libitum. Each animal was killed with an overdose of ether. The chest was opened, the heart removed and the atria were separated from the ventricles and dissected in order to remove excess fat and connective tissue from their external surface. The study was conducted according to the current legislation on animal experiments of the Biomedical Science Institute of the University of São Paulo. 
NADPH-Diaphorase histochemistry Immediately after the atria were removed from the body and dissected, they were washed in saline. Five atria were then fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS), pH 7.2, at 4°C for 30 minutes. After two 10-minute rinses in PBS at room temperature, the atria were incubated in the following solution for the demonstration of NADPH-d for 60 minutes at 37°C, in the dark and with continuous agitation: β-NADPH, reduced form (Sigma), 0.1 mg/ml, and nitro blue tetrazolium (Sigma), 0.5 mg/ml, in PBS containing 0.2% Triton X-100 [20]. The interatrial septum was removed and the whole-mount preparations of the atria were mounted in glycerol for microscopic examination. The formazan deposits filling the cell bodies and processes, and their absence from the cell nuclei, identified the NADPH-diaphorase positive neurons. Using this method we were able to distinguish between the different types of neurons based on their morphology. For the neurotopographical investigations, the neurons were visualized on the stained whole-mount preparations and were surveyed microscopically at 50× magnification. Morphometry The numbers of NADH-diaphorase positive neurons and of each morphological type of NADPH-diaphorase positive neurons were obtained directly at the microscope by using a 40× objective. Neuron somata measurements were made on all nerve cell types present in each laminar preparation. The microscope image was projected onto a monitor and neuron somata were outlined and measured on a digitizing tablet interfaced with a Kontron-300 image analyzer (Kontron Elektronik, Germany). The statistical significance of the difference between the means was calculated through a one-way ANOVA. The significance level was set at p < 0.05. NADH Diaphorase histochemistry To estimate the total number of cardiac neurons, 5 whole-mount preparations of the atria were stained by a histochemical technique [22]. After an intraperitoneal injection of pentobarbital sodium (0.1 ml/100 g body wt) to anesthetize the animals, these were perfused with Krebs solution. Once the thoracic cavity was opened, the heart-lung blocks were isolated and, subsequently, the atria were separated from the ventricles and, by careful dissection under a stereoscopic microscope, the subepicardial fatty tissue was removed. The atria were left in Krebs solution for 30 minutes and afterwards placed in Triton X-100 solution (0.3% in Krebs solution) for 10 minutes. They were gently agitated and then washed four times for 3 minutes in Krebs solution. Then the pieces were kept in the solution for incubation (
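As a minimal, self-contained illustration of the statistical comparison described in the Morphometry section (a one-way ANOVA of soma profile areas across the three neuron types, with significance set at p < 0.05), the following Python sketch uses SciPy; the area values are illustrative placeholders within the reported 90-220 µm² range, not the measured data.

from scipy.stats import f_oneway

# Hypothetical soma profile areas (µm²) for the three neuron types; placeholder values only.
unipolar = [112, 145, 170, 98, 205, 131, 158, 122, 187, 140]
bipolar = [120, 150, 165, 104, 198, 135]
multipolar = [110, 160, 190]

f_stat, p_value = f_oneway(unipolar, bipolar, multipolar)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

# As in the paper, a difference is reported only when p < 0.05; a non-significant
# result is consistent with the reported lack of size differences between types.
if p_value < 0.05:
    print("Sizes differ significantly between neuron types.")
else:
    print("No significant size difference between neuron types.")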
2014-10-01T00:00:00.000Z
2006-02-02T00:00:00.000
{ "year": 2006, "sha1": "72bdc39daaecb82db513be192a43a9b4b13c2342", "oa_license": "CCBY", "oa_url": "https://bmcneurosci.biomedcentral.com/track/pdf/10.1186/1471-2202-7-10", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "72bdc39daaecb82db513be192a43a9b4b13c2342", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
89616861
pes2o/s2orc
v3-fos-license
Prognostic Value of CT Radiomic Features in Resectable Pancreatic Ductal Adenocarcinoma In this work, we assess the reproducibility and prognostic value of CT-derived radiomic features for resectable pancreatic ductal adenocarcinoma (PDAC). Two radiologists contoured tumour regions on the pre-operative CT of two cohorts, from two institutions, undergoing curative-intent surgical resection for PDAC. The first (n = 30) and second cohorts (n = 68) were used for training and validation of the proposed prognostic model for overall survival (OS), respectively. Radiomic features were extracted using the PyRadiomics library and those with weak inter-reader reproducibility were excluded. Through Cox regression models, significant features were identified in the training cohort and retested in the validation cohort. Significant features were then fused via Cox regression to build a single radiomic signature in the training cohort, which was validated across readers in the validation cohort. Two radiomic features, derived from the Sum Entropy and Cluster Tendency features, were both robust to inter-reader variability and prognostic of OS across cohorts and readers. The radiomic signature showed prognostic value for OS in the validation cohort, with hazard ratios of 1.56 (P = 0.005) and 1.35 (P = 0.022) for the first and second reader, respectively. CT-based radiomic features were shown to be prognostic in patients with resectable PDAC. These features may help stratify patients for neoadjuvant or alternative therapies. Radiomic feature classes also include morphological features that capture ROI shape characteristics 14 and edge-detection features that highlight the boundaries of objects in the ROI 15. In PDAC, computed tomography (CT) is the main diagnostic tool for assessment of the local extent of disease and for surgical planning 16. As a standard-of-care imaging modality, CT images can be used to extract radiomic features with no extra image-acquisition cost to the healthcare system, thus providing comprehensive information on the phenotypic and textural structure of the tumour. Neoadjuvant therapy has been shown to improve the survival of patients with resectable PDAC 17. If radiomic features can identify patients with more aggressive disease, they might help select the patients most in need of neoadjuvant treatment. Although previous studies have shown the prognostic value of CT radiomic features for different cancer sites, including non-small cell lung cancer 7,18, renal clear cell carcinoma 19,20, and metastatic colorectal cancer 21, there is a scarcity of multicentre radiomics research on PDAC. Our preliminary results were published on a single small cohort (n = 30), exploring a limited number of radiomic features for prognostication (5 second-order statistical features) and showing that two features were predictors of overall survival (OS) for PDAC patients undergoing curative-intent surgical resection 22. Another study investigated radiation-induced changes in CT radiomic features (8 first-order features) in a cohort of 20 patients with pancreatic head cancer over a period of 5 weeks, and an association between changes in radiomic features and pathologic response was reported 9. 
Recently, a larger cohort of 161 patients with resected PDAC was analyzed to study the prognostic accuracy of radiomic features combined with preoperative serum carbohydrate antigen 19-9 (CA19-9) levels and a pathology score (the Brennan score), and it was reported that adding the pathology score to the radiomic features and serum cancer antigen improves the prognostic power of the model 23. In this retrospective study, we aimed to address the shortcomings of the previous radiomic studies of PDAC by including an assessment of reproducibility across different readers, using data from different institutions and CT scanners, and using separate training and validation sets. To achieve this goal and further validate CT radiomic parameters as prognostic biomarkers in PDAC patients, we investigated these parameters in two separate pre-operative cohorts from two institutions, contoured by two radiologists with different levels of expertise, through the analysis of a comprehensive set of radiomic features with a standard analytic library (PyRadiomics version 2.0.1) 24. The purpose of this study was to assess the reproducibility and prognostic value of CT-derived radiomic features for resectable PDAC. Materials and Methods Patients. This retrospective study was approved by the Research Ethics Board of Sunnybrook Health Sciences Centre and University Health Network and all methods were carried out in accordance with relevant guidelines and regulations. Two cohorts from two separate institutions, consisting of 30 and 68 patients undergoing curative-intent surgical resection for PDAC from 2007-2012 and 2008-2013, respectively, who had pre-operative contrast-enhanced CT available for analysis and were part of ongoing studies in which survival data were being collected, were included. Patients were resectable and had not received neo-adjuvant treatment. To minimize the effect of post-operative complications on outcome analyses, patients who died within 90 days after surgery were excluded. Institutional review board approval was obtained for this study from both institutions and the need for written informed patient consent was waived. The demographic information for both cohorts is shown in Table 1. We previously used the first cohort (n = 30) in a pilot study in which only a few in-house-developed radiomic features extracted from single-reader contours were investigated for prognostic value for OS in PDAC patients 22. Image acquisition. Patients underwent contrast-enhanced CT with a biphasic pancreas protocol consisting of arterial or pancreatic phase and portal venous phase acquisitions. As the CT scans were not all from the same institution, the exact contrast bolus volume, timing, and injection rate varied over the time period. In addition, there was inconsistent timing related to variation in CT protocols during the arterial/pancreatic phase imaging. This resulted in variable enhancement of the tumour and background pancreas. As a result, in many cases the tumour was inconsistently visualized on the arterial/pancreatic phase. The portal phase was consistent in timing and enhancement of background tissue across both entire cohorts. For these reasons, all pancreatic cancer boundaries were drawn on the portal venous phase of acquisition, as this phase was most consistent across all exams. CT images were reconstructed with 5 mm and 2 mm intervals for the first cohort and second cohort, respectively. Detector width was 40 mm and tube voltage was 120 kVp for the portal phase for both cohorts. 
Examinations were performed on 64-row multidetector helical CT scanners (first cohort: GE Medical Systems, LightSpeed VCT; second cohort: Toshiba, Aquilion). Image analysis. An in-house developed volume-of-region contouring tool (ProCanVAS) 25 was used by a radiologist with 18 years of experience as an oncologic imager (Reader 1) and a radiology research fellow (Reader 2), both blinded to patient outcome, to review the images and contour the ROIs on the slice with the largest visible cross section of the tumour on the portal venous phase. To differentiate the tumour from the pancreas background, the relative contrast difference was considered and, in cases where the tumour boundary was not clear, the tumour boundary was defined by the presence of pancreatic or common bile duct cut-off and review of the pancreatic phase images 22. Feature extraction was performed on the ROI using the PyRadiomics library (version 2.0.1) in Python 24. To remove fat and stents, the images were thresholded such that voxels with HU (Hounsfield unit) values below −10 or above 500 were excluded from the analysis. We used a subset of well-known PyRadiomics features, which include first-order features and second-order features extracted from the grey-level co-occurrence matrix (GLCM) using different filters (no filter, exponential, gradient, logarithm, square, square-root, and local binary pattern filters). In total, 410 radiomic features were extracted, covering the different classes of features listed in Table 2. Statistical analysis. We used the first cohort (n = 30) and second cohort (n = 68) as the training and validation datasets, respectively. The goal was to build a single radiomic signature which is both robust to inter-reader variability and prognostic of OS across the cohorts and readers. Constructing a single radiomic signature instead of using a set of features reduces the dimension of the feature space, mitigating the multiple-testing problem. In addition, a multi-feature signature accounts for inter-feature interactions, which usually leads to improved predictive modeling compared to individual features 10,26. First, the individual radiomic features of both Reader 1 and Reader 2 in the training set were evaluated for their inter-reader reproducibility by calculating the Intraclass Correlation Coefficient (ICC) for each pair of features. The ICC, which represents how strongly measurements in the same group resemble each other, is generally regarded as poor if less than 0.3 [27][28][29]. We excluded features with ICC < 0.3 to eliminate features that were unstable between the contours of the two readers. The more reproducible features were then tested for their ability to predict OS in the training cohort using a Cox proportional-hazards regression model 30, with the features treated as continuous variables. This was done using Reader 1 contours, as Reader 1 was the more experienced radiologist. Any features that were not prognostic of OS in the training cohort were eliminated. A feature selection method (LASSO 31) was applied to the significant features (P < 0.05) in the training cohort to select the ones with the best prognostic power. Each radiomic feature derived above in the training cohort was then tested in the validation cohort. This was done by retesting these selected features on both Reader 1 and Reader 2 contours in the validation cohort using a univariate Cox regression model and the Wald test. Given that this was the validation phase, false discovery rate (FDR) control was applied to control the multiple-testing problem 32. 
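A minimal sketch of the feature-extraction step described above is given below, using the PyRadiomics library with the resegmentation range reported in the text (voxels below −10 HU or above 500 HU excluded). The file names, bin width and the exact set of enabled image types are assumptions made for illustration, not the settings used in the study.

import SimpleITK as sitk
from radiomics import featureextractor

settings = {
    "resegmentRange": [-10, 500],  # exclude fat/stent voxels outside -10..500 HU
    "binWidth": 25,                # assumed grey-level discretisation; not stated in the text
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")
extractor.enableImageTypeByName("Original")
extractor.enableImageTypeByName("SquareRoot")  # one of the filtered image types used in the study

image = sitk.ReadImage("ct_portal_venous.nii.gz")  # hypothetical file names
mask = sitk.ReadImage("tumour_roi.nii.gz")

features = extractor.execute(image, mask)
print(features.get("original_glcm_SumEntropy"),
      features.get("squareroot_glcm_ClusterTendency"))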
A feature was considered validated if its adjusted P-value was <0.05. As a final step, the remaining significant features in the training cohort were run through a Cox regression model to generate a single radiomic signature. To validate the reproducibility of the constructed radiomic signature, the ICC was calculated in both the training and validation cohorts. To validate the prognostic value of the constructed radiomic signature, Cox regression and the Wald test were run in the validation cohort for both Reader 1 and Reader 2 contours. Clinical factors that may be prognostic of OS were also used in univariate Cox regression models. These factors include age, sex, tumour size, grade, N stage, margin, and CA 19-9 levels. Results Out of 410 initial radiomic features generated from the PyRadiomics library, 133 features were removed because they had zero or constant values for all patients. Out of the 277 remaining features, 251 features had ICC > 0.3 between the contours of the two readers in the training cohort. When the Cox regression model followed by feature selection was applied to these 251 reproducible features in the training cohort with Reader 1 contours, 3 features were significant (P < 0.05). These 3 features were then assessed using Cox regression in the validation cohort with both readers' contours, and 2 remained significant after applying FDR multiple-testing control (P < 0.05). Table 3 summarizes the hazard ratios, P-values, and ICC values for these significant radiomic features (and the radiomic signature) for prognostication of OS in the training and validation cohorts. It also lists the median values of the significant features (and radiomic signature) in the training cohort, which were used for dichotomization of the validation cohort. Figures 1 and 2 show the Kaplan-Meier plots of cumulative OS for these 2 radiomic features in the validation cohort for Reader 1 and Reader 2, respectively. As can be seen from Table 3, the 2 significant features are second-order features extracted from the GLCM: one feature is Sum Entropy calculated on the original image and the other is Cluster Tendency calculated on the filtered image (square root). Sum Entropy is a texture feature that measures the randomness in the image, as shown in Equation 1: SumEntropy = −Σ_k p_{x+y}(k) log2(p_{x+y}(k)), where p_{x+y}(k) = Σ_i Σ_j P(i,j) for i + j = k, and P(i,j) is the probability of gray-level i being adjacent to gray-level j in the image. Cluster Tendency is a measure of groupings of pixels with similar gray-level values (Equation 2): ClusterTendency = Σ_i Σ_j (i + j − μ_x − μ_y)² P(i,j), where μ_x and μ_y are the mean gray-level intensities of the marginal row and column probabilities p_x and p_y. The radiomic signature derived from these 2 radiomic features combined is shown in Equation 3, where F_1 is original_glcm_SumEntropy and F_2 is squareroot_glcm_ClusterTendency. The hazard ratios in the validation cohort for the radiomic signature were 1.56 (Confidence Interval (CI): 1.15-2.13) and 1.35 (CI: 1.04-1.75) for Reader 1 and Reader 2, respectively. The P-values in the validation cohort for the radiomic signature were 0.005 and 0.022 for Reader 1 and Reader 2, respectively, with an ICC value of 0.63 (Table 3). Figure 3 shows the Kaplan-Meier plots for OS using the radiomic signature in the validation cohort for the two readers. Figure 4 shows two typical examples from the validation cohort, with the tumour contoured by both Reader 1 and Reader 2, together with the corresponding survival times and radiomic signature values. Of the clinical factors, only N stage was significant in the validation cohort, with a P-value of 0.03 and a hazard ratio of 2.27 (CI: 1.06-4.86). 
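To make the two GLCM texture definitions above (Equations 1 and 2) concrete, the short NumPy sketch below evaluates Sum Entropy and Cluster Tendency for a toy normalised co-occurrence matrix; it is an illustrative re-implementation of the standard formulas, not the PyRadiomics code used in the study.

import numpy as np

# Toy 4-level normalised GLCM: P[i, j] is the probability of grey level i
# being adjacent to grey level j (values are arbitrary and sum to 1).
P = np.array([[0.10, 0.05, 0.02, 0.01],
              [0.05, 0.15, 0.04, 0.02],
              [0.02, 0.04, 0.20, 0.05],
              [0.01, 0.02, 0.05, 0.17]])
P = P / P.sum()
n = P.shape[0]
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing="ij")

# Equation 1: Sum Entropy = -sum_k p_{x+y}(k) log2 p_{x+y}(k), with k = i + j
p_sum = np.array([P[(i + j) == k].sum() for k in range(2, 2 * n + 1)])
sum_entropy = -np.sum(p_sum * np.log2(p_sum + np.finfo(float).eps))

# Equation 2: Cluster Tendency = sum_{i,j} (i + j - mu_x - mu_y)^2 P(i, j)
mu_x = np.sum(i * P)
mu_y = np.sum(j * P)
cluster_tendency = np.sum(((i + j - mu_x - mu_y) ** 2) * P)

print(f"Sum Entropy = {sum_entropy:.3f}, Cluster Tendency = {cluster_tendency:.3f}")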
To investigate whether the radiomic signature adds prognostic value to the model built with N stage, we tested a bivariate Cox regression model using N stage and the radiomic signature, and it was found that the bivariate model (N stage plus radiomic signature) is significantly different from, and has improved performance over, the univariate model (N stage alone) (likelihood-ratio test P-value: 0.005). (Table 3: List of hazard ratios, P-values, median values, and ICCs of significant radiomic features prognostic of OS in the training and validation cohorts. Abbreviations: CI, confidence interval; ICC, intraclass correlation; OS, overall survival; Original_glcm_SumEntropy, sum entropy feature extracted from the original image via the grey-level co-occurrence matrix; Squareroot_glcm_ClusterTendency, cluster tendency feature extracted from the filtered image (square root) via the grey-level co-occurrence matrix.) This indicates that adding the radiomic signature to the clinical factor model (N stage) further improves the prognostic performance. CA19-9 levels were available for a subset of patients: 25 and 39 patients for the training and validation cohorts, respectively. In this subset, the CA19-9 factor was only significant in the validation cohort, with a P-value of 0.047 and a hazard ratio of 1.37 (CI: 1.00-1.88). Adding the radiomic signature to the CA19-9 model significantly improved the prognostic performance for OS (P-value: 0.01), confirming that the radiomic signature adds to the prognostic power of a model with CA19-9, which is an established clinical biomarker. Table 4 summarizes the P-values for all clinical factors for prognostication of OS in the training and validation cohorts. Discussion PDAC has a very low survival rate 33. Better treatment options, fundamental understanding of the disease and earlier detection methods are needed. In this exploratory study, we evaluated the potential of radiomic features in PDAC on CT as part of early validation. We have demonstrated the potential of a radiomic signature as a prognostic biomarker in PDAC that can be used across different CT scanners and readers. Although radiomic features have been found to be prognostic of patient outcome in different cancer sites such as lung 7,18, kidney 19,20, and colorectal cancer 21, there is limited work on PDAC 9,22,23. These studies are all single-institution studies exploring a limited number of radiomic features. In addition, only one reader's contours were used for the analysis, and standard radiomic libraries were not used in most of these studies. By using an open-source code library (PyRadiomics 24), there is an opportunity for other centres to validate the findings presented in this study. If further validated, this signature could be used to help select patients who may benefit from neoadjuvant treatment. Radiomics studies for cancer prognosis are usually limited by the "Large P, small N" dataset problem 10, where the number of features is far greater than the number of patients in the dataset. This challenge, combined with the reproducibility issues inherent in different readers annotating the same image differently and the inconsistency of images acquired by different scanners, which might lead to unreliable features, casts doubt on the reproducibility of radiomic features as prognostic biomarkers for cancer. 
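Returning to the nested-model comparison reported in the Results (N stage alone versus N stage plus the radiomic signature), the outline below shows how such a likelihood-ratio test on nested Cox models could be run with the lifelines library; the data frame holds hypothetical stand-in values, not the study data.

import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

# Hypothetical per-patient table: survival time (months), event flag,
# N stage (0/1) and the continuous radiomic signature value.
df = pd.DataFrame({
    "os_months": [14, 22, 9, 30, 18, 11, 25, 7, 16, 28],
    "event":     [1, 1, 1, 0, 1, 1, 0, 1, 1, 0],
    "n_stage":   [1, 0, 1, 0, 1, 1, 0, 1, 0, 0],
    "signature": [2.1, 1.4, 2.6, 1.1, 2.0, 2.4, 1.3, 2.8, 1.7, 1.2],
})

uni = CoxPHFitter().fit(df[["os_months", "event", "n_stage"]],
                        duration_col="os_months", event_col="event")
biv = CoxPHFitter().fit(df[["os_months", "event", "n_stage", "signature"]],
                        duration_col="os_months", event_col="event")

# Likelihood-ratio test for the added signature term (one extra parameter).
lr_stat = 2 * (biv.log_likelihood_ - uni.log_likelihood_)
p_value = chi2.sf(lr_stat, df=1)
print(f"LRT statistic = {lr_stat:.2f}, p = {p_value:.3f}")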
In this study, the main goal was to address these challenges by generating a single radiomic signature using the contours of two readers on two cohorts from two institutions, where the first cohort was used for radiomic signature discovery and the second cohort was used for validation. This allowed us to separate the training and testing data and thus to perform a proper validation of the generated radiomic signature. Excluding features with low agreement between the readers ensured the reproducibility of the final radiomic signature. The radiomic signature was generated by combining the features that were significant in the training cohort and remained significant in the validation cohort after multiple-testing correction. It is encouraging to observe that the radiomic signature that was generated in the training cohort remained significant in the validation cohort for both readers. This confirms the reproducibility of radiomic features as cancer biomarkers not only across different scanners/institutions, which has also been shown in other studies for different cancer sites such as lung 34, but also across different readers. It was interesting to observe that a significant number of radiomic features (251 out of 277) were robust with respect to inter-reader variability in ROI contouring. The fact that, out of these 251 robust features (with moderate and high ICC), only 3 were found to be prognostic of OS in the training cohort may be due to the small sample size (n = 30). A larger sample size will increase the probability of finding more prognostic features in the training phase. As an indicator of tumour heterogeneity, entropy-related radiomic features (e.g., entropy 9, joint entropy 22, and sum entropy 35) have been shown to be prognostic of OS for different cancer sites. Entropy measures the degree of randomness or non-uniformity in the image and it has been hypothesized that it can act as a surrogate for tumour heterogeneity. The comparison of pairs of synchronous metastases from the same primary tumour has shown that entropy in each pair is highly correlated, suggesting that it is capable of representing tumour biological characteristics 36. It is promising to note that one of the radiomic features validated in this work is also an entropy-related feature (Sum Entropy), which strengthens the hypothesis that this specific radiomic feature may capture the underlying tumour phenotype. Although tumour size measured as the maximal diameter of the mass on gross pathologic examination has been shown to be a histopathologic feature for prognosis of OS 5, the corresponding radiomic feature (ROI diameter or ROI area) was not significant. This may be in part related to the poor definition of cancer margins and the high interobserver variability of size measurement on CT of PDAC. This indicates that features such as entropy, which capture tumour characteristics beyond size, may be needed for prognostication of PDAC. Of the other clinical factors available for both cohorts (age, sex, tumour grade, N stage, and margin), only N stage, which is a postoperative factor, was prognostic of OS in the validation cohort. It is important to note that the radiomic signature improved the prognostic power when added to the N stage model. This indicates that the radiomic signature harbours prognostic information not necessarily captured by the N stage factor. 
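Returning to the inter-reader reproducibility filter discussed above, a simple one-way random-effects ICC computed per feature from the two readers' values can serve as the exclusion criterion. The NumPy sketch below is a minimal implementation of ICC(1,1); the simulated feature values are hypothetical placeholders, not the study data.

import numpy as np

def icc_oneway(ratings):
    # ICC(1,1) for an (n_subjects x n_raters) matrix of ratings.
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)          # between-subject mean square
    msw = np.sum((ratings - subject_means[:, None]) ** 2) / (n * (k - 1))  # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
true_values = rng.normal(size=30)                       # 30 hypothetical patients
reader1 = true_values + rng.normal(scale=0.2, size=30)  # small reader-specific noise
reader2 = true_values + rng.normal(scale=0.2, size=30)

icc = icc_oneway(np.column_stack([reader1, reader2]))
keep_feature = icc >= 0.3  # features with ICC < 0.3 would be excluded, as in the study
print(f"ICC = {icc:.2f}, keep feature: {keep_feature}")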
This, combined with the fact that the radiomic signature is a preoperative biomarker, reconfirms its value as a potentially reliable prognostic biomarker for PDAC. CA19-9 levels, which have been shown to be associated with the OS of PDAC 37, were available for a subset of patients in both cohorts and were significant only in the validation cohort. Similar to N stage, the radiomic signature improved the prognostic power when combined with CA19-9. Limitations of this work were the relatively small sample size and the restriction of the outcome to overall survival. We hope to extend this work to larger cohorts and multicentre studies with additional clinical outcomes and genomics data soon. Moreover, CA19-9 and carcinoembryonic antigen (CEA), which have been shown to be associated with the OS of PDAC 37, were not available for all patients. In future studies, the added prognostic value of the radiomic signature relative to these preoperative biomarkers will be investigated using the full cohorts. Nevertheless, by the time these biomarkers are obtained, which is after diagnosis, radiomic features are already readily available at no extra cost. Thus, a reliable radiomic signature with prognostic power is of significant value independent of other preoperative biomarkers. Once validated, these biomarkers may have a role in the selection of patients who should undergo neoadjuvant treatment with chemotherapy and/or radiation therapy prior to surgery. This also provides a potential signature to be tested in different PDAC populations, such as unresectable patients. Further work on histologic correlates such as tumour stroma, which is a potential druggable target in this disease, would also be of interest. In this study, we have demonstrated a set of imaging biomarkers and a signature that are both reproducible across different readers and CT scanners and prognostic in preoperative patients. These parameters provide a reasonable starting set of quantitative measures for prospective validation in future trials in surgical candidates with PDAC. Conclusion Conventional staging CT-based radiomic features related to Sum Entropy and Cluster Tendency show promise for prognostication of OS for PDAC patients undergoing surgical resection across different institutions. Ethics approval and consent to participate. The Sunnybrook Health Sciences Centre and University Health Network Research Ethics Boards approved these retrospective single-institution studies and waived the requirement for informed consent. Data Availability The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request, pending the approval of the institution(s) and trial/study investigators who contributed to the dataset. (Table 4: List of P-values and hazard ratios for clinical factors for prognosis of OS in the training and validation cohorts. Abbreviations: CI, confidence interval; OS, overall survival.)
2019-04-02T13:03:37.178Z
2019-04-01T00:00:00.000
{ "year": 2019, "sha1": "033c70c2aeb35cd474755febda609b223c86f179", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-41728-7.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "19d373632ac39621a9ef82aba95cd14f28a3712c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
126476209
pes2o/s2orc
v3-fos-license
MALAYSIAN OUTER SPACE LAW: THE WASATIYYAH APPROACH Wasatiyyah is not a mathematical moderation. It is a form of just evaluation that applies the Islamic conception of justice, which asserts putting things in their proper places. Therefore, it should be defined and interpreted according to the circumstances surrounding each issue. With regard to the application of wasatiyyah in other fields like outer space law, it requires analysis of the legal rules for outer space activities based on the relevant elements of maqasid. This article highlights some legal rules proposed for Malaysia. It analyses each circumstance in determining whether the wasatiyyah approach is essential. The article applies the library research method, including analysing the works of authoritative writers, as well as the United Nations international conventions on outer space. The findings demonstrate that the wasatiyyah approach is appropriate and proper to be applied in formulating and drafting Malaysian outer space law. It has also been shown that it is capable of generating an excellent outcome for Malaysian space legislation. Introduction Outer space law refers to laws that govern outer space activities. The law could be either global or domestic in nature.i Since the formation of legislation has always been a matter of domestic jurisdiction, a state government has absolute power to shape and mould its outer space legislation. As a matter of fact, many countries have yet to introduce space legislation despite their involvement in various space activities. This is also the case for Malaysia. Ahmad Sabirin (1999; 2001), Martin et al. (2001), Norul et al. (2007), Mustafa (2011), Azriel (2012), Amar (2015), and Goh (2018) acknowledged that the space activities in which Malaysia is involved include, to name a few: manufacturing and launching satellites (the latest being CubeSats), sending an astronaut to the International Space Station, scientific research conducted in outer space, a suborbital space plane and a commercial spaceport project. Space applications include remote sensing, meteorology, navigation, and telecommunication and broadcasting. Therefore, in this paper special reference is made to Malaysia in respect of constructing outer space legal rules for her space legislation. As indicated by Che Zuhaida (2014), and Mustafa and Azmi (2015), there is much evidence of the development of the space industry in Malaysia, which indicates the need for Malaysian outer space legislation. It is unanimously agreed that, in the process of developing and drafting a law, one aims to produce the best outcome of legislation. Therefore, this paper discusses whether the wasatiyyah approach is necessary to be applied in ensuring such an outcome, mainly in respect of Malaysian outer space legislation. In view of this, an analysis of some selected proposed outer space legal rules based on the relevant elements of maqasid is therefore required. This paper begins with a summary of the wasatiyyah approach as the selected conception. The discussion then focuses on the explanation of some outer space rules. Next, it explores and discusses the issue of whether the wasatiyyah approach is necessary for the construction of outer space legislation. Wasatiyyah Approach: A Selected Conception Islam is a religion of wasatiyyah. It obliges its believers to practise wasatiyyah in all aspects of life (Muhammad Mustaqim, Paimah and Hariza 2012). 
"Thus, have We made of you an Ummah justly balanced, that ye might be witnesses over the nations, and the Messenger a witness over yourselves" (Al-Baqarah, 2: 143). The word 'ummatan wasatan' has been translated as, to name a few, ummah justly balanced (Yusuf Ali translation), and a community of the middle way (Asad translation).Two principal ingredients of ummatan wasatan are moderation and justice.Thus, to be acknowledged as ummatan wasatan or a justly balanced ummah, one needs to be moderate and simultaneously applying justice.Hence, the wasatiyyah concept is an approach of moderation together with the conception of justice and fairness as prescribed by the Islamic teaching.It is a concept which demonstrates the Islamic system of syariah as better than the other system.Furthermore, it meets the needs and necessities of human being in achieving the prosperous life in this world and hereafter (Ismail Ibrahim 2012). It is important to highlight that wasatiyah is not a mathematical moderation.However, it is a form of just and fair evaluation that applies the Islamic conception of justice which asserts putting things in its proper or right places.Al-Attas (2001) stresses that the notion of 'proper place' implies the existence of 'relation' obtaining between things which altogether describe a system, and it is such relation or network of relations that determines the recognition of the thing's proper place within the system.Therefore, based on this assertion, it is vital to note wasatiyah should be defined and interpreted according to each circumstance surrounding the issues. To ascertain a rightly wasatiyyah approach, one should evaluate each subject matter according to its respective situation.This must be done with a fair and just assessment according to the Islamic principle of justice.Al-Attas (2001) further affirms that the conception of justice is actually referring to putting thing in its proper place or where it truly belongs to according to Islam.In other words, wasatiyyah does not simply refer to applying the approach of moderation in a reasonable quantified manner without having any clear indication, guideline or benchmark as underlined by Islam.However, such wasatiyyah can indeed be achieved when it has been applied with the Islamic conception of justice.In that case, the outcome of the matter will then be justly balanced. In this section, the author does not intend to discuss the wasatiyyah approach as a whole.Instead, the major purpose of the presentation is to highlight the concept which is related to the subject matter of the paper that is the selected proposed outer space legal rules. In the context of outer space legal rules, the author will look into each of outer space rules that were selected for the discussion.Such outer space rules are among the rules proposed for the construction of Malaysian outer space legislation.In achieving the excellent outcome of legislation, the author will adopt the wasatiyyah approach.Such method is applied via the moderation approach that comprises of just evaluation which relates to the Islamic conception of justice.In other words, an analysis will be performed in respect of those rules in regards to moderation approach whereby the rules of outer space law will be justly evaluated based on each circumstance.Such just evaluation will indeed be accomplished after examining the rules from the perspective of whether they are in conformity with the Islamic conception of justice (i.e. 
based on the concept of putting things in their proper place, that is, whether the rules are drafted or constructed according to their appropriateness). And, in clarifying whether the rules are in compliance with such a conception of justice (or have been constructed 'at their proper place'), exploring such rules based on the elements of maqasid is therefore crucial. Elements of maqasid are used to ascertain whether the proposed outer space rules are constructed at their proper place, and this will determine whether they are within the spirit of Islamic justice. This analysis is essential in order to acquire excellent legal rules for outer space activities with a just evaluation, or moderation, according to the concept of wasatiyyah. From this point of view, the paper will then focus on the relevant foundational goals (maqasid syariah) such as: preservation of life (nafs), preservation of property or wealth (mal), and other relevant elements (if necessary). The relevant elements of maqasid will be utilised in determining whether the wasatiyyah approach is necessary for the construction of outer space legislation. In circumstances where the applied approach is capable of producing and generating an excellent outcome of legislation, this approach shall then be utilised in developing outer space rules. In realising this matter, the maqasid elements will be used to examine the relevant outer space rules, and this should be performed according to the respective circumstances of each rule. This is important because every rule has different and numerous surrounding circumstances, and each circumstance should be evaluated justly. Relying on this, it is asserted that the wasatiyyah approach is not a mathematical moderation approach; rather, it is a moderation approach by way of just and fair evaluation relying on the justice concept, which depends on each particular circumstance. The Three Selected Outer Space Legal Rules When a state is involved in outer space activities, the state should observe the United Nations space obligations as prescribed in its space conventions.ii The legal obligations arise when the state performs the activities and particularly when the state becomes a party to the treaties. A state which plans to conduct, conducts, or has conducted space activities needs a law or legislation to govern and monitor the activities. This paper proposes some selected legal rules of the outer space conventions which are significant for a state that has no specific outer space legislation, like Malaysia, to consider in the construction of her outer space legislation. In this section, the author will firstly present and discuss the ideas and content of the proposed laws, as well as their relationship with the United Nations outer space conventions. These rules are proposed for the construction of Malaysian space legislation. Then, in the next section, the proposed rules will be examined and discussed with respect to the wasatiyyah approach as presented earlier. Authorization Rule The first crucial rule proposed is from the perspective of authorization. In conducting any space activities, obtaining and giving authorization is the most critical matter that should be given priority. In regard to controlling the activities of outer space, authorization is the most proper rule to propose. In other words, no authorization means no activity allowed. 
International space law requires a state to impose authorization on the outer space activities of non-governmental entities (Article VI, Outer Space Treaty 1967). Therefore, any outer space activities conducted by the private sector, especially, shall require authorization from the state government. This is crucial from the viewpoint of the state's international responsibility and liability (Articles VI and VII, Outer Space Treaty 1967; Liability Convention 1972), the liability of the non-governmental entity (Article VI, Outer Space Treaty 1967; Liability Convention 1972), as well as the issue of indemnification. In other words, a state must be internationally responsible for its national activities in outer space. Not only that, the state can also be liable for damage or loss caused by objects that are launched by the state or whose launch is procured by the state (Article VI, Outer Space Treaty 1967; Liability Convention 1972). From this perspective, a state such as Malaysia, as a contributor to the activities, must be aware of the effects and risks of such international rules. Therefore, the government should consider an indemnification strategy to address this issue, although that issue is not a matter of discussion in this paper. Relying on the above, it is imperative to first propose a mode of authorization for Malaysia to tackle the above issues. With authorization rules, the government can control outer space activities, particularly those of the non-governmental sector. In brief, there are four modes proposed for the Malaysian legislation to cater for or authorise such activities (Che Zuhaida 2014): (1) licences; (2) an overseas launch or return certificate; (3) an experimental permit; (4) an exemption certificate. For the licence, Che Zuhaida (2014) proposes two types: (a) a space site and facility licence; and (b) a space launch and re-entry licence. The former is required for the operation of a site and facility for space activities in Malaysia. The latter is required for the operation of a launch or re-entry of a space object from or to Malaysia. The overseas launch or return certificate is proposed for the operation of a launch or re-entry of an object conducted outside Malaysia by a Malaysian national, firm or body incorporated by or under Malaysian laws. The experimental permit is required for the operation of space activities solely connected to research or to testing a new design, concept, equipment or other experimental purposes. Lastly, the exemption certificate is proposed for an operation involving the acceptance of a foreign authorization or where an appropriate arrangement has been made between Malaysia and other related states, as indicated in Che Zuhaida (2014). 
In granting the above modes of authorization, various fundamental conditions are required to be satisfied. That is to say, the Malaysian Government needs to ensure the applicants have fulfilled, or are going to fulfil, certain requirements. Che Zuhaida (2014) further clarifies that they are, among others: the applicant is competent to operate the site, and/or facility, and/or space object; the insurance requirement has been or will be satisfied; obtaining the environmental approval (if any) under Malaysian law; providing an adequate environmental plan; conducting the operation in such a way as to prevent the contamination of outer space or adverse changes to the Earth's environment; the activities will not jeopardize public health or the safety of persons or property; conducting activities in such a way as to avoid interference with others' activities in the peaceful exploration and use of outer space; the space object does not contain any nuclear weapon or weapon of mass destruction of any other kind; the activities are consistent with Malaysia's international obligations; and the activities will not impair the national security of Malaysia. In the event the applicant succeeds in proving those conditions, the government can then consider granting the authorization. With such authorization, the state government is afterwards able to control and govern the activities accordingly. Such action is in fact crucial as it involves the issue of the state government's liability and responsibility. Monitoring and Supervision Rules The second legal rule proposed is from the perspective of monitoring and supervision. After applying for and receiving authorization, there should be a mode to ensure that the space activities are in compliance with the laws. The rules of monitoring and supervision are the most appropriate means to ensure the compliance of participants with the relevant Malaysian laws. Indeed, international space law also imposes on states the requirement of continuing supervision, apart from the obligation of authorization, particularly when the private sector is involved (Article VI, Outer Space Treaty 1967). Hence, any outer space activities that will be performed, after they have been authorized by the government, should also be monitored constantly and supervised appropriately to ensure they are in conformity with the laws. At this juncture, in formulating the rules of monitoring and supervision, two kinds of power or authority are proposed: (1) the powers of the Minister; and (2) the power of a supervision and monitoring committee or body. For the first, the power of the Minister, a related Minister shall be given power to monitor and continuously supervise the related activities. In view of this, apart from his authority to monitor and supervise the activities, such power shall also extend to the power of giving directions as well as the power of revocation, variation, or suspension of the former authorization (Che Zuhaida 2014). This means that, in exercising his duty of monitoring and supervision, the Minister also has the authority to give any direction, as appropriate, to the person who conducts space activities, particularly in the event of such person contravening the authorization requirements and conditions. The Minister also has the legal capacity to revoke, vary or suspend such authorization. Che Zuhaida (2014) further specifies that this is allowed especially when public health or national security is affected, or to ensure compliance with Malaysia's international obligations. 
The second, the supervision and monitoring committee or body, is established to assist the Minister in exercising the power of monitoring and supervision of outer space activities. Che Zuhaida (2014) firmly stresses that this committee or body shall consist of a group of Malaysian technical and legal experts, assisted, if necessary, by foreign experts. The committee shall constantly monitor the activities to ensure the compliance of the persons involved with Malaysian rules and laws. Besides that, the committee has the authority to supervise such persons and, furthermore, to confirm that the related operation will not endanger any life or property. The committee also has the capacity to enter the site and make an inspection of its facility, object, and any related equipment of the space activities, and is permitted to do anything that is reasonably necessary in performing its duty of monitoring and supervision of the activities. The aforementioned tasks are significant, as the state government can thereby ensure that the space activities conducted conform to the rules and laws, especially at the international level. Such a circumstance is essential because it concerns the matters of liability and responsibility of the government of a state. Registration of Space Object The third legal rule proposed is from the perspective of registration of space activities. This relates to the registration of a space object and the establishment of a national registry. Anybody who wishes to start activities under Malaysian law, after obtaining authorization from the relevant authority, is required to register the object that he plans to launch into outer space with the relevant authority. This is paramount for the purpose of identifying the owner of the object, especially in the event of it being found outside the territory of the state of registry or having caused destruction during its operation. The United Nations outer space conventions have indeed legally recognized the ownership of a space object, in that it shall belong to the state of registry, which will thus retain jurisdiction and control over the object (Article VIII, Outer Space Treaty 1967; Registration Convention 1975). This international space law also states that when the object is found outside the territory of the state of registry, it shall be returned to the state of registry upon its request (Article VIII, Outer Space Treaty 1967; Registration Convention 1975). Not only that, the law also imposes on states a requirement of registering the space object by means of an entry in an appropriate registry, and of further informing the United Nations Secretary-General of such a registry (Article II(1), Registration Convention 1975). 
Hence, in view of the above, it is proposed that Malaysia establish a national space registry to deal with the registration of outer space objects. As written in the draft Malaysian Outer Space Act proposed by Che Zuhaida (2014), any object to be launched into outer space under Malaysian jurisdiction must be registered in the Malaysian national space registry and allocated a registration number for the purpose of identification. Che Zuhaida (2014) further states that there shall also be rules on the entry of information about the space object in the national registry, consisting of particulars such as: the registration number, location of launch, date and time of launch, general functions of the object, its orbital parameters, the manufacturer's name, the operator's name, elements on board the object, the names of any other launching states (if applicable), and any other necessary information.

With respect to the entry of information, there must be a time limit for doing so. A rule is proposed for the Malaysian outer space legislation requiring that the information be entered in the registry not later than 30 days following the launch of the space object (Che Zuhaida 2014). In the event of any modification or change to the object, the relevant information in the registry must also be updated; in other words, the national registry must be kept informed of the latest changes.

Rules on the registration of space objects at the national level are indispensable because they relate to the state's liability, responsibility, and indemnification. Once a state can ensure that space objects are properly registered at the national level, the government will have the capacity to deal appropriately with matters of responsibility, liability, and indemnification.

Outer Space Legislation: Is Wasatiyyah Approach Necessary?

This part aims to answer whether the wasatiyyah approach is necessary for constructing outer space legislation. The earlier sections presented and discussed three crucial rules proposed for the Malaysian outer space legislation. In responding to the question, it is therefore necessary to discuss those rules in light of the elements of maqasid. The discussion will concentrate on the relevant maqasid syariah, including the preservation of life (nafs), the preservation of property or wealth (mal), and other relevant elements where necessary, and will analyse whether the application of the wasatiyyah approach is needed to obtain a better outcome for outer space legislation. It must be noted that the wasatiyyah approach, within this scope, refers to a fair or just evaluation based on the Islamic conception of justice, and not merely a mathematical moderation; in other words, it relates to the notion of putting things in their proper place. Thus, in constructing the outer space legislation, this concept of just evaluation will be applied in determining the appropriateness of the rules for achieving the best outcome of the Malaysian outer space legislation.
With respect to the rules of authorization, it is indispensable to have a specific mode of authorizing the activities. This is in fact a way of safeguarding and controlling space activities so that they do not harm or endanger the lives of the public or their property. Since the nature of space activities may expose the public and their property to danger and to loss of life and property, it is a just evaluation, and indeed necessary, to impose rules of law on the method of authorization so as to protect and preserve the life (nafs) and property (mal) of the public at large.

Relying on the wasatiyyah approach and the maqasid elements of preservation of life (nafs) and property (mal), it is right and proper to provide several modes of authorization for dealing with outer space activities. In other words, this proposal is made on the basis of a just and fair evaluation, on the reasoning that the different modes proposed reflect the different circumstances under which applications for authorization may come before the Malaysian authority. On this basis, it is proposed that an applicant who plans to operate only a space site and facility should apply for a space site and facility licence only, whereas those who plan to launch or re-enter an object should apply for a space launch and re-entry licence. Similarly, an overseas launch or return certificate is suggested for Malaysian nationals, or firms or bodies incorporated under Malaysian law, operating launch or re-entry objects outside Malaysia, while those who intend to conduct space activities solely for research purposes should apply for an experimental permit. Such authorizations should only be granted where the applicant succeeds in satisfying certain requirements (as mentioned earlier). Requirements such as the competency of the applicant to operate the facility, the assurance that the activities will not harm public health and property, and the assurance that the activities will be conducted in a peaceful manner will in fact safeguard the life (nafs) and property (mal) of the public at large. This reflects the notion of just and fair evaluation under the wasatiyyah approach.

The same reasoning applies to the monitoring and supervision rules. To obtain the best outcome for space legislation, the wasatiyyah approach has been applied. It is a just and proper evaluation, with respect to monitoring and supervision, to have a dual system of monitoring and supervision (i.e., the Minister and a committee or body). Indeed, it is more balanced and just to establish such a dual system, as it provides checks and balances in the performance of the tasks of supervising and monitoring space activities. Therefore, the Minister in charge, apart from his obligation to supervise and monitor the activities, should be assisted by a group of experts consisting of technical and legal professionals. Such an arrangement is, in practice, more effective and efficient than a situation in which the Minister acts alone. With regard to the maqasid syariah of protecting the life (nafs) and property (mal) of the public during the conduct of outer space activities, it also appears to be more successful and sustainable.
For the registration rules, it is proposed that a Malaysian national registry be established. It is a right and justified action to have a proper national registry for dealing with the registration of outer space activities. A proper registry for outer space activities will reflect the systematic arrangements and preparation of the Government of Malaysia. For the protection of the property (mal) and life (nafs) of the public, it is crucial to ensure that any object launched into outer space is properly registered. Once registration is completed, the owner of the object can easily be traced, which is indispensable in the event of any loss or harm arising from the launch or re-entry of the space object. The proposed requirement to update the information about the object likewise corresponds with the notion of Islamic justice, as it ensures that the most current information about the object is available. Such arrangements will assist a victim who suffers loss or harm because of the object in claiming damages or compensation, thereby meeting the maqasid of syariah.

From the foregoing discussion, it is observed that, in realising a better outcome for space legislation, the application of the wasatiyyah approach is indeed necessary. The concept of wasatiyyah, based on just evaluation, can justify the correctness and appropriateness of the space rules, that is, whether they truly uphold the Islamic conception of justice and preserve the elements of maqasid syariah. Being able to verify this will lead to a better result.

Concluding Remarks

It is submitted that the wasatiyyah approach should be applied in the construction of outer space legislation. The discussion of the application of the wasatiyyah approach to the proposed rules of authorization, monitoring and supervision, and registration of space objects has demonstrated its capability of determining the appropriateness of such rules for governing the space activities of a state in accordance with the notion of Islamic justice. In other words, the approach has proven capable of producing an excellent outcome for space legislation.

Therefore, this paper firmly proposes that such an approach be utilised in developing outer space rules. This will, to a great extent, contribute significantly to the growth of a state's outer space legislation and help ensure its best outcome. It should be noted that an excellent legislative outcome will greatly affect the future involvement and participation of a state and its nationals in outer space activities, particularly for a state, such as Malaysia, that has yet to establish such legislation.
Deconstructing transcriptional variations and their effects on immunomodulatory function among human mesenchymal stromal cells

Background

Mesenchymal stromal cell (MSC)-based therapies are being actively investigated in various inflammatory disorders. However, functional variability among MSCs cultured in vitro will lead to distinct therapeutic efficacies. To date, the mechanisms behind this immunomodulatory functional variability in MSCs remain unclear.

Methods

We systematically investigated transcriptomic variations among MSC samples derived from multiple tissues to reveal their effects on the immunomodulatory functions of MSCs. We then analyzed transcriptomic changes of MSCs licensed with IFNγ to identify potential molecular mechanisms that result in distinct MSC samples with different immunomodulatory potency.

Results

MSCs were clustered into distinct groups showing different functional enrichment according to their transcriptomic patterns. Differential expression analysis indicated that different groups of MSCs deploy common regulation networks in response to inflammatory stimulation, while expression variation of genes in these networks could lead to different immunosuppressive capability. These differentially responsive genes also showed high expression variability among unlicensed MSC samples. Finally, a gene panel derived from these genes was able to regroup unlicensed MSCs with different immunosuppressive potencies.

Conclusion

This study revealed genes whose expression variation contributes to the immunomodulatory functional variability of MSCs and provides a strategy to identify candidate markers for assessing the functional variability of MSCs.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13287-020-02121-8.

Functional variability among MSC preparations could be one of the leading reasons for inconsistent clinical outcomes [4]. MSCs have been isolated from various tissues, such as bone marrow [5], adipose tissue [6], umbilical cord [7,8], and placenta [9,10]. These cells comply with the minimal criteria defined by the International Society for Cellular Therapy (ISCT) in 2006 [11] based on their morphological, phenotypic, and functional characteristics. Recently, however, an increasing number of MSC-based studies have demonstrated that MSCs derived from different donors, tissues, and even sub-clones of the same cell line differ in their functional properties, such as immunomodulatory function, which will impact their applications [12][13][14][15]. Besides cell origin, heterogeneity among MSCs can also be introduced by different isolation methods, culture media and methods, passage numbers, and/or freezing processes, leading to changes in proliferation and differentiation capacities as well as in immunosuppressive potency [16][17][18].

The unique immunomodulatory plasticity of MSCs makes them an invaluable cell type. MSCs exert their therapeutic effects by forming a balanced inflammatory and regenerative microenvironment, and their immunomodulatory capabilities are not constitutive but rather are licensed by inflammatory cytokines, such as IFNγ and TNFα [1].
Licensed MSCs can release various cytokines (such as TGFβ, IL-10, CCL2, IL-6, IL-7, CXCL9, and CXCL10) [19,20], growth factors (such as HGF and LIF) [21,22], immunosuppressive molecules (such as NO, PGE2, TSG6, HO1, and galectins) [23,24], and/or MSC-derived exosomes [25][26][27] to modulate the differentiation, maturation, and inflammatory state of immune cells, such as dendritic cells (DCs), macrophages, and monocytes, and to promote the formation of regulatory T (Treg) cells while preventing the activation of effector T cells [24]. In addition, MSCs responding to an inflammatory environment can upregulate the immunosuppressive molecule PD-L1, which inhibits T cell activation [28], and FASLG, which induces T cell apoptosis [29], through cell-to-cell interaction. Recent studies have greatly improved our understanding of the immunoregulatory mechanisms of MSCs. However, why different MSC samples differ in immunomodulatory potency before licensing remains unclear and needs to be further investigated [1].

Here, we comprehensively investigated transcriptomic variations among MSC samples derived from multiple tissues. According to their expression patterns, we categorized these samples into 7 groups exhibiting distinct functional enrichment. To identify potential molecular mechanisms that result in distinct MSCs with different immunomodulatory potency, we analyzed transcriptome changes of MSCs licensed with IFNγ. Differential expression analysis indicated that different groups of MSCs deployed common regulation networks in response to inflammatory stimulation, while expression variation of genes in these networks could underlie their different immunosuppressive capability. We also found that these differentially responsive genes showed high expression variability among unlicensed MSC samples. Finally, a gene panel was derived from these genes and was able to regroup unlicensed MSCs with different immunosuppressive potency.

Umbilical cord-derived MSCs (UC-MSCs) were isolated from Wharton's jelly (WJ) within the umbilical cord after dissection and removal of the arteries, vein, and amniotic epithelium. Tissue explants were used to isolate and culture UC-MSCs with the same method as for PL after tissue dissection. Human dental pulp-derived MSCs (DP-MSCs) were isolated from dental pulp exposed by fracturing the dental crown into several parts with bone forceps, as described previously [30]. The dental pulp was then enzymatically treated with 1 mg/ml type I collagenase (Sigma) and 3 mg/ml type II dispase (Sigma) for 1 h and cultured in MSC medium at 37°C with 5% CO2 in a humidified atmosphere. When cell density reached about 80% confluence, cells were dissociated with TrypLE™ Select (ThermoFisher Scientific) incubated at 37°C for 2 min. The collected cells were passaged about every 3-5 days at a seeding density of 5000 cells/cm2. All assays were performed using MSCs between passages 2 and 5.

IFNγ treatment and cell collection

MSCs were seeded into 6-well plates at a density of 5000 cells/cm2 and cultured in MSC medium at 37°C with 5% CO2 in a humidified atmosphere. When cell density reached about 70% confluence, MSCs were stimulated with IFNγ (5 ng/mL, R&D) for 24 h; meanwhile, parallel untreated wells were used as paired controls. After 24 h of treatment, the medium was removed, the cells were washed 3 times with PBS and lysed by adding 1 ml of TRIzol (Invitrogen) to each well. For total mRNA extraction, each sample was pooled from 2 wells of 6-well plates cultured at the same time.
RNA-Seq library construction and sequencing

Total mRNA was extracted using TRIzol (Invitrogen) reagent, as described previously [31]. Briefly, cells lysed in TRIzol were centrifuged, and chloroform was added to the supernatant and mixed well. After centrifugation, the supernatant was mixed with chloroform/isopropanol (24:1) and centrifuged again. An equal volume of isopropanol was added to the supernatant and stored at −20°C for 1 h, and the samples were then centrifuged to precipitate the RNA. RNA was washed twice with 75% alcohol and dissolved in nuclease-free water. The purity, integrity, and concentration of the RNA were assessed with a Nanodrop, agarose gel electrophoresis, and an Agilent 2100 Bioanalyzer, respectively; cDNA was then synthesized and PCR was used to construct the RNA-Seq library. All protocols for BGISEQ-500 library construction, preparation, sequencing, and quality control were provided by BGI. To enhance the repeatability of our experiments, 13 cell lines (UC (n = 7), PL (n = 3), AD (n = 2), and PD (n = 1)) banked in the National Institutes for Food and Drug Control were independently cultured and treated with IFNγ using methods similar to those described above. The cells were then lysed in TRIzol and shipped to our labs for RNA-seq.

RNA-Seq data processing and quality control

To obtain publicly available RNA-seq data of MSCs, we searched the Gene Expression Omnibus using the keywords "(MSC OR Mesenchymal stem cell OR Mesenchymal stromal cell) AND "Homo sapiens"[porgn:__txid9606]" with the "Expression profiling by high throughput sequencing" study type. We then manually removed samples cultured with particular treatments, from donors with particular diseases, with gene modifications, or differentiated from ESCs, iPSCs, or other cell types (Fig. S1). Only samples sequenced on Illumina platforms with no fewer than 10,000,000 reads were retained. In total, we obtained 120 samples, whose raw files were downloaded from the NCBI SRA database [32]. Quality control for each sample was performed with FastQC; adaptors and poor-quality bases at read ends were trimmed with cutadapt [33] before mapping. Reads were mapped to the human genome (GRCh38) using HISAT2 with default parameters [34]. Raw counts of sequencing reads per gene were extracted with featureCounts [35]. The MSC RNA-seq data sequenced in our lab were processed with the same mapping and feature-count extraction steps as above. After read mapping and raw count extraction, we further compared the percentage of reads aligned exactly one time and the median pairwise correlation coefficient for each sample; samples with less than 60% of reads aligned exactly one time or with a median pairwise correlation coefficient r below 0.9 were considered outliers (Fig. S2a) and excluded from further analysis. Finally, 69 downloaded samples together with our 20 untreated samples were used to investigate transcriptomic variation among MSC samples (Table S1).

Filtering and normalization

Expressed genes in MSCs were defined as genes with a counts per million (CPM) value greater than 1 in at least 10% of the total samples; the remaining genes were considered not detected or lowly expressed (CPM < 1 in at least 90% of the total samples) and were filtered out before normalization. The trimmed mean of M values (TMM) normalization method was used to estimate scale factors between samples and normalize for RNA composition with the calcNormFactors function in the R package edgeR [36,37].
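To make the filtering and normalization steps above concrete, the following R sketch illustrates the described workflow with edgeR under stated assumptions: `counts` is a placeholder for the genes-by-samples matrix of raw counts from featureCounts, and the thresholds mirror those in the text (CPM > 1 in at least 10% of samples, TMM scale factors from calcNormFactors). It is a minimal sketch, not the authors' exact script.

```r
# Minimal sketch of the filtering and TMM normalization steps described above.
# Assumes 'counts' is a genes x samples matrix of raw read counts from featureCounts.
library(edgeR)

y <- DGEList(counts = counts)

# Keep genes with CPM > 1 in at least 10% of samples; drop undetected/lowly expressed genes
cpm_mat <- cpm(y)
keep <- rowSums(cpm_mat > 1) >= ceiling(0.1 * ncol(cpm_mat))
y <- y[keep, , keep.lib.sizes = FALSE]

# TMM normalization factors to correct for RNA composition between samples
y <- calcNormFactors(y, method = "TMM")

# Log-CPM values (using TMM-adjusted library sizes) for downstream analyses
logcpm <- cpm(y, log = TRUE, prior.count = 1)
```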
Variable-expressed gene identification

To quantify the variability of gene expression across MSCs, the distance-to-median (DM) statistic was used as a corrected version of the coefficient of variation (CV) that is independent of mean expression and gene length, as previously described [38]. Briefly, expression was quantified as counts per million (CPM), and the mean-corrected residual of the squared CV of each gene was computed as its distance to the corresponding rolling median. To correct for the effect of gene length on the mean-corrected residual, DM was defined as the difference between the mean-corrected residual of the squared CV of each gene and its expected residual given gene length. We computed the rolling median in windows of 50 genes and set the number of overlapping genes between adjacent windows to 25 [38].

Data dimension reduction, visualization, and clustering

The R function prcomp with default parameters was used to perform principal component analysis (PCA) on the expression of selected gene sets. The Rtsne function from the Rtsne package was applied to visualize the high-dimensional data in a two-dimensional map in Fig. 3a, b with initial_dims = 10; before running, the seed was set to 1. A graph-based clustering approach [39] was used to cluster the samples into different groups. The first 10 PCs of the data were used to construct an SNN matrix with the FindNeighbors function in Seurat v3 with k.param set to 10. We then identified clusters using the FindClusters command with the resolution parameter set to 2.

Differential expression analysis

To identify differentially expressed genes (DEGs), we used the R package edgeR to organize, filter, and normalize the data, and quasi-likelihood F tests were applied to identify DEGs according to the edgeR guide [37,40]. Genes that differed in expression by at least two-fold with a false discovery rate (FDR) < 0.1 were designated as DEGs.

GO and pathway enrichment analysis

To find GO and KEGG terms enriched in defined gene sets, we used the DAVID web tool [41]. For figures, we report only the top-ranked terms, as indicated in the legends.

Gene set enrichment analysis

Gene set enrichment analysis (GSEA) [42] was performed with 1000 permutations using the GSEA_4.0.2 desktop application. Gene lists were ranked by a significance score defined as −log10(FDR) multiplied by the log-transformed fold-change between two conditions (or by the DM value for Fig. 2a, g, S3c, and S3d). Gene sets from the Molecular Signatures Database (MSigDB) were used for GO and KEGG analysis. Gene sets containing between 15 and 300 genes were included to provide more biologically meaningful results and reduce false positives.

Pathway enrichment analysis and visualization

Pathway enrichment analysis was carried out according to the protocol described in [43]. We downloaded the pathway gene set database Human_GOBP_AllPathways_no_GO_iea_October_01_2019_symbol.gmt from the Bader lab, dated October 01, 2019, for all pathway enrichment analyses. Gene lists were ranked by the significance score defined as −log10(FDR) multiplied by the log-transformed fold-change between two conditions. After GSEA, pathway sets containing between 15 and 300 genes were included to provide more biologically meaningful results and reduce false positives. For map visualization, pathway enrichment analysis results were interpreted in Cytoscape using its EnrichmentMap, AutoAnnotate, WordCloud, and clusterMaker2 applications [43,44].
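As a concrete illustration of the dimension reduction and graph-based clustering parameters described above (first 10 PCs, k.param = 10, resolution = 2, seed set to 1 before t-SNE), the R sketch below uses the functions named in the text. The object `logcpm_hvg` is an assumed placeholder for the samples-by-genes log-expression matrix restricted to the HVGs, and the t-SNE perplexity is an added assumption (not stated in the text) chosen to suit a cohort of roughly 100 samples.

```r
# Sketch of PCA, SNN graph-based clustering, and t-SNE visualization of MSC samples.
# Assumes 'logcpm_hvg' is a samples x genes matrix of log-CPM values for the HVGs.
library(Seurat)
library(Rtsne)

# PCA on the HVG expression matrix, keeping the first 10 principal components
pca <- prcomp(logcpm_hvg)
pcs <- pca$x[, 1:10]

# Shared nearest-neighbor (SNN) graph with k.param = 10, then modularity clustering
# (FindNeighbors on a plain matrix returns nearest-neighbor and SNN graphs)
graphs   <- FindNeighbors(pcs, k.param = 10)
clusters <- FindClusters(graphs$snn, resolution = 2)

# Two-dimensional t-SNE map for visualization (seed fixed for reproducibility;
# perplexity is an assumption for ~100 samples)
set.seed(1)
tsne <- Rtsne(logcpm_hvg, initial_dims = 10, perplexity = 20)
plot(tsne$Y, col = as.integer(as.factor(clusters[[1]])), pch = 19,
     xlab = "t-SNE 1", ylab = "t-SNE 2")
```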
The pathway enrichment map was created with an FDR Q value < 0.001 and a combined coefficient > 0.375 with combined constant = 0.5 [44].

MSC and PBMC co-culture for immunosuppressive potency assessment

The suppressive effect of MSCs on leukocyte expansion was confirmed as described previously [45]. Briefly, MSCs were seeded into 24-well plates at a density of 1 × 10^5 cells per well, and CFSE (Sigma)-labeled human PBMCs were added to each MSC well at a 1:5 (cell number) co-culture ratio of MSCs to PBMCs. Then, 10 μg/mL phytohemagglutinin (PHA) (Sigma) was used to activate the PBMCs. PBMCs at the same density without MSCs and PHA were used as the negative control. For the positive control, we plated the same number of PBMCs per well with PHA to activate the leukocytes. After 5 days of co-culture, PBMCs were collected and measured on a FACSCalibur platform (BD Biosciences). The suppression of T cell proliferation by MSCs was calculated as 100% − (T cell proliferation after co-culture with MSCs / T cell proliferation in the positive control × 100%). The negative control was used to define the threshold of the CFSE signal for non-proliferating T cells.

Data selection and quality control

To comprehensively investigate transcriptomic variations among MSC cell lines cultured in vitro, RNA-seq data from a total of 102 samples were integrated for gene expression analysis, of which 69 samples were selected from public databases and 33 samples were newly sequenced in this study (Fig. S1 and S2; Table S1). Overall, the MSC samples were derived from 6 tissues (Fig. 1a) in 17 studies (Fig. S2b), including adipose tissue (AD), bone marrow (BM), dental pulp (DP), endometrium (ED), placenta (PL), and umbilical cord (UC). For these samples, read counts were mostly between 10,000,000 and 60,000,000, and more than 60% of reads aligned exactly one time to the transcriptome (Fig. S2c), indicating the high quality of the collected RNA-seq data. The minimal criteria for defining MSCs state that MSCs must express three positive markers, i.e., CD105 (ENG), CD73 (NT5E), and CD90 (THY1), and lack expression of several negative markers, including CD45 (PTPRC), CD34, CD14 or CD11b (ITGAM), CD79a (CD79A) or CD19, and HLA-DR (HLA-DRA, HLA-DRB1, etc.) [11]. Indeed, gene expression levels ranked by TPM (transcripts per kilobase million) showed that the positive markers were highly expressed (Fig. 1b) while the negative markers were weakly or not expressed in our samples, except that HLA-DR molecules showed highly variable expression across samples (Fig. 1c). Considering that MSCs express HLA-DR surface molecules not only in response to stimulation, such as by IFNγ, but also under some normal expansion culture conditions [46,47], we did not remove the samples with higher expression of HLA-DR from the following gene expression analysis.

To further investigate whether expression variations among the MSC samples were related to specific MSC biological functional properties, we next performed gene set enrichment analysis (GSEA) with the above pre-ranked gene list. Notably, genes with highly variable expression were significantly enriched in immune modulation and developmental processes, such as humoral immune response, response to chemokine, nephron development, cardiac chamber morphogenesis, and digestive system development (Fig. 2e, f). Expression variability was also overrepresented among genes encoding proteins located in the extracellular matrix and at the cell surface, with cytokine, receptor regulator, and binding activities (Fig. S3c).
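The suppression readout from the co-culture assay described earlier in this section reduces to a comparison against the PHA-stimulated positive control; the small R sketch below illustrates that arithmetic with hypothetical proliferation percentages derived from CFSE dilution gating (the numbers are invented for illustration only).

```r
# Sketch of the MSC-mediated T cell suppression calculation described above.
# Proliferation values are the percentage of CFSE-diluted (proliferating) T cells,
# gated using the unstimulated negative control as threshold. Numbers are hypothetical.
prolif_positive_ctrl <- 85                              # PHA-activated PBMCs, no MSCs (%)
prolif_with_msc      <- c(G0 = 60, G2 = 25, G4 = 30)    # PBMCs co-cultured with MSCs (%)

suppression <- 100 - (prolif_with_msc / prolif_positive_ctrl * 100)
round(suppression, 1)
# yields roughly 29.4% (G0), 70.6% (G2), and 64.7% (G4) suppression of proliferation
```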
On the other hand, genes with stable expression were housekeeping genes involved in basic cellular functions, such as RNA processing (Fig. S3d). Together, these results demonstrated that transcriptome-wide sample-to-sample variations among MSCs were associated with their functional properties.

Grouping MSC samples based on highly variable genes

To identify candidate groups of MSC samples based on gene expression pattern, genes with a DM value greater than 1 were selected as highly variable genes (HVGs) for data dimension reduction and clustering. The results showed that the samples we collected could be clustered into 7 groups (G0-G6) (Fig. 3a). Among these groups, G1 included samples mostly derived from BM, while G3 included all AD-MSCs plus some BM-MSCs (Fig. 3b; Table S3). The other groups included MSC samples derived from multiple tissues (Table S3). Meanwhile, the HVGs we selected grouped into five clusters with distinct functional enrichment, such as system development, tube development, metabolic process, and response to cytokine (Fig. 3b), indicating that different groups of MSC samples may have different differentiation propensities and immunomodulatory potencies. In addition, differential expression analysis among these groups demonstrated that the functional enrichment of DEGs among the groups is likewise associated with MSC function-related properties, such as angiogenesis, nervous system development, cell migration, and inflammatory response. Taken together, these results demonstrated that MSCs from different tissue origins can be classified into the same group with similar functional enrichment based on the expression of HVGs, although tissue origin has been reported to greatly impact the function of MSCs [56,57].

To illustrate the expression differences among these groups, we present results obtained from comparisons among the G0 (n = 22, 15 downloaded, 7 newly sequenced), G2 (n = 15, 1 downloaded, 15 newly sequenced), G3 (n = 13, 11 downloaded, 2 newly sequenced), and G4 (n = 12, 3 downloaded, 9 newly sequenced) groups (Table S4), to which our 33 MSC samples were assigned (Table S3). GO biological process enrichment analysis demonstrated that G0 significantly upregulated genes involved in response to stimulus and inflammatory response, including cytokines such as CXCL2, CXCL3, CXCL5, and CXCL20 (Fig. S4A and B). The upregulated genes in G2, G3, and G4 were overrepresented in developmental processes involving distinct developmental cell types. For example, genes related to nervous system development, circulatory system development, and nephron development were upregulated in the G2, G3, and G4 groups, respectively (Fig. S4C-S4H). Altogether, we found significant gene expression variations among MSCs that could potentially influence their functional properties. Therefore, we hypothesized that quantitative RNA analysis of selected genes from the HVGs could serve as a candidate matrix assay for characterizing MSC potency [58].

Characteristics of expression changes in MSCs upon IFNγ licensing

Although the above functional enrichment analysis demonstrated that some inflammatory response-related genes were over-represented in G0 (Fig. S4), it was not clear how these differences would affect MSC immunomodulatory behavior.
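Both the group-to-group comparisons above and the licensing comparison that follows rely on the edgeR quasi-likelihood workflow described in the Methods. The following R sketch shows that workflow for a two-group contrast: `y` is assumed to be the filtered, TMM-normalized DGEList from earlier and `group` a hypothetical factor of group labels; the two-fold change and FDR < 0.1 cutoffs are those stated in the Methods.

```r
# Sketch of the edgeR quasi-likelihood test used to call DEGs between sample groups.
# Assumes 'y' is a filtered, TMM-normalized DGEList and 'group' a factor of group
# labels (e.g. "G0" vs "G2") for the samples in 'y'.
library(edgeR)

design <- model.matrix(~ group)
y   <- estimateDisp(y, design)
fit <- glmQLFit(y, design)
qlf <- glmQLFTest(fit, coef = 2)      # tests the second group level against the first

tab  <- topTags(qlf, n = Inf)$table
degs <- tab[tab$FDR < 0.1 & abs(tab$logFC) >= 1, ]   # >= two-fold change, FDR < 0.1
head(degs)
```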
Due to the critical role of IFNγ in licensing MSC-mediated T cell suppression [28], and because IFNγ can be used as an alternative to human PBMCs as responder cells in an MSC potency assay [19], IFNγ was used to treat 27 MSCs within the G0 (n = 4), G2 (n = 15), and G4 (n = 8) groups to study response variation at the transcriptomic level. For clarity, we refer to these subsets as ssG0, ssG2, and ssG4 because of their small size compared to the number of samples in each group. Compared with their paired untreated samples, we identified 902 differentially expressed genes (655 upregulated, 72.6%, vs 247 downregulated, 27.4%) (Fig. 4a, Table S5). In line with previous studies, IDO1, the dominant determinant of MSC-mediated inhibition of T cell proliferation, and chemokines, including CCL5, CXCL9, CXCL10, and CXCL11, were upregulated in MSCs upon IFNγ licensing, which could potentially form a chemokine-IDO axis to exert immunoregulatory effects on various immune cells (Fig. 4b) [19,[59][60][61]. In addition, cytokines, including CCL2, CCL7, and IL6, the apoptosis inducer TNFSF10 (TRAIL), immune checkpoint proteins, including CD274 (PD-L1) and PDCD1LG2 (PD-L2), cell adhesion molecules, including ICAM1 and VCAM1, and class II major histocompatibility complex (MHC) genes, including HLA-DRA, HLA-DRB1, and HLA-DRB5, were also overexpressed in IFNγ-activated MSCs (Fig. 4b and Table S5). These genes have likewise been shown to play critical roles in MSC-mediated immunosuppression in previous studies [58]. Meanwhile, IFNγ licensing triggered specific signaling pathways in MSCs, such as upregulation of JAK2, JAK3, STAT1, STAT2, SOCS1, SOCS3, TLR2, and TLR3, to orchestrate their immune response. A pathway enrichment map further illustrated that IFNγ-licensed MSCs upregulated several gene clusters linked to response to interferon, immune response, and antigen and protein degradation (Fig. 4c). Interestingly, the sterol biosynthetic process was downregulated in IFNγ-treated MSCs (Fig. 4c). Sterols are a major component of cellular membranes and are essential for mammalian cell growth [62]. Decreased sterol synthesis could partially explain why IFNγ leads to a cytostatic response in MSCs [60]. Overall, a panel of immunomodulation-related genes was upregulated upon IFNγ licensing, and here we define these upregulated genes as common response genes (CRGs).

To predict T cell suppression potency, the sum of the log-normalized expression of the VEGF, IFNa, CXCL10, GCSF, CXCL9, IL-7, and CCL2 genes, which have been correlated with the T cell suppression capacity of MSCs in previous studies [19,63], was calculated and used as an MSC immunosuppressive score for the 27 MSCs within ssG0, ssG2, and ssG4 treated with IFNγ. Our results showed that the MSC immunosuppressive score of ssG0 was significantly lower than those of ssG2 and ssG4 (Fig. 4d). MSC and PBMC co-culture experiments in vitro were performed on 16 MSCs (4 from ssG0, 8 from ssG2, and 4 from ssG4), and the results demonstrated that the T cell proliferation inhibition rate of ssG0 was significantly lower than those of ssG2 and ssG4. Taken together, these results demonstrated that different groups of MSCs, clustered by their expression patterns of HVGs across unlicensed MSC samples, can have distinct immunosuppressive capability, which may be reflected in their different responses to an inflammatory environment.
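As a sketch of how the immunosuppressive score above can be computed from the expression matrix, the R snippet below sums log-normalized expression over the seven reported genes. The matrix `logcpm` and the group labels are assumed from earlier steps, and the mapping of the protein names VEGF, IFNa, and GCSF to the gene symbols VEGFA, IFNA1, and CSF3 is an assumption made only for this illustration.

```r
# Sketch of the MSC immunosuppressive score: sum of log-normalized expression of the
# seven genes reported to correlate with T cell suppression capacity.
# 'logcpm' is a genes x samples matrix of log-CPM values (rows named by gene symbol).
# NOTE: the symbols VEGFA, IFNA1 and CSF3 for VEGF, IFNa and GCSF are assumptions.
score_genes <- c("VEGFA", "IFNA1", "CXCL10", "CSF3", "CXCL9", "IL7", "CCL2")

present <- intersect(score_genes, rownames(logcpm))
immunosuppressive_score <- colSums(logcpm[present, , drop = FALSE])

# Compare scores between two sub-groups, assuming 'subgroup' labels the treated samples
# wilcox.test(immunosuppressive_score[subgroup == "ssG0"],
#             immunosuppressive_score[subgroup == "ssG2"])
```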
Transcriptional variations of MSCs in response to an inflammatory environment imitated by IFNγ treatment

To identify genes that respond differentially to IFNγ licensing among the ssG0, ssG2, and ssG4 groups, we performed differential expression analysis and obtained a total of 472 genes defined as different response genes (DRGs). Most of the DRGs, downregulated in ssG0 but upregulated in ssG2 and ssG4, are enriched in immune response pathways (Fig. S5A-S5D), including several well-known immune-modulating genes, such as CXCL9, CXCL10, CXCL11, CCL2, CCL7, CCL8, CD74, CXCL16, CD7, CD14, CD83, and LGALS9 (Fig. 5a; Table S6). These genes are tightly involved in immunomodulatory processes, such as regulation of immune cell migration, T cell development and differentiation, T cell chemotaxis, immune activation, and cell survival [64][65][66][67]. To further identify the genes related to common immune-regulatory pathways and to differences in immunomodulatory capacities, we compared the DRGs with the CRGs. The genes unique to the CRGs, shared between CRGs and DRGs, and unique to the DRGs were designated Genes set1, Genes set2, and Genes set3, respectively (Fig. 5b). Among the 472 DRGs, 205 genes were shared with the CRGs and fall within Genes set2 (Fig. 5b and c; Table S7). Interestingly, functional enrichment analysis revealed that Genes set1 and Genes set2 were significantly involved in immunomodulatory functions, while Genes set3 was involved in the regulation of developmental processes (Fig. 5d), implying that IFNγ treatment may also partly influence the developmental behavior of MSCs. Taken together, these results suggested that MSCs exert immunosuppressive effects through common mechanisms, while variations in the expression of some immunomodulatory genes upon inflammatory priming could result in distinct immunosuppressive potency.

Refining a gene panel for grouping unlicensed MSCs with predictable immunosuppressive potency

According to our results, differential expression of certain genes in the IFNγ-licensed state could potentially explain differences in immunomodulatory activity. However, there is still a lack of reports on genes whose expression in unlicensed MSCs relates to their immunomodulatory potency. Considering that genes in Genes set2 were related not only to common immune-regulatory pathways but also to differences in immunomodulatory capacities (Fig. 5b, c), differential expression of these genes in the unlicensed state may contribute to the distinct immunomodulatory behavior of MSCs in response to an inflammatory environment. Interestingly, when we compared the DM distributions of genes in Genes set1, Genes set2, and Genes set3 across the total unlicensed MSC samples, genes in Genes set2 demonstrated significantly higher variation than those in Genes set1 (p = 2.06e−11) and Genes set3 (p = 1.80e−02) (Fig. 6a). Several immune response-related genes, such as CCL2, CCL7, CD74, TNFSF10, LGALS9, IFIT1, VCAM1, and ICAM1, fall within Genes set2 and were among the top highly variable genes (Fig. 2c and Table S2). These results indicated that expression variation of genes in Genes set2 may exert a greater influence during immune activation of unlicensed MSCs. We therefore took the top 100 genes with the highest DM values in Genes set2 as a gene panel and utilized their expression for data dimension reduction.
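A minimal sketch of how such a panel could be assembled and used for dimension reduction is given below; `dm` is assumed to be a named vector of DM values computed across the unlicensed samples, `genes_set2` a character vector of the genes shared between CRGs and DRGs, and `logcpm` the normalized expression matrix from earlier. The quadrant check is only an illustrative reading of PC1/PC2 signs, since sign conventions depend on the PCA solution.

```r
# Sketch of deriving the 100-gene panel from Genes set2 and projecting unlicensed
# samples onto its first two principal components.
panel <- names(sort(dm[genes_set2], decreasing = TRUE))[1:100]

pca_panel <- prcomp(t(logcpm[panel, ]))   # samples as rows
pc12 <- pca_panel$x[, 1:2]

# Quadrant membership on PC1/PC2 (signs depend on the particular PCA solution)
quadrant <- ifelse(pc12[, 1] < 0 & pc12[, 2] < 0, "third quadrant", "other")
table(quadrant)
```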
The results of principal component analysis (PCA) showed that all MSC samples in the G0 group lay in the third quadrant, while the majority of samples from the G1 to G5 groups lay in the first and second quadrants based on principal components 1 and 2 (Fig. 6b). Since the G0 group exhibited the lowest immunosuppressive capacity compared to the G2 and G4 groups in both the in silico and in vitro analyses (Fig. 5e), samples falling within the same quadrant as G0 may have lower immunosuppressive capability as well. Meanwhile, with IFNγ-treated samples included, PCA showed that the G6 group, of which all samples lay in the fourth quadrant, was closer to the IFNγ-treated samples (Fig. S6), indicating a pre-licensed state of these samples. Taken together, our results demonstrated that the gene panel selected here would be valuable for characterizing MSC immunomodulatory potency based on quantitative RNA-seq analysis.

Discussion

Since the first report on the characteristics of MSCs derived from human BM [68], studies have revealed that MSCs can be isolated from both pericytes and adventitial progenitor cells in nearly all tissues [69][70][71]. Although MSCs have been widely accepted as one of the most promising cell products to treat various degenerative and inflammatory disorders, such as graft versus host disease (GvHD), Crohn's disease, multiple sclerosis, and diabetes, clinical challenges remain, such as why the outcomes of advanced clinical trials have not been as encouraging as pre-clinical animal data in a wide array of disease models [17]. In addition to our limited understanding of the mechanisms of action that MSCs deploy to regulate their anti-inflammatory and tissue repair functionalities, functional variability and heterogeneity could hinder the development of effective assays for MSCs as potency release criteria for advanced clinical trials [1,17,58]. This variability and heterogeneity manifest among donors, among tissue sources, as well as within cell populations [72][73][74][75]. In addition, distinct cell separation and preservation methods, culture media, and numbers of passages can affect cell functionality. For example, human umbilical cord blood mononuclear cells tested before and after cryopreservation showed different abilities to treat stroke [76], aged MSCs underwent morphological, phenotypic, and differentiation potential changes [77], and long-term culture increased genetic instability in MSCs [78]. These studies indicated that MSCs with distinct cell preparation, fitness, culture methods, and expansion levels could differ in their tissue-protective and immunomodulatory properties [17,19]. However, the molecular contributors to this functional variability and heterogeneity remain unclear. Here, we analyzed RNA-seq data from 102 MSC samples derived from 6 tissues.

Several studies have been done to compare gene expression similarity and variability among MSC samples [79][80][81][82]. These studies demonstrated that different gene expression profiles could reflect the ontogenetic sources of MSCs and indicate their distinct differentiation potential or other functional properties. In line with these studies, our results also showed that MSCs were mostly grouped together with samples of the same tissue origin according to the expression pattern of HVGs (Fig. 3a, b; Table S3). However, these studies largely focused on expression differences among MSCs with distinct tissue origins, while ignoring the fact that MSCs from the same tissue might also exhibit functional variability.
The functional differences could come from a variety of cues, including chemical, physical, and biological factors, expansion level, and donor characteristics, which may result in changes in MSC functional characteristics [80,83]. For example, compared with younger counterparts, aged MSCs from the same tissue cultured with an identical method display delayed clonogenic capacity and a pro-inflammatory SASP-like phenotype, and their immunomodulatory properties are significantly reduced [83]. To address this, we performed data dimension reduction and clustering based on a nonparametric clustering technique (see the "Methods" section) to group the collected samples in the present study. Our results demonstrated that MSCs can be clustered into groups with diverse functional properties characterized by enrichment analysis (Fig. 3 and S4). Moreover, MSCs from different tissues can be classified into the same group, while MSCs from the same tissue can likewise be clustered into different groups according to the expression patterns of HVGs (Fig. 3a, b; Table S3), indicating the importance of potency assays for MSCs before clinical trials or application.

Despite different tissue sourcing, our results are in line with the view that MSCs likely share fundamental mechanisms of action mediating their anti-inflammatory processes [58]. Our data agree with reports that immunosuppression-related molecules, such as IDO1, CCL5, CXCL9, CXCL10, CXCL11, CD274, TNFSF10, CCL2, and FLT3LG (FLT3L), are upregulated upon IFNγ licensing (Fig. 4b; Table S6) [60,61], some of which are lowly or not expressed in unlicensed MSCs. Activated MSCs produce the chemokines CCL5, CXCL9, CXCL10, and CXCL11, which can recruit T cells to the proximity of MSCs, and suppress the proliferation and activity of T cells in their vicinity by expressing the tryptophan catabolism rate-limiting enzyme IDO1, acting through the metabolite kynurenic acid, and/or by expressing the immune checkpoint protein CD274 through cell-to-cell interaction [1,84,85]. In addition, a recent study demonstrated that MSCs might utilize an IFNγ-FLT3L-FLT3 axis to suppress inflammation in lupus through upregulating tolerogenic DCs [86]. Different groups of MSCs should deploy shared regulation networks to exert immunosuppressive function upon IFNγ licensing, including JAK-STAT, NF-kappaB, IL-12/IL-23, response to interferon, immune response, antigen and protein degradation, extrinsic apoptosis, and complement system signaling pathways, which could form a regulatory network to orchestrate MSC immunomodulatory function (Fig. 4c). However, gene expression variations could result in different responses among MSC groups treated with IFNγ, and these immunomodulation-related genes lay mainly within the shared regulation networks, including the abovementioned TNFSF10, CXCL9, CXCL10, CXCL11, and CCL2. Furthermore, human MSCs licensed with IFNγ have been tested in NOD-SCID mice, showing enhanced immunosuppressive properties that significantly reduce the symptoms of GvHD [61], indicating the potential clinical application of IFNγ-primed MSCs. Nevertheless, expression variability among MSCs, which could lead to different expression levels of immunomodulatory genes among licensed MSCs, implies different immunomodulatory potency after priming. Therefore, conditions for MSC priming, such as the optimum priming time, the concentration of IFNγ, and whether different MSCs could be adjusted to a similar immunomodulatory potency by priming design, need to be refined.
Experimental immunosuppression and immunomodulation strategies could also be applied to enhance the predictive value of preclinical studies with MSCs [87]. In essence, genetic and epigenetic variations contribute to functional variability among MSCs. Identification of functional markers of potency in unlicensed MSCs could facilitate our understanding of MSCs' mechanisms of action and the development of release potency assays [58]. IFNγ stimulation of MSCs recapitulates the molecular genetic changes observed in MSCs co-cultured with activated PBMCs [19]. However, it is necessary to note that the therapeutic effects of MSCs are multifaceted, synergistic responses that can form a balanced inflammatory and regenerative micro-environment in the presence of vigorous inflammation [1]; assays focused on only one functional aspect of MSCs, such as immunosuppressive potency, may ignore other functional capabilities that may also, to some extent, be linked to clinical results. Our analysis demonstrated that transcriptome-wide sample-to-sample variations among MSCs are associated with various functional properties (Fig. 2). Moreover, their functional similarity and disparity can be classified based on the expression of HVGs (Fig. 3), and we therefore speculate that these HVGs are valuable as candidate matrix assays for potency analysis of MSCs without licensing. Comparison of response patterns to IFNγ among MSCs further showed that genes shared between CRGs and DRGs are significantly more variable than the other two sets (Fig. 6). Based on the expression of these genes, we established a primary model, which faithfully assessed the immunosuppressive potency of unlicensed MSCs (Fig. 6). Beyond this, we infer that RNA-seq technology combined with our modeling approach can be extended to other functional variations of MSCs, such as interaction with innate immune cells and differentiation propensity.

Conclusions

In summary, our study demonstrated that MSC samples can be classified into groups exhibiting distinct functional properties, such as immunomodulatory potency, according to the expression pattern of HVGs. We also highlighted that MSCs deploy common regulation networks to exert immunosuppressive function, while expression variability of genes in these networks could result in distinct immunosuppressive potency among MSCs. Finally, we found that these differentially responsive genes showed high expression variability among unlicensed MSC samples as well, from which candidate markers were refined for the development of matrix assays to quantify the immunosuppressive potency of human unlicensed MSCs. In the future, with an increased number of MSC samples, our analysis approach can be extended beyond immunomodulatory potency to characterize other functional variations and related genes.

Additional file 1: Fig. S1. Workflow for data search and selection. Additional file 2: Table S1.
Ten simple rules for managing laboratory information

Author summary

Information is the cornerstone of research, from experimental (meta)data and computational processes to complex inventories of reagents and equipment. These 10 simple rules discuss best practices for leveraging laboratory information management systems to transform this large information load into useful scientific findings.

Introduction

The development of mathematical models that can predict the properties of biological systems is the holy grail of computational biology [1,2]. Such models can be used to test biological hypotheses [3], quantify the risk of developing diseases [3], guide the development of biomanufactured products [4], engineer new systems meeting user-defined specifications, and much more [4,5]. Irrespective of a model's application and the conceptual framework used to build it, the modeling process proceeds through a common iterative workflow. A model is first evaluated by fitting its parameters such that its behavior matches experimental data. Models that fit previous observations are then further validated by comparing the model predictions with the results of new observations that are outside the scope of the initial data set (Fig 1).

Fig 1. Information management enhances the experimental and modeling cycle. Our 10 simple rules for managing laboratory information (green) augment the cycle of hypothesis formulation, design, data analysis, modeling, and decision-making (gray). The experimental design phase is improved by carefully tracking your inventory, samples, parameters, and variables. Proactive data management and the thoughtful use of databases facilitate statistical and exploratory analyses as well as the development of conclusions that inform the next round of experiments. Frequent reevaluation of project, team, and workflow success is a critical component of refining experimental processes, developing a common culture, and positioning your research group in the greater scientific context. https://doi.org/10.1371/journal.pcbi.1011652.g001

Historically, the collection of experimental data and the development of mathematical models were performed by different scientific communities [6]. Computational biologists had little control over the nature and quality of the data they could access. With the emergence of systems biology and synthetic biology, the boundary between experimental and computational biology has become increasingly blurred [6]. Many labs and junior scientists now have expertise in both producing and analyzing large volumes of digital data produced by high-throughput workflows and an ever-expanding collection of digital instruments [7]. In this context, it is critically important to properly organize the exponentially growing volumes of experimental data to ensure they can support the development of models that can guide the next round of experiments [8].
We are a group of scientists representing a broad range of scientific specialties, from clinical research to industrial biotechnology. Collectively, we have expertise in experimental biology, data science, and mathematical modeling. Some of us work in academia, while others work in industry. We have all faced the challenges of keeping track of laboratory operations to produce high-quality data suitable for analysis. We have experience using a variety of tools, including spreadsheets, open-source software, homegrown databases, and commercial solutions to manage our data. Irreproducible experiments, projects that failed to meet their goals, datasets we collected but never managed to analyze, and freezers full of unusable samples have taught us, the hard way, lessons that have led to these 10 simple rules for managing laboratory information.

This journal has published several sets of rules regarding best practices in overall research design [9,10], as well as the computational parts of research workflows, including data management [11][12][13] and software development practices [14][15][16]. The purpose of these 10 rules (Fig 1) is to guide the development and configuration of lab information management systems (LIMS). LIMS typically offer lab notebook, inventory, workflow planning, and data management features, allowing users to connect data production and data analysis to ensure that useful information can be extracted from experimental data and increase reproducibility [17,18]. These rules can also be used to develop training programs and lab management policies. Although we all agree that applying these rules increases the value of the data we produce in our laboratories, we also acknowledge that enforcing them is challenging. It relies on the successful integration of effective software tools, training programs, lab management policies, and the will to abide by these policies. Each lab must find the most effective way to adopt these rules to suit their unique environment.
Rule 1: Develop a common culture

Data-driven research projects generally require contributions from multiple stakeholders with complementary expertise. The project's success depends on the entire team developing a common vision of the project objectives and the approaches to be used [19][20][21]. Interdisciplinary teams, in particular, must establish a common language as well as mutual expectations for experimental and publication timelines [19]. Unless the team develops a common culture, one stakeholder group can drive the project and impose its vision on the other groups. Although interdisciplinary (i.e., wet-lab and computational) training is becoming more common in academia, it is not unusual for experimentalists to regard data analysis as a technique they can acquire simply by hiring a student with computer programming skills. In a corporate environment, research informatics is often part of the information technology group whose mission is to support scientists who drive the research agenda. In both situations, the research agenda is driven by stakeholders who are unlikely to produce the most usable datasets because they lack sufficient understanding of data modeling [20]. Perhaps less frequently, there is also the situation where the research agenda is driven by people with expertise in data analysis. Because they may not appreciate the subtleties of experimental methods, they may find it difficult to engage experimentalists in collaborations aimed at testing their models [20]. Alternatively, their research may be limited to the analysis of disparate sets of previously published datasets [19]. Thus, interdisciplinary collaboration is key to maximizing the insights you gain from your data.

The development of a common culture, within a single laboratory or across interdisciplinary research teams, must begin with a thorough onboarding process for each member regarding general lab procedures, research goals, and individual responsibilities and expectations [21,22]. Implementing a LIMS requires perseverance by users; thus, a major determinant of the success of a LIMS is whether end-users are involved in the development process [17,23]. When the input and suggestions of end-users are considered, they are more likely to engage with and upkeep the LIMS on a daily basis [23]. The long-term success of research endeavors then requires continued training and reevaluation of project goals and success [19,21] (Fig 1).

These 10 simple rules apply to transdisciplinary teams that have developed a common culture allowing experimentalists to gain a basic understanding of the modeling process and modelers to have some familiarity with the experimental processes generating the data they will analyze [19]. Teams that lack a common vision of data-driven research are encouraged to work toward acquiring this common vision through frequent communication and mutual goal setting [19,20]. Discussing these 10 simple rules in group meetings may aid in initiating this process.
Rule 2: Start with what you purchase

All the data produced by your lab are derived from things you have purchased, including supplies (consumables), equipment, and contract-manufactured reagents, such as oligonucleotides or synthetic genes. In many cases, (meta)data on items in your inventory may be just as important as experimentally derived data, and as such, should be managed according to the Findability, Accessibility, Interoperability, and Reuse (FAIR) principles for (meta)data management (https://www.go-fair.org/fair-principles/) [24]. Assembling an inventory of supplies and equipment with their associated locations can be handled in a few weeks by junior personnel without major interruption of laboratory operations, although establishing a thorough inventory may be more difficult and time-consuming for smaller labs with fewer members. Nevertheless, managing your lab inventory provides an immediate return on investment by positively impacting laboratory operations in several ways. People can quickly find the supplies and equipment they need to work, supplies are ordered with appropriate advance notice to minimize work stoppage, and data variation is reduced due to standardized supplies and the ability to track lot numbers easily [17,25,26] (Fig 1).

Many labs still use Excel to keep track of inventory despite the existence of several more sophisticated databases and LIMS (e.g., Benchling, Quartzy, GenoFAB, LabWare, LabVantage, TeselaGen) [25]. These can facilitate real-time inventory tracking unlike a static document, increasing the Findability and Accessibility of inventory data. While some systems are specialized for certain types of inventories (e.g., animal colonies or frozen reagents), others are capable of tracking any type of reagent or item imaginable [25]. When considering what items to keep track of, there are 3 main considerations: expiration, maintenance, and ease of access.

Most labs manage their supplies through periodic cleanups of the lab, during which they sort through freezers, chemical cabinets, and other storage areas; review their contents; and dispose of supplies that are past their expiration date or are no longer useful. By actively tracking expiration dates and reagent use in a LIMS, you can decrease the frequency of such cleanups since the LIMS will alert users when expiration dates are approaching or when supplies are running low. This can prevent costly items from being wasted because they are expired or forgotten, and furthermore, the cost of products can be tracked and used to inform which experiments are performed.
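As a small illustration of the expiration and stock tracking just described, the R sketch below flags inventory items that are expired, close to expiry, or running low. The data frame, its column names, and the 30-day warning window are hypothetical placeholders standing in for whatever fields your LIMS or spreadsheet exposes; a real LIMS would typically raise these alerts for you.

```r
# Sketch of simple expiration and low-stock alerts on a lab inventory table.
# The 'inventory' data frame and its columns are hypothetical placeholders.
inventory <- data.frame(
  item       = c("Taq polymerase", "DMEM medium", "Collagenase I"),
  lot        = c("LOT-0181", "LOT-2240", "LOT-0907"),
  quantity   = c(12, 1, 4),
  reorder_at = c(5, 2, 2),
  expires_on = Sys.Date() + c(-10, 20, 400)   # one expired, one expiring soon, one fine
)

today <- Sys.Date()
inventory$expired      <- inventory$expires_on < today
inventory$expires_soon <- !inventory$expired &
                          inventory$expires_on <= today + 30   # 30-day warning window
inventory$low_stock    <- inventory$quantity <= inventory$reorder_at

# Items that need attention (ordering, disposal, or a note to the lab manager)
subset(inventory, expired | expires_soon | low_stock)
```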
LIMS can also support the use and service of key laboratory equipment. User manuals, service dates, warranties, and other identifying information can be attached directly to the equipment record, which allows for timely service and maintenance of the equipment. Adding equipment to the inventory can also prevent accidental losses in shared spaces where it is easy for people to borrow equipment without returning it. The label attached to the equipment (Rule 5) acts as an indication of ownership that limits the risk of ownership confusion when almost identical pieces of equipment are owned by neighboring laboratories. As the laboratory inventory should focus on larger, more expensive equipment and supplies, inexpensive and easily obtained equipment (i.e., office supplies) may not need to be inventoried. An additional benefit of inventory management in a LIMS is the ability to create a record connecting specific equipment and supplies to specific people and projects, which can be used to detect potential sources of technical bias and variability (Rules 4 and 5).

Rule 3: Focus on your current projects first

After establishing an inventory of supplies and equipment, it is natural to consider using a similar approach with the samples that have accumulated over the years in freezers or other storage locations. This can be overwhelming because the number of samples will be orders of magnitude larger than the number of supplies. In addition, documenting them is likely to require more effort than simply retrieving product documentation from a vendor's catalog. Allocating limited resources to making an inventory of samples generated by past projects may not benefit current and future projects. A more practical approach is to prioritize tracking samples generated by ongoing projects and document samples generated by past projects on an as-needed basis.

Inventory your samples before you generate them

It is a common mistake to create sample records well after they were produced in the lab. The risks of this retroactive approach to recordkeeping include information loss, as well as selective recordkeeping, in which only some samples are deemed important enough to document while most temporary samples are not, even though they may provide critical information. A more proactive approach avoids these pitfalls. When somebody walks into a lab to start an experiment, the samples that will be generated by this experiment should be known. It is possible to create the computer records corresponding to these samples before initiating the laboratory processes that generate the physical samples. The creation of a sample record can therefore be seen as part of the experiment planning process (Fig 1). This makes it possible to preemptively print labels that will be used to track samples used at different stages of the process (Rule 5).

It may also be useful to assign statuses to samples as they progress through different stages of their life cycle, such as "to do," "in progress," "completed," or "canceled," to differentiate active work in progress from the backlog and samples generated by previous experiments. As the experimental process moves forward, data can be continually appended to the sample computer record. For example, the field to capture the concentration of a solution would be filled after the solution has been prepared. Thus, the success, or failure, of experiments can be easily documented and used to inform the next round of experiments.
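To make the idea of creating sample records before the bench work starts more concrete, here is a minimal sketch in Python. The record fields, status values, and ID format are illustrative assumptions, not a prescription for any specific system.

```python
from dataclasses import dataclass, field
from typing import Optional
import itertools

_counter = itertools.count(1)

@dataclass
class SampleRecord:
    sample_id: str
    description: str
    status: str = "to do"                        # to do / in progress / completed / canceled
    concentration_ng_ul: Optional[float] = None  # filled in once the solution exists
    notes: list = field(default_factory=list)

def plan_samples(descriptions):
    """Create records (and IDs ready for label printing) before the bench work starts."""
    return [SampleRecord(f"SAMP-{next(_counter):06d}", d) for d in descriptions]

# Plan tomorrow's PCR products before touching a pipette.
planned = plan_samples(["PCR product, gene X", "PCR product, gene Y"])

# As the experiment progresses, append data to the existing record.
planned[0].status = "in progress"
planned[0].concentration_ng_ul = 42.5
planned[0].notes.append("band at expected size on gel")
planned[0].status = "completed"

for record in planned:
    print(record)
```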
Develop sample retention policies

It is always unpleasant to have to throw away poorly documented samples. The best strategy to avoid this outcome is to develop policies to discard only samples that will not be used in the future, a process rendered more objective and straightforward with adequate documentation. Properly structured workflows (Rule 8) should define criteria for which samples should be kept and for how long. All lab members should be trained in these policies to ensure consistency, and policies should be revisited as new research operating procedures are initiated.

It can be tempting to keep every tube or plate that still contains something as a backup. This conservative strategy generates clutter, confusion, and reproducibility issues, especially in the absence of documentation. While it makes sense to keep some intermediates during the execution of a complex experimental workflow, the successful completion of the experiment should trigger the elimination of intermediates that have lost their purpose, have limited shelf life, and/or are not reusable. During this intermediate step, samples that are deemed as critical backup should be stored in a separate location from the working sample to minimize the likelihood of loss of both samples in case of electrical failure, fire, etc. Using clear labels (Rules 4 and 5) and storing intermediate samples in dedicated storage locations can help with the enforcement of sample disposal policies.

Rule 4: Use computer-generated sample identification numbers

Generating sample names is probably not the best use of scientists' creativity. Many labs still rely on manually generated sample names that may look something like "JP PCR 8/23 4." Manually generated sample names are time-consuming to generate, difficult to interpret, and often contain insufficient information. Therefore, they should not be the primary identifier used to track samples. Instead, computer-generated sample identification numbers (Sample ID) should be utilized as the primary ID as they are able to overcome these limitations. Rather than describing the sample, a computer-generated sample ID provides a link between a physical sample and a database entry that contains more information associated with the sample. The Sample ID is the only piece of information that needs to be printed on the sample label (Rule 5) because it allows researchers to retrieve all the sample information from a database. A sample tracking system should rely on both computer-readable and human-readable Sample IDs.

Computer-readable IDs

Since the primary purpose of a sample ID is to provide a link between a physical sample and the computer record that describes the sample, it saves time to rely on Sample IDs that can be scanned by a reader or even a smartphone [27,28] (Fig 2). Barcodes are special fonts to print data in a format that can be easily read by an optical sensor [29]. There are also newer alternatives, such as quick response (QR) codes, data matrices, or radio-frequency identification (RFID), to tag samples [30,31]. QR codes and data matrices are 2D barcodes that are cheaper to generate than RFID tags and store more data than traditional barcodes [27]. Nevertheless, these technologies encode a key that points to a database record.
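The sketch below illustrates the core idea of Rule 4 under simple assumptions: a computer-generated ID whose only hard requirement is uniqueness, minted when the record is created and used as the key that links the physical sample to its database entry. SQLite and the column names are stand-ins for whatever store a lab actually uses.

```python
import sqlite3
import uuid

# A stand-in for the LIMS database; any relational or NoSQL store would do.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE samples (
    sample_id TEXT PRIMARY KEY,   -- computer-readable ID encoded in the barcode
    human_id  TEXT,               -- short, prefixed, human-readable ID
    parent_id TEXT,               -- lineage: aliquots point back to their stock
    contents  TEXT)""")

def new_sample(contents, human_id, parent_id=None):
    """Mint a unique ID and create the matching database record."""
    sample_id = uuid.uuid4().hex      # uniqueness is the only hard requirement
    conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                 (sample_id, human_id, parent_id, contents))
    return sample_id

stock = new_sample("1 M Tris-HCl stock", "#CHEM1234")
aliquot = new_sample("1 M Tris-HCl aliquot", "#CHEM1235", parent_id=stock)

# Scanning a barcode returns sample_id, which retrieves the full record.
print(conn.execute("SELECT * FROM samples WHERE sample_id = ?", (aliquot,)).fetchone())
```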
Uniqueness is the most important property of the data encoded in barcodes, and the use of unique and persistent identifiers is a critical component of the Findability of your (meta)data [24]. Several vendors now offer products with 2D barcodes printed on the side or bottom of the tube. It is common for such products, as well as lab-made reagents, to be split across multiple tubes or moved from one tube to another. In these cases, each of these "new" samples should have unique barcodes. A barcoding system can therefore facilitate the accurate identification of "parent" samples (e.g., a stock solution with ID X) and the unique "child" samples derived from them (e.g., aliquots of the stock solution with IDs Y and Z).

Human-readable IDs

While computer-readable IDs should be the main ID used when tracking a sample or supply, it is sometimes necessary for laboratory personnel to have a secondary sample ID they can read without the use of any equipment or while doing manual operations (i.e., handling samples). To make an identifier readable by humans, it is best to keep the ID short and use its structure to provide contextual information. For example, the use of a prefix may help interpret the ID: #CHEM1234 would naturally be interpreted as a chemical and #MCUL2147 as a mammalian culture (Fig 2). Since these identifiers do not need to map to a unique database entry, human-readable IDs do not have the same uniqueness requirements as computer-readable IDs. For example, it may be acceptable to allow 2 labs using the same software environment to use the same human-readable ID, because this ID will only be seen in the context of a single lab. The software system should maintain the integrity of the relationships between the human-readable ID and the computer-readable ID by preventing users from editing these identifiers.

Rule 5: Label everything

Print labels to identify supplies, equipment, samples, storage locations, and any other physical objects used in your lab. Many labs still rely extensively on handwritten markings that create numerous problems [17]. A limited amount of information can be written on small sample containers, and like manually generated sample names, handwritten labels can be difficult to read or interpret. Some labels are self-contained. For example, shipping labels include all the information necessary to deliver a package. However, in a laboratory environment, a sample label must not only identify a physical sample but also establish a connection to a record describing the sample and the data associated with it (Fig 2).

Content of a label

Only 2 pieces of information are necessary on a label: a computer-readable Sample ID printed as a barcode and a human-readable Sample ID to make it easier for the researcher to work with the sample. If there is enough space to print more information on the label, your research needs should inform your label design. Ensure you have sufficient space to meet regulatory labeling requirements (e.g., biosafety requirements, hazards) and, if desired, information such as the sample type, sample creator, date (e.g., of generation or expiration), or information about related samples (e.g., parent/child samples).

Label printing technology

Putting in place a labeling solution requires the integration of several elements, but once configured, proper use of label printing technologies makes it much faster and easier to print labels than to label tubes manually.
There are many types of label printers on the market today and most are compatible with the Zebra Programming Language (ZPL) standard [32]. Labeling laboratory samples can be challenging due to harsh environmental conditions: exposure to liquid nitrogen or other chemicals, as well as ultra-low or high temperatures, will require specialized labels. For labeling plastic items, thermal transfer will print the most durable labels, especially if used with resin ribbon instead of wax, while inkjet printers can print durable labels for use on paper [33][34][35]. Furthermore, laboratory samples can be generated in a broad range of sizes, so labels should be adapted to the size of the object they are attached to. A high-resolution printer (300 dpi or greater) will make it possible to print small labels that will be easy to read by humans and scanners. Finally, select permanent or removable labels based on the application. Reusable items should be labeled with removable labels, whereas single-use containers are best labeled with permanent labels.

Label printing software applications can take data from a database or a spreadsheet and map different columns to the fields of label templates, helping to standardize your workflows (Rule 8). They also support different formats and barcode standards. Of course, the label printing software needs to be compatible with the label printer. When selecting a barcode scanner, consider whether it supports the barcode standards that will be used in your label, as well as the size and shape of the barcodes it can scan. Inexpensive barcode scanners will have difficulty reading small barcodes printed on curved tubes with limited background, whereas professional scanners with high-performance imagers will effectively scan more challenging labels. When used, barcode scanners transmit a unique series of characters to the computer. How these characters are then used depends on the software application in which the barcode is read. Some applications will simply capture the content of the barcode. Other applications will process barcoded data in real time to retrieve the content of the corresponding records.

Rule 6: Manage your data proactively

Many funding agencies now require investigators to include a data management and sharing plan with their research proposals [36,37], and journals have data-sharing policies that authors need to uphold [38]. However, the way many authors share their data indicates a poor understanding of data management [39,40]. Data should not be managed only when publishing the results of a project; they should be managed before data collection starts [41]. Properly managed data will guide project execution by facilitating analysis as data are collected (Fig 1). Projects that do not organize their data will face difficulties during analysis, or worse, a loss of critical information that will negatively impact progress.
Use databases to organize your data

It can be tempting to only track data files through notebook entries or dump them in a shared drive (more in Rule 9). That simple data management strategy makes it very difficult to query data that may be spread across multiple files or runs, especially because a lot of contextual information must be captured in file names and directory structures using conventions that are difficult to enforce. Today, most data are produced by computer-controlled instruments that export tabular data (i.e., rows and columns) that can easily be imported into relational databases. Data stored in relational databases (e.g., MySQL) are typically explored using structured query language (SQL) and can be easily analyzed using a variety of statistical methods (Table 1). There are also no-code and low-code options, such as the Open Science Framework (https://osf.io/) [42], AirTable, and ClickUp, which can also be used to track lab processes, develop standardized workflows, manage teams, etc.

In the age of big data applications enabled by cloud computing infrastructures, there are more ways than ever to organize data. Today, NoSQL (not only SQL) databases [43][44][45], data lakes [46][47][48], and data warehouses [49,50] provide additional avenues to manage complex sets of data that may be difficult to manage in relational databases (Table 1). All these data management frameworks make it possible to query and analyze data, depending on the size, type, and structure of your data as well as your analysis goals. NoSQL databases can be used to store and query data that are unstructured or otherwise not compatible with relational databases. Different NoSQL databases implement different data models to choose from depending on your needs (Table 1). Data lakes are primarily used for storing large-scale data with any structure. It is easy to input data into a data lake, but metadata management is critical for organizing, accessing, and interrogating the data. Data warehouses are best suited for storing and analyzing large-scale structured data. They are often SQL-like and are sometimes optimized for specific analytical workflows. These technologies are constantly evolving and the overlap between them is growing, as captured in the idea of "lakehouses" such as Databricks and Snowflake Data Cloud (https://www.snowflake.com/en/) (Table 1).

When choosing a data management system, labs must consider the trade-off between the cost of the service and the accessibility of the data (i.e., storage in a data lake may be cheaper than in a data warehouse, but retrieving/accessing the data may be more time-consuming or costly) [51]. Many companies offer application programming interfaces (API) to connect their instruments and/or software to databases. In addition, new domain-specific databases continue to be developed [52]. If necessary, it is also possible to develop your own databases for particular instruments or file types [53]. Nevertheless, when uploading your data to a database, it is recommended to import them as interoperable, nonproprietary file types (e.g., .csv instead of .xls for tabular data; .gb (GenBank flat file, https://www.ncbi.nlm.nih.gov/genbank/) instead of .clc (Qiagen CLC Sequence Viewer format [54]) for gene annotation data; see Rule 4 of [51] for more), so that the data can be accessed if a software package is unavailable for any reason and to facilitate data sharing using tools such as git (Rule 10) [14,24].
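As a small, self-contained illustration of moving instrument exports out of loose files and into a queryable database, the sketch below loads a hypothetical CSV export into SQLite and answers a question that would otherwise require opening every file by hand. The column names and values are made up; the same pattern applies to any relational store.

```python
import csv
import io
import sqlite3

# Stand-in for a plate-reader export; in practice this would be a .csv file on disk.
export = io.StringIO(
    "sample_id,od600,timepoint_h\n"
    "SAMP-000001,0.12,0\n"
    "SAMP-000001,0.84,6\n"
    "SAMP-000002,0.10,0\n"
)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE od_readings (sample_id TEXT, od600 REAL, timepoint_h REAL)")

rows = [(r["sample_id"], float(r["od600"]), float(r["timepoint_h"]))
        for r in csv.DictReader(export)]
conn.executemany("INSERT INTO od_readings VALUES (?, ?, ?)", rows)

# Once the data live in a database, questions that span many files become one query.
for sample_id, peak_od in conn.execute(
        "SELECT sample_id, MAX(od600) FROM od_readings GROUP BY sample_id"):
    print(sample_id, peak_od)
```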
Link data to protocols

One of the benefits of data organization is the possibility of capturing critical metadata describing how the data were produced. Many labs have spent years refining protocols to be used in different experiments. Many of these protocols have minor variations that can significantly alter the outcome of an experiment. If not properly organized, this can cause major reproducibility issues and can be another uncontrolled source of technical variation. By linking protocol versions to the associated data that they produced (ideally all the samples generated throughout the experiment), it is possible to use this metadata to inform data reproducibility and analysis efforts.

Capture context in notebook entries

Organizing data in databases and capturing essential metadata describing the data production process can greatly simplify the process of documenting research projects in laboratory notebooks [55]. Instead of needing to include copies of the protocols and the raw data produced by the experiment, the notebook entry can focus on the context, purpose, and results of the experiment. In the case of electronic lab notebooks (ELNs; e.g., SciNote, LabArchives, and eLabJournal), entries can benefit from providing links to previous notebook entries, the experimental and analytical protocols used, and the datasets produced by the workflows. ELNs also bring additional benefits like portability, standardized templates, and improved reproducibility. Finally, notebook entries should include the interpretation of the data as well as a conclusion pointing to the next experiment. The presence of this rich metadata and detailed provenance is critical to ensuring the FAIR principles are being met and your experiments are reproducible [24].

Rule 7: Separate parameters and variables

Not all the data associated with an experiment are the same. Some data are controlled by the operator (i.e., parameters), whereas other data are observed and measured (i.e., variables). It is necessary to establish a clear distinction between set parameters and observed variables to improve reproducibility and analysis. When parameters are not clearly identified, lab personnel may be tempted to change parameter values every time they perform experiments, which will increase the variability of observations. If, instead, parameter values are clearly identified and defined, then the variance of the observations produced by this set of parameters should be smaller than the variance of the observations produced using different parameter values.

Separating and recording the parameters and variables associated with an experiment makes it possible to build statistical models that compare the observed variables associated with different parameter values [41,56]. It also enables researchers to identify and account for both the underlying biological factors of interest (e.g., strain, treatment) and the technical and random (noise) sources of variation (e.g., batch effects) in an experiment [56].

Utilizing metadata files is a convenient way of reducing variability caused by parameter value changes. A metadata file should include all the parameters needed to perform the same experiment with the same equipment. In an experimental workflow, pairing a metadata file with the quantified dataset is fundamental to reproducing the same experiment later [51,55,57]. Additionally, metadata files allow the user to assess whether multiple experiments were performed using the same parameters.
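One simple way to keep parameters and variables separate is to write the operator-controlled settings to a metadata file that travels with the measured values. The sketch below assumes JSON files and hypothetical field names; any structured format the lab already uses would work equally well.

```python
import json

# Parameters: everything the operator sets, captured once per run in a metadata file.
parameters = {
    "run_id": "RUN-2023-08-14-01",
    "protocol_version": "PCR_v3.2",
    "instrument": "thermocycler_2",
    "annealing_temp_c": 58,
    "cycles": 30,
    "operator": "JP",
}

# Variables: everything observed or measured, stored alongside the metadata.
observations = [
    {"sample_id": "SAMP-000001", "yield_ng": 412.0},
    {"sample_id": "SAMP-000002", "yield_ng": 105.5},
]

with open("RUN-2023-08-14-01.meta.json", "w") as fh:
    json.dump(parameters, fh, indent=2)
with open("RUN-2023-08-14-01.results.json", "w") as fh:
    json.dump(observations, fh, indent=2)

# Comparing two runs then reduces to comparing their metadata files,
# e.g., did both use the same annealing temperature?
with open("RUN-2023-08-14-01.meta.json") as fh:
    print(json.load(fh)["annealing_temp_c"])
```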
Rule 8: Track your parameters from beginning to end

Experimental parameters have a direct influence on observations. However, some factors may have indirect effects on observations or affect observations further downstream in a pipeline. For example, the parameters of a DNA purification process may indirectly influence the quality of sequencing data derived from the extracted DNA. To uncover such indirect effects, it is necessary to capture the sequence of operations in workflows. For the above example, this would include the DNA extraction, preparation of the sequencing library, and the sequencing run itself. When dealing with such workflows, it is not possible to use a single Sample ID as the key connecting different datasets as in Rule 4. The workflow involves multiple samples (i.e., the biological sample or tissue, the extracted DNA, the sequencing library) that each have their own identifier. Comprehensive inventory and data management systems will allow you to track the sample lineage and flows of data produced at different stages of an experimental process.

Recording experimental parameters and workflows is especially critical when performing new experiments, since they are likely to change over time. As they are finalized, this information can be used to develop both standardized templates for documenting your workflow, as well as metrics for defining the success of each experiment, which can help you to optimize your experimental design and data collection efforts (Fig 1).

Document your data processing pipeline

After the experimental data are collected, it is important to document the different steps used to process and analyze the data, such as whether normalization was applied to the data or whether extreme values were excluded from the analyses. The use of ELNs and LIMS can facilitate standardized documentation: creating templates for experimental and analysis protocols can ensure that all the necessary information is collected, thereby improving reproducibility and publication efforts [55,58].

Similarly, thorough source-code documentation is necessary to disseminate your data and ensure that other groups can reproduce your analyses. There are many resources on good coding and software engineering practices [14][15][16][59], so we only touch on a few important points. Developing a "computational narrative" by writing comments alongside your code or using interfaces that allow for markdown (e.g., Jupyter notebooks, R Markdown) can make code more understandable [60][61][62]. Additionally, using syntax conventions and giving meaningful names to your code variables increases readability (i.e., use average_mass = 10 instead of am = 10). Furthermore, documenting the libraries or software used and their versions is necessary to achieve reproducibility. Finally, implementing a version control system, such as git, protects the provenance of your work and enables collaboration [63].
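The short sketch below illustrates the documentation habits mentioned above under simple assumptions: meaningful variable names, an explicit record of each processing decision (here, a hypothetical outlier cutoff), and capture of the software version used, so that the pipeline that produced a number can be reconstructed later.

```python
import json
import platform
import statistics

raw_yields_ng = [412.0, 105.5, 398.2, 9999.0]   # hypothetical measurements

# Step 1: exclude extreme values, and document the rule rather than just the result.
OUTLIER_CUTOFF_NG = 1000.0
filtered_yields_ng = [y for y in raw_yields_ng if y <= OUTLIER_CUTOFF_NG]

# Step 2: summarize, using meaningful names (average_yield_ng, not am).
average_yield_ng = statistics.mean(filtered_yields_ng)

# Step 3: record how the numbers were produced, alongside the numbers themselves.
analysis_record = {
    "input_count": len(raw_yields_ng),
    "outlier_cutoff_ng": OUTLIER_CUTOFF_NG,
    "excluded_count": len(raw_yields_ng) - len(filtered_yields_ng),
    "average_yield_ng": average_yield_ng,
    "python_version": platform.python_version(),
    # versions of any third-party libraries would be recorded the same way,
    # e.g., via importlib.metadata.version("numpy")
}
print(json.dumps(analysis_record, indent=2))
```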
Data silos not only stymie research efforts but also raise significant security issues when the silo is the sole storage location. Keeping your data management plan up-to-date with your current needs and utilizing the right databases for your needs can prevent this issue (Rule 6). Regardless, it is crucial to back up your data in multiple places for when a file is corrupted, a service is unavailable, etc. Optimally, your data should always be saved in 3 different locations: 2 on-site and 1 in the cloud [51]. Of course, precautions should always be taken to ensure the privacy and security of your data online and in the cloud [65,66].

Never delete data

As projects develop and data accumulates, it may be tempting to delete data that no longer seems relevant. Data may also be lost as computers are replaced, research personnel leave, and storage limits are reached. Poorly managed data can be easily lost simply because it is unorganized and difficult to query. However, while data collection remains expensive, data storage continues to get cheaper, so there is little excuse for losing or deleting data today. The exception may be intermediary data that is generated by reproducible data processing pipelines, which can be easily regenerated if and when necessary. Most data files can also be compressed to combat limitations on storage capacity.

Properly organized data is a treasure trove of information waiting to be discovered. By using computer-generated sample IDs (Rule 4) and data lakes/warehouses (Rule 6) to link data collected on different instruments, it is possible to extract and synthesize more information than originally intended in the project design. Data produced by different projects using common workflows (Rule 8) can be analyzed to improve workflow performance. Data from failed experiments can be used to troubleshoot a problem affecting multiple projects.

Rule 10: Place your lab in context

Once you have developed a common culture (Rule 1), inventoried your laboratory (Rules 2 and 3), labeled your samples and equipment with computer-generated IDs (Rules 4 and 5), standardized your parameters and workflows (Rules 7 and 8), and backed up your data in several databases (Rules 6 and 9), what comes next?

Track processes occurring outside the lab

Laboratory operations and the data they produce are increasingly dependent on operations that take place outside of the lab. For example, the results of a PCR assay will be affected by the primer design algorithm and the values of its parameters. They will also be affected by the quality of the primers manufactured by a specialized service provider. Even though the primer design and primer synthesis are not taking place in the lab, they are an integral part of the process of generating PCR data. They should therefore be captured in data flows (Rule 8). Furthermore, the software and computational steps used to design experiments and analyze the data they produce must also be properly recorded, to identify as many factors that may affect the data produced in the lab as possible.
Increase the accessibility of your work

There are several ways to place your lab in the greater scientific context and increase reproducibility. As discussed, using standardized, non-proprietary file types can increase ease of access within a lab and across groups [14,51]. You may also choose to make your data and source code public in an online repository to comply with journal requirements, increase transparency, or allow access to your data by other groups [67]. In addition, data exchange standards, such as the Synthetic Biology Open Language [68,69], increase the accessibility and reproducibility of your work.

Practice makes perfect

Whereas traditional data management methods can restrict your analyses to limited subsets of data, centralized information management systems (encompassing relational and NoSQL databases, metadata, sample tracking, etc.) facilitate the analysis of previously disparate datasets. Given the increasing availability and decreasing cost of information management systems, it is now possible for labs to produce, document, and track a seemingly endless amount of samples and data, and use these to inform their research directions in previously impossible ways. When establishing your LIMS, or incorporating new experiments, it is better to capture more data than less. As you standardize your workflows (Rule 8), you should be able to establish clear metrics defining the success of an experiment and to scale the amount of the data you collect as needed.

While there are plenty of existing ELN and LIMS services to choose from (see Rule 6 and Rule 2, respectively), none are a turnkey solution (S1 Supporting information). All data management systems require configuration and optimization for an individual lab. Each service has its own benefits and limitations your group must weigh. Coupled with the need to store your data with multiple backup options, thoughtful management practices are necessary to make any of these technologies work for your lab. The 10 rules discussed here should provide both a starting place and continued resource in the development of your lab information management system. Remember that developing a LIMS is not a one-time event; all lab members must contribute to the maintenance of the LIMS and document their supplies, samples, and experiments in a timely manner. Although it might be an overwhelming process to begin with, careful data management will quickly benefit the data, users, and lab through saved time, standardized practices, and more powerful insights [17,18].
Conclusions

Imparting a strong organizational structure for your lab information can ultimately save you both time and money if properly maintained. We present these 10 rules to help you build a strong foundation in managing your lab information so that you may avoid the costly and frustrating mistakes we have made over the years. By implementing these 10 rules, you should see some immediate benefits of your newfound structure, perhaps in the form of extra fridge space or fewer delays waiting for a reagent you did not realize was exhausted. In time, you will gain deep insights into your workflows and more easily analyze and report your data. The goal of these rules is also to spur conversation about lab management systems both between and within labs as there is no one-size-fits-all solution for lab management. While these rules provide a great starting point, the topic of how to manage lab information is something that must be a constant dialogue. The lab needs to discuss what is working and what is not working to assess and adjust the system to meet the needs of the lab. This dialogue must also be extended to all new members of the lab as many of these organizational steps may not be intuitive. It is critical to train new members extensively and to ensure that they are integrated into the lab's common culture or else you risk falling back into bad practices. If properly trained, lab members will propagate and benefit from the organizational structure of the lab.

Fig 1. Information management enhances the experimental and modeling cycle. Our 10 simple rules for managing laboratory information (green) augment the cycle of hypothesis formulation, design, data analysis, modeling, and decision-making (gray). The experimental design phase is improved by carefully tracking your inventory, samples, parameters, and variables. Proactive data management and the thoughtful use of databases facilitate statistical and exploratory analyses as well as the development of conclusions that inform the next round of experiments. Frequent reevaluation of project, team, and workflow success is a critical component of refining experimental processes, developing a common culture, and positioning your research group in the greater scientific context.

Fig 2. Sample label. The first line includes a unique computer-readable barcode as well as a human-readable computer-generated sample identification number. The second and third lines include a description of the sample content, the date, and the identity of the inoculum. https://doi.org/10.1371/journal.pcbi.1011652.g002
2023-12-09T05:09:47.539Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "09a47a869ec2bff17903c3e9de09dc6f65012c37", "oa_license": "CC0", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "09a47a869ec2bff17903c3e9de09dc6f65012c37", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
196506174
pes2o/s2orc
v3-fos-license
Dental concerns of children with lip cleft and palate-a review

Introduction

These are the most severe of the congenital anomalies which affect the mouth and related structures. They are among the most common congenital defects and occur about once in 1000 births. 1 There is a family history in only about one third of cases. Clefts of the palate only are more common in girls, while clefts of the lip, with or without palatal involvement, are more common in boys. It is interesting to note that the left side is affected more often than the right. 2 The exact cause of clefting is unknown in most cases. For most cleft conditions, no single factor can be identified as the cause. However, it is important to distinguish between isolated clefts (in which the patient has no other related health problems) and clefts associated with other birth disorders or syndromes. 3,4 In the absence of a family history, the occurrence of a congenital abnormality may be due to the action of mutation or some chance occurrence during the pregnancy. The lips and palate develop during weeks 5 to 8 of intrauterine life, and any factor disrupting their formation must exert its influence during this relatively short period. In this respect it is difficult to offer positive views on factors related to maternal health during the pregnancy. It is accepted that German measles and X-ray examination during early pregnancy may produce congenital abnormalities and are to be avoided. The teratogenic action of certain drugs, particularly thalidomide, is well known. In fact, the expectant mother is well advised to avoid all unnecessary drug taking during the early stages of pregnancy.
[5][6][7] (Table 1-3) A useful classification of clefts is that of Kernahan and Stark, which is based on embryological considerations of the tissues involved; there are essentially three main groups. 7,8

Classification 7,8 (Tables 4-6)

Group I comprises those clefts which involve the lip, alveolus and anterior part of the palate as far as the palatal foramen; these are clefts of the primary palate.

Group II represents those clefts of the soft palate which may extend forward to involve the hard palate as far as the palatal foramen; these are clefts of the secondary palate.

Group III comprises clefts of both primary and secondary palates; these may be unilateral or bilateral.

Group IV comprises rare facial clefts.

Children with a cleft of the palate are prone to upper respiratory tract infections, and as a result there is a high incidence of middle ear problems and resultant defects in hearing. Clefts of the lip and palate give rise to problems related to the actual structures involved in the cleft. In general, clefts of the lip give rise to aesthetic problems, clefts of the alveolus give rise to dental problems, and clefts of the palate give rise to speech problems. These structures are of course closely interrelated, and thus the problems are not always isolated; for example, the features related to a cleft of the alveolus may also contribute to a speech defect. A cleft of the alveolus rarely occurs in the absence of a cleft of the lip or palate. All clefts of the lip and palate present a surgical challenge. 10 The vast majority, well over 90%, of cleft lip and palate children develop normal speech, with a minority requiring the help of a speech therapist; many of them appear to present speech problems similar to those of other normal children, and it would seem that as such they are not related to the actual cleft. On occasion, special speech problems arise which benefit from close liaison between the speech therapist and the dental surgeon. 11

A syndrome is a set of physical, developmental, and sometimes behavioral traits that occur together. Clefts have been identified as one feature in over 300 syndromes, most of which are quite rare. Syndromes account for roughly 15% of the total number of cases of cleft lip and/or palate. 12

Genetic factors (monogenic)

Cleft lip associated with cleft palate is more common in males; cleft palate alone is more common in females. This shows the vague influence of heredity over the occurrence of clefts. In about one-fifth to two-fifths of these patients, a positive family history can be elicited. Van der Woude syndrome, in which lip pits are associated with cleft lip and palate, gives rise to a Mendelian pattern of autosomal recessive inheritance. In rare clefts the hereditary factor seems to be very unusual.

Gene-environment (polygenic) interactions

Many cleft lip and palate cases show a slight familial tendency but do not give rise to Mendelian patterns of inheritance. In these cases, the interaction between multiple genes with small defects and environmental factors results in the defect. There is increasing evidence that most clefts in human beings appear due to multifactorial causes, i.e., due to the combined effect of genetic influence and various environmental factors. 10,13

Behavioral

Children may have hypernasal speech which is difficult to understand as a result of velopharyngeal insufficiency. Many young children with clefts will exhibit shy, nervous, or uncooperative behavior. This may have to do with previous hospitalization or frequent hospital visits. Bone support for these teeth is generally poor.
Teeth that are present may be malformed and prone to caries. Parents appreciate education about the teeth present or missing around a cleft. Simple explanations about the variability of teeth at the cleft site may allay concerns. Panoramic and/or occlusal radiographs are indicated to monitor development. The majority of children with a cleft palate will require orthodontics. Orthodontic treatment may be required in the primary, mixed, and permanent dentition. Facilitate contact with an orthodontic provider if the child has not been evaluated. 14

Nasoalveolar moulding (NAM)

The management of cleft patients has evolved dramatically in recent years. Outcomes are improving because of better surgical techniques, timing, and the incorporation of procedures like presurgical orthopedics. Presurgical infant orthopedics was first introduced by McNeil 1 in 1950. Since then, techniques have been changing and so have the results. Active molding and repositioning of the nasal cartilages take advantage of the plasticity of cartilage in the newborn infant. 16,17 In the last decade, it has been shown that correction of the nasal deformity by stretching of the nasal mucosal lining, together with nonsurgical columella elongation, can be combined with molding of the alveolar process in cleft patients. 18,19

The impression was obtained with the infant fully awake, in the prone position, without anesthesia. Before impressions, the child was kept nil orally for about 2 hours. Impressions were taken on the dental chair with the child in the lap of his or her parents. The impression should be taken very carefully and is always done after ensuring the availability of an anesthesia team. First, the impression tray was checked in the mouth of the patient. After selection of a proper size tray, alginate paste was made, loaded in the tray, and inserted in the mouth. Soon after this, alginate paste was applied over the plate by hand up to the root of the nose. The child's lower jaw was pulled down, and precautions were taken to avoid impression material falling into the oral cavity. After some time (15-20 s), this negative impression of the nose, lip, and alveolus was removed in a single piece. The oral and nasal cavities were inspected for any remaining particles. After the impression, a dental stone cast was made by filling it with a paste of dental stone material and allowing it to set. The dental stone model was made for the purpose of measurements and fabrication of the appliance. These dental stone casts were labeled with the patient's name, age/sex, and date. A conventional molding plate was fabricated on the maxillary cast using clear acrylic resin, with a nasal stent wire extending from it superiorly toward the nose. The tip of the wire was covered with hard and then soft acrylic. At the active tip of the nasal stent, the acrylic was covered with a thin layer of soft denture lining material to ensure that tissue irritation does not occur when pressure is applied for nasoalveolar molding. After the nasoalveolar molding plate was ready and checked for any rough areas, the plate was handed over to the parents, and they were instructed about maintaining oral hygiene and about cleaning, insertion, and removal of the plate. Patients were called at weekly intervals to gradually change the direction of the nasal wire. At every visit, the local area was examined for any ulceration or pressure points. Measurements of different nasal parameters and the alveolus were taken on the prepared dental stone model as well as on patients directly with the help of thread and artery forceps, and were recorded. This was done for the purpose of accuracy.
Measurements on patients were matched with measurements on the dental models, and they were found to be almost similar. 20,21 It is desirable to provide continuity of the soft tissues so that the various soft tissue functional units may assume their normal functions at a time as near normal as possible. Surgery to repair the lip cleft is usually carried out at or above three months of age, when the baby has a body weight of about 5.5 kg; in the case of bilateral clefts, the second side is repaired some 3 months or more later. The anterior part of the palate is repaired at the same time as the lip; the main palatal defect is repaired between 15 and 18 months, shortly before it will be required for the development of speech. 22,23

Oral condition

When the alveolus is involved by the cleft, the occlusion is disrupted; this occurs due to the action of a group of related forces. Firstly, the break in the continuity of the basal bone allows collapse to occur; further, the general tightness of the upper lip following surgical repair tends to produce collapse of the arch, due to the firm molding action of the lip in the absence of a stable arch form. 24 There is often a deficiency of maxillary development in both the anteroposterior and the vertical direction. This increases the tendency for the development of a class III occlusion and of the buccal segment open bite which is frequently seen. 25,26 The lateral incisor on the same side as the cleft is often absent or, when present, may lie on either the mesial or the distal side of the cleft; it is often rotated and its root may be dilacerated. The central incisor may be hypoplastic and distally inclined. 27,28 Children with the cleft lip or palate deformity tend to present a poor gingival state, often a high caries rate, and a tendency to neglect the general care of their teeth. These features appear to be closely related, in that in the presence of good dental care the gingival and caries problems are minimized. 29,30 The emotional aspect should also be considered for the family of children with cleft lip and palate, which may give rise to negligent attitudes toward oral hygiene, because of the fear of manipulating the child's oral cavity or the attempt to avoid unpleasant procedures. 31

Dental treatment

Dental caries is extremely important in these children. Good dietary awareness in relation to dental caries should be encouraged from early life, commencing with discussions between the mother and the dental surgeon shortly after the birth of the child, before surgery is commenced. Tooth brushing, commencing with a soft brush as soon as the first tooth has erupted, is to be encouraged, since as the number of teeth increases so do the difficulties and the degree of disturbance to the child; hence it is obviously desirable to start early. The mother sometimes has a deep-seated concern that accidental trauma while brushing the teeth, especially with a tearful, struggling child, may result in further permanent damage to the palate. The dental surgeon should be alert to the possibility of this fear, and be ready to give reassurance that such trauma is unlikely to occur. If the domestic water supply is not fluoridated, the prescription of fluoride tablets from infancy to reduce caries susceptibility is recommended. 32,33 The presence of carious lesions leads to the need for restorative treatment, if detected in time, or tooth extraction if the extent of the lesion does not allow restoration.
The atraumatic restorative technique should be indicated for initial carious lesions without risk of pulp contamination. Whenever individuals with cleft lip and palate present dental caries with risk of pulp contamination, treatment should be performed conventionally. Individuals submitted to surgery should have an excellent oral condition, with removal of the sources of infection that may compromise the surgery. 13 Supernumerary and/or malpositioned deciduous teeth adjacent to the cleft should be maintained as long as possible, in order to preserve bone tissue that is already defective in this region. 34,35 Rubber dam isolation is recommended for dental treatment whenever possible, especially in cases of unrepaired cleft palate. The rubber dam isolates the constant water flow of the high-speed handpiece and dental caries or restorative material remnants, avoiding their penetration into the airway, which communicates with the oral cavity in these individuals. 36

The general dental practitioner can give the cleft palate patient and the orthodontist very valuable help, first by training the child to accept dental care, and then by conserving both the primary and permanent teeth at the earliest indication. This will greatly reduce the problems that this handicap brings and has great psychological value in indicating to the child and parent that the unsightly appearance of the teeth is no excuse for the neglect of oral hygiene practice that too often occurs. 12 The tight upper lip in bilateral cases may cause difficulty in conserving the anterior maxillary teeth as well as affecting the standard of cleansing in this area. If cooperation is particularly poor, conservation under general anesthesia should be considered, as these children deserve all the help they can get. Local anesthesia produces no problems, but in the event of extractions in the region of the cleft, adequate radiographs should be taken to ascertain the alignment of the root; if there is any degree of dilaceration, it is advisable to consult an oral surgeon. Extraction of premaxillary teeth in cases of bilateral cleft must be well supported because of the mobility of the bone. 37,38 The first permanent molars may be in a poor state, with an almost hopeless long-term prognosis for restoration; these teeth are especially important if orthodontic appliance therapy is required during the mixed dentition stage. If this is so, every effort should be made to retain these teeth, if only temporarily, bearing in mind that it is usually desirable to avoid very early loss; better results are obtained by removal of these teeth at 9 years of age, at a time when there is radiographic evidence of inter-radicular calcification on the mandibular second permanent molar. 39

Dental anesthesia in individuals with cleft is not different for most regions of the oral cavity, except for the cleft area. In this region, the maxilla is divided into different segments by the bone defect, with individual innervation. Even though the clinical aspect is improved after surgical repair, the alveolar separation is maintained; 38 this is important when teeth in this region must be anesthetized, because the malpositioning may complicate determination of the site of tooth implantation. Therefore, prior periapical radiographic examination is recommended to analyze the bone segment in which the tooth is implanted.
The surgical lip repair usually causes secondary scar fibrosis in the region, making the mucosa more resistant, and consequently the puncture is more painful. The initial puncture with anesthetic infiltration should be parallel to the long axis of the tooth. Because of the bone defect that separates the innervation of the two cleft segments, the adjacent region must also be anesthetized to avoid pain or discomfort during treatment, using the same puncture as the initial infiltration but directing the needle to this region. Anesthesia of the palatal region is always necessary. 37,38

Orthodontic treatment

Much can be achieved by fairly simple orthodontic procedures to improve the occlusion of these children. Prolonged complex orthodontic treatment is to be avoided. In general, it is advisable to correct the lingual occlusion of the upper permanent incisors shortly after their eruption. Expansion of the premolar area is best commenced when about half to three quarters of the palatal surface has erupted, and retention following such expansion is essential. If the lateral incisor is absent or severely rotated, a partial denture may be required. In such cases, if a central incisor of poor quality is severely distally inclined, it may be advisable to remove this tooth as well and add it to the already necessary partial denture. In bilateral cases, where the premaxillary unit is mobile, every effort should be made to retain the central incisors, owing to the complicated nature of this floating premaxilla when a denture is required to replace the central incisor teeth. The dental clasp should be carefully placed using dental floss ligatures to avoid the risk of aspiration. This also applies when dental clasps are placed on supernumerary, rotated or malpositioned teeth. Also, the orthodontist should be aware that the thin periodontal bone surrounding the teeth next to the alveolar cleft constitutes a limitation for tooth movement prior to the alveolar bone graft procedure in patients with clefts, and important aspects should be considered during orthodontic treatment before the bone graft: a) Rotated teeth adjacent to the cleft should not be corrected before the bone graft surgery, because of the risk of dehiscence and fenestrations. b) Supernumerary teeth erupted on the palatal side of clefts should be extracted at least three months before the bone graft, so that the palatal mucosa is not interrupted where it must cover the entire graft. [40][41][42] Prosthetic obturation of palatal fistulae may be necessary in some children. Referral to appropriate specialists in cases with velopharyngeal insufficiency is indicated. Clefts are often associated with middle ear problems and hearing difficulties. 43

Tooth in the cleft

A tooth is often seen lying within the cleft or in a residual fold of the soft tissues high in the palate following repair. Such a tooth sometimes causes local irritation to the tip of the tongue. There is often concern that removal of such a tooth may allow further collapse of the dental arch to occur; in the presence of a fistula there may be concern that removal of the tooth may result in extension of the fistula. Neither of these possible sequelae tends to occur so frequently, if indeed ever, as to cause any strong reluctance to remove such a tooth if it is a source of irritation. Such teeth tend to become carious and the crown is often eventually lost. 44,45

Palatal fistula

These sometimes occur, but usually present no particular problem.
Very occasionally the child complains of food, particularly milk mixtures, entering the nasal cavity. There is seldom any complaint of large particles of food entering the nose, and reassurance is often all that is required. Sometimes, in the presence of a nasal tone to the voice, the fistula is regarded as being responsible; in these cases it may be the wish of the speech therapist that a simple obturator plate is fitted in order to occlude the fistula. If such a plate is fitted, a careful assessment of speech must be made both with the plate in and again with it out, in order to determine whether it is indeed of real value. Clefts are often associated with middle ear problems and hearing difficulties. 46,47

Lateral 'S' defect

This is a characteristic speech defect in which, when the child endeavors to make the S sound, there is an escape of air laterally to the corner of the mouth instead of the normal central flow during formation of the normal S sound. Any tendency for this to occur is favored by the presence of a buccal segment open bite and is of course accentuated if there has been early loss of deciduous molars. It is seldom possible for the dental surgeon to help with regard to the lateral gap until the premolar teeth have erupted and orthodontic treatment, if any is contemplated, is completed. [48][49][50]

Presurgical dental orthopaedics

By the use of appliances in the newborn child, it is possible to greatly reduce the degree of displacement of the individual segments of the palate before surgery. Such treatment can be of great value, especially in severe bilateral cases; to be of greatest value, indeed to be possible at all, it must be commenced within the first few days following birth. The interested dental surgeon is referred to the reading list and should make inquiries regarding the arrangements in his locality. 51,52

Protocol for dental care with specialists 53,54

At birth: Referral. Feeding plate and presurgical orthopedics, which help the surgeon in repair by stimulating palatal bone growth and preventing collapse of the dental arches. Prepare study models and photographs.

3-5 months: Explain to the parents the dental care of primary teeth. Alignment of primary teeth. Expansion of the palatal arch using a simple fixed appliance such as a quadhelix, W-arch, etc. Plastic surgeon to repair the lip. ENT surgeon's first assessment.

Months: Review by the pediatric dentist. Correction of velopharyngeal incompetence by providing a palatal prosthetic speech appliance. Possible eruption abnormalities to be explained. Review by the plastic surgeon.

6-7 years: Mixed dentition period; review by the pediatric dentist. Correction of crossbite, removal of supernumerary teeth. Preventive and orthodontic intervention followed by radiographic evaluation. Orthodontic consultation if necessary.

8 years: Assessment of suitability for bone grafting. Dental bone assessment using lateral cephalogram, orthopantomogram, hand-wrist X-ray, etc. If required, the patient should be prepared for the bone graft protocol. Review by the speech pathologist, plastic surgeon and ENT specialist. Pediatric dentist review to relieve crowding and retroclination of anterior teeth.

9 years: Combined pedodontic and orthodontic management. Bone graft of the alveolar cleft at half to one-third root development of the permanent cuspid. Review by other experts if needed.

10-12 years: Orthodontic consultation. Pediatric dentist for future treatment. Monitoring of growth and dentition.

Conclusion

Cleft lip and palate children have special treatment requirements and benefit from a team approach.
Such a team, led by the plastic surgeon, should include a speech therapist and an orthodontist, and should have ready access to pediatric, ENT and dental treatment facilities. Preventive dental measures should be considered at an early age, commencing with advice to the mother shortly after the birth of the child. Extensive dental treatment may be required, but it should not be made more extensive or complex than is necessary to achieve a reasonable standard of dental perfection. The multidisciplinary approach towards this problem has led to a steady improvement in its end-results. Cleft children can now lead a happy life in society without many esthetic or functional deficiencies.
2019-03-18T14:02:41.473Z
2018-07-19T00:00:00.000
{ "year": 2018, "sha1": "da4996aed0b0545c7cec71e939d5c7a961d13f71", "oa_license": "CCBYNC", "oa_url": "https://medcraveonline.com/JPNC/JPNC-08-00333.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "11d9d886a8626da60f328ead4f7496d20ef929ea", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232141274
pes2o/s2orc
v3-fos-license
mRNA-encoded, constitutively active STINGV155M is a potent genetic adjuvant of antigen-specific CD8+ T cell response mRNA vaccines induce potent immune responses in preclinical models and clinical studies. Adjuvants are used to stimulate specific components of the immune system to increase immunogenicity of vaccines. We utilized a constitutively active mutation (V155M) of the stimulator of interferon (IFN) genes (STING), which had been described in a patient with STING-associated vasculopathy with onset in infancy (SAVI), to act as a genetic adjuvant for use with our lipid nanoparticle (LNP)-encapsulated mRNA vaccines. mRNA-encoded constitutively active STINGV155M was most effective at maximizing CD8+ T cell responses at an antigen/adjuvant mass ratio of 5:1. STINGV155M appears to enhance development of antigen-specific T cells by activating type I IFN responses via the nuclear factor κB (NF-κB) and IFN-stimulated response element (ISRE) pathways. mRNA-encoded STINGV155M increased the efficacy of mRNA vaccines encoding the E6 and E7 oncoproteins of human papillomavirus (HPV), leading to reduced HPV+ TC-1 tumor growth and prolonged survival in vaccinated mice. This proof-of-concept study demonstrated the utility of an mRNA-encoded genetic adjuvant. INTRODUCTION mRNA vaccines represent a significant advancement in vaccine technology, providing flexibility and precision in antigen design with rapid scalability. We and others have demonstrated that mRNA-based vaccines can induce potent immune responses for various infectious agents in preclinical models and in clinical studies, as well as against tumor neoantigens. [1][2][3][4][5][6][7][8][9][10][11] Importantly, since mRNA-based vaccines allow direct processing of antigens through endogenous major histocompatibility complex (MHC) class I antigen presentation pathways, this vaccine platform should be ideal for generating CD8 + T cell responses. Cytolytic CD8 + T cells are critical components in the response to intracellular pathogens and for cancer immunotherapy. 12,13 Detectable antigen-specific T cell responses have been demonstrated after mRNA vaccine administration in nonhuman primates 14 and in patients with melanoma. 11 (For recent reviews, see Pardi et al., 15 Jackson et al., 16 and Linares-Fernández et al. 17 ) Conventional vaccines, typically composed of inactivated pathogens, recombinant proteins, or protein subunits, are usually administered in the presence of adjuvant to potentiate a vaccine response. Generation of an adaptive immune response to infection requires the presence of antigen and simultaneous sensing of damage-associated molecular patterns (DAMPs) 18 by pattern-recognition receptors (PRRs). 19 Although some approved adjuvants, such as GlaxoSmithKline's AS04 and AS01 B , act by activating PRRs to trigger signaling pathways that induce transcription factors associated with an immune response, the basis of the adjuvant effect of others, such as alum, MF59, and AS03, is poorly characterized. The use of a welldefined stimulatory pathway to produce adjuvant activity offers the advantage of allowing more precise identification of biomarkers of safety and mechanism of action of adjuvanticity. In addition, despite some success of licensed adjuvants to improve humoral responses to vaccines, challenges remain to develop vaccines and vaccine adjuvants to improve cellular immunity mediated by CD4 + and CD8 + T cells, which also contribute to protective immunity. 
20 Notably, type I interferons (IFNs) have been shown to potentiate CD8 + T cell immunity. 21 Vaccines or vaccine adjuvants that induce type I IFN responses would therefore be expected to enhance CD8 + T cell development or function. Toll-like receptors (TLRs) are PRRs that recognize structurally conserved microbial molecules; most TLRs signal through the transcription factor myeloid differentiation factor 88 (MyD88) to activate IFN-a. 22 Other key pathways/factors include activator protein 1 (AP1) and nuclear kB (NF-kB), which induce production of inflammatory cytokines, and IFN response element 3 (ISRE3) and ISRE7 through IFN response factor 3 (IRF3) and IRF7, respectively, which induce production of type I IFNs, IFN-a, and IFN-b. 22 Cytosolic doublestranded DNA (dsDNA) can act as a PRR agonist to activate stimulator of IFN genes (STING), which in turn activates signaling through NF-kB and IRF3/7. 22 Mitochondrial antiviral signaling protein (MAVS) is an adaptor for PRRs that senses double-stranded RNA, which also activates type I IFNs through transcription factors IRF3, IRF7, and NF-kB. 23 Adjuvants that could induce cell-mediated immunity via PRRs could be beneficial for designing T cell vaccines. Although exogenous mRNA itself has immunostimulatory properties, 24 untimely activation of an innate immune response can interfere with protein translation, antigen expression, and processing, which can lead to a suboptimal immune response. However, the inhibition of antigen translation can be circumvented with naturally occurring base modifications, such as replacing uridine with pseudouridine. 25,26 In previous studies, we have demonstrated that administration of lipid nanoparticle (LNP)-encapsulated modified mRNA encoding influenza H10 hemagglutinin targeted antigen-presenting cells (APCs) and elicited transient priming of antigen-specific T cells in vivo in the absence of exogenous adjuvant, in mice and in nonhuman primates. 14 In this study, we aimed to improve our current vaccine design to induce selective activation of type I IFN signaling in vivo, while preserving and maximizing antigen expression during mRNA vaccine delivery. We tested the potential and feasibility of using constitutively activated signaling molecules and transcription factors as mRNA-encoded genetic adjuvants to improve CD8 + T cell responses when coadministered with mRNA-encoded antigens in a murine tumor model using E6 and E7 oncoproteins of human papillomavirus (HPV). RESULTS Constitutively active mutations of the innate immune receptor signaling pathway mediator STING provided the best adjuvant effect We used mRNA to express constitutively active forms of DAMP and PRR signaling molecules. We targeted activation of type I IFN pathways through coformulation of mRNA to encode proteins that constitutively activate the TLR, IRF3/7, MAVS, and STING signaling pathways with mRNA-encoded antigens into LNP. Several constitutively active mutations of STING have previously been described, [27][28][29][30][31] and mutated STING expressed in vitro was shown to potently stimulate the induction of type I IFN. 30,32,33 In particular, a dominant gainof-function mutation (V155M) was first described in affected individuals with familial inflammatory syndrome with lupus-like manifestations 28 and in STING-associated vasculopathy with onset in infancy (SAVI). 
30 The constitutively active form of STING (V155M) was encoded into mRNA and transfected into a STING-knockout (KO) reporter cell line, which has a stably integrated IFN-responsive secreted embryonic alkaline phosphatase (SEAP) reporter. STING (V155M), but not the wild-type counterpart, showed potent IFN-b-inducible IFN-stimulated gene (ISG)54 promoter activity ( Figure 1A). The constitutively active TAK1 construct is an engineered fusion of TGF-b-activated kinase 1 (TAK1/MAP3K7) kinase domain and the minimal domain of TAK1-binding protein 1 (TAB1) required for TAK1 activation. The TAK1-TAB1 fusion functions as a constitutively active mitogen-activated protein kinase kinase kinase (MAPKKK) and induces both AP1 and NF-kB transcription. 34 Wild-type murine TRAM/TICAM2 is an adaptor protein involved in MyD88-independent signaling by TLR4. [35][36][37] MyD88 is an adaptor protein that mediates TLR and interleukin (IL)-1 receptor signaling. A point mutation of MyD88 (L265P) has been identified as an oncogene in the activated B cell subtype of diffuse large B cell lymphoma. 38,39 Mechanistically, the L265P mutation allows nucleation of MyDDosomes in a TLR4-independent manner, resulting in constitutive activation of prosurvival NF-kB signaling. 40 These previously described constitutively active (ca) forms of TAK1 (caTAK1), TRAM (caTRAM), and inhibitor of NF-kB kinase subunit beta (caIKKb) were encoded into mRNA and the proteins were expressed. The relative activity of engineered versions of TRAM, caTAK1, and caIKKb was first tested in vitro and all were shown to induce potent NF-kB signaling ( Figure S1; Supplemental materials and methods) The selected variants were assessed for adjuvant potency when coadministered with mRNA encoding ovalbumin (OVA). To measure the effect of coformulation of genetic adjuvants on antigen-specific T cell responses, C57BL/6 (B6) mice were immunized with LNP-encapsulated mRNA expressing OVA. A construct of mRNA encoding OVA combined with a nontranslatable mRNA (NTFIX) served as a negative control. At days 21 and 50, animals receiving STING V155M had higher antigen-specific CD8 + T cell responses than did the other TLR/MyD88 molecules tested ( Figure 1B). Two key downstream mediators of STING signaling are the transcription factors IRF3 and IRF7, which transcriptionally upregulate type I IFNs and ISGs. 22 Constitutively active proteins were designed and assayed in vitro for relative potency in activation of the B16 KO STING cell line, which has a stably integrated IFN-responsive SEAP reporter ( Figure S2A). One of the most potent IRF3 mutations was a phosphomimetic point mutant of human IRF3 (S396D), 41 whereas the most potent version of IRF7 was a variant of the murine sequence with several point mutations that mimic phosphorylation of key sites as well as tandem deletion of an autoinhibitory domain (del.238-410/ S429.430.431.437.438.441D) ( Figure S2B). 42 In addition, a constitutively active form of MAVS comprising full-length MAVS with a truncation mutant of latent membrane protein 1 (LMP1) from Epstein-Barr virus (DLMP-MAVS), 43 which induces a high level of IFN-b in vitro, was also included ( Figure S1). caIRF3, caIRF7, ca-MAVS, and STING V155M were then evaluated as adjuvants when coadministered with OVA ( Figure 1C; Supplemental materials and methods). 
Although caIRF3, and to a lesser extent caIRF7, increased antigen-specific CD8 + T cell responses initially to OVA, STING V155M was more potent at both time points tested (days 21 and 50) and the constitutively active DLMP-MAVS was the second most potent variant. Both pathways are known to activate the downstream protein kinase TANK-binding kinase 1 (TBK1). As an alternative to inducing constitutively active innate immune sensor signaling pathways as genetic adjuvants, we also attempted to induce two forms of programmed cell death: necroptosis and pyroptosis. 44 Both mechanisms result in loss of cell membrane integrity and subsequent release of DAMPs such as high mobility group box 1 (HMGB1) and ATP. The pseudokinase mixed lineage kinase domain-like isoform 1 (MLKL1) contains a four-helical bundle domain that forms pores in the cell membrane to induce necroptosis. 45 In vitro, a constitutively active variant of MLKL1 (caMLKL1), consisting of the four-helical bundle pore-forming domain encoded by mRNA induced cell death, resulting in HMGB1 and ATP release ( Figure S3). We also tested the combination of caspase-1, caspase-4, and caIKKb. Providing caIKKb and caspase-4/5 mRNA to produce the corresponding proteins recapitulates key molecular signaling pathways that underlie canonical and noncanonical pyroptosis. caIKKb potentiates NF-kB signaling and transcription of pro-inflammatory cytokines while caspase-4 and caspase-5 cleave gasdermin D (GSDMD), which is a necessary step in pyroptosis. 46 Although MLKL1 or the combination of caspase-1, caspase-4, and caIKKb demonstrated some adjuvant effect at an earlier time point, none was as potent an adjuvant as STING V155M in sustaining the antigen-specific memory CD8 + T cell population ( Figure 1D). We also compared the adjuvant effects of STING with a combination of caIRF3 and caIRF7, with or without caIKKb, a regulator of NF-kB activity. While expression of only caIRF3, caIRF7 ( Figure 1C), or caIKKb alone (data not shown) demonstrated minimal adjuvant activity at late time points, the combination of these three transcription factors to some extent recapitulated the adjuvant effect on effector T cell response seen with STING V155M , but neither combination was as potent as STING V155M at generating a memory T cell response ( Figure 1E). To confirm the adjuvant activity of STING V155M for other antigens, mice were immunized with LNP-encapsulated mRNA expressing the E7 oncoprotein of HPV ( Figure 1F), or with ADR neoantigen concatemer (three peptides with mutant epitopes for Adpgk, Dpagt1, and Reps1 from the MC38 murine colorectal tumor cell line) ( Figure 1G). 47 Lastly, STING V155M potentiated the CD8 + T cell response to known human A11-restricted CTL epitopes when the histocompatibility leukocyte antigen (HLA)-A11 transgenic mice were immunized with LNP-encapsulated mRNA expressing concatemer composed of epitopes for HIV-NEF, EBV-EBNA4, and EBV-BRLF1 ( Figure 1H). mRNA-encoded STING V155M demonstrated the greatest ability to induce IFN and NF-kB activity in vitro and the best vaccine adjuvant effect in vivo Given that STING V155M was the most potent genetic adjuvant tested in previous experiments, additional STING single or multiple mutations were designed based on mutations identified. [27][28][29][30][31] Of note, the N154S and V147L mutations were also reported in patients diagnosed with SAVI. 30 All three mutations are clustered in exon 5. 
It has been suggested that these mutant residues act by localizing STING to the endoplasmic reticulum-Golgi intermediate compartment and activate downstream signaling through the TBK1-IRF3 axis. 27 Recently, additional new mutations in STING isolated from patients with SAVI have been identified. 31 Unlike the previously described mutations, these residues cluster within the cyclic guanosine monophosphate (cGMP)-binding domain (CBD) in exons 6-7. All caSTING mRNA constructs were tested in B16-Blue ISG-KO-STING cells to determine whether they could activate the I-ISG54 promoter (Figure S2C). In addition, the constructs were also assayed for their ability to induce IFN-b in B16F10 murine melanoma cells. Most variants induced significant IFN-b responses that were higher than levels induced by overexpressed human or mouse wild-type STING (Figure 2A). The activity of R375A showed the lowest activity (similar to wild-type), while V155M was consistently one of the most potent mutations in this cell-based assay. Combining single mutants into triple or quadruple mutant constructs did not meaningfully impact the extent of IFN-b activation in vitro. STING is also known to activate the NF-kB pathway, 48 which was examined using a reporter cell line engineered to express SEAP upon NF-kB activation. Most mutations induced 2-fold higher activation of NF-kB-responsive SEAP relative to human and murine wildtype versions, similar to IFN induction results in B16F10 cells ( Figure 2B). In vivo administration of LNP-encapsulated mRNA encoding STING V155M induced rapid production of IFN-a measured in serum, as well as other proinflammatory cytokines such as IL-6, monocyte chemotactic protein-1 (MCP-1), regulated on activation, normal T cell expressed and secreted (RANTES), and macrophage inflammatory protein-1b (MIP-1b) at 6 h after administration ( Figure 2C). We also compared different mRNA-encoded caSTING mutations as vaccine adjuvants. Single-point mutant variants (V155M and R284M) and two combined mutants (R284M/V147L/N154S/V155M and V147L/N154S/V155M) that showed consistently strong activity in vitro were coformulated into LNPs with the mRNA encoding OVA antigen. Antigen-specific CD8 + T cell responses were assessed by intracellular staining of IFN-g, tumor necrosis factor a (TNF-a), and IL-2 after restimulation ex vivo with the cognate peptide. Immunization with OVA combined with all caSTING mutations resulted in a much higher percentage of antigen-specific T cells over the negative control group that received NTFIX ( Figures 2D and 2E). No immune potentiation was observed when mice were immunized with OVA plus wild-type STING (data not shown). We also observed that mice vaccinated with OVA and caSTING V155M showed a slightly higher percentage of CD8 + T cells expressing IL-2 compared to the group vaccinated with antigen only. An antigen/STING V155M ratio of 5:1 demonstrated maximal immunogenicity Although type I IFNs are crucial for induction of protective immunity, they can also be detrimental to vaccine responses, largely due to the timing and intensity of the type I IFN signals relative to T cell receptor (TCR) activation. 21 For example, it has been shown that stimulation of T cells with type 1 IFNs prior to TCR activation led to T cell apoptosis (activation-induced T cell death) and upregulation of the coinhibitory receptor PD-1 (programmed cell death protein 1), thus limiting the T cell responses elicited. 
49 In other studies, TCR activation, when coincided with signaling induced by inflammatory cytokines such as type 1 IFN ("signal 3"), led to stronger effector and memory responses and better T cell survival. 50,51 These observations suggest the timing and amount of type I IFNs induced are critical for maximal immune potentiation. Therefore, we hypothesized that modulating the antigen/STING V155M mass ratio of the mRNA encapsulated within the LNP would impact the adjuvant effect observed. We first varied antigen/STING V155M mRNA mass ratios in the ADR neoantigen model. An antigen/STING V155M mass ratio of 5:1 (molar ratio 13.8) resulted in the highest antigen-specific CD8 + T cell responses ( Figure 3A), although all ratios tested showed some adjuvant effect. Antigen-specific CD8 + T cell responses in mice immunized with HPV16 E6/E7 mRNA-LNP were similar across antigen/ STING V155M ratios based on intracellular IFN-g production, but peak responses were observed with antigen/STING V155M mass ratios of 1:1 (molar ratio 2.76) or 5:1 (molar ratio 13.8) ( Figure 3B). Similarly, by combining the HPV E7 protein as antigen with varying mass ratios of STING V155M , the peak frequencies of E7-specific CD8 + T cells were also recovered at these ratios ( Figures 3C and 3D). We also characterized the T cell phenotypes that were induced when STING V155M was used as a vaccine adjuvant. Among CD44 antigen-experienced populations (ADR vaccine model) and E7-specific CD8 + T cells (HPV vaccine model), there was an increase in the percentage of CD62L lo among CD44 hi T cells ( Figures 3E and 3F), suggesting that STING V155M likely induces cells of an effector memory phenotype, which are more likely to remain in circulation and peripheral tissues for immune surveillance. No other noticeable differences were observed among other surface markers examined, including killer cell lectin-like receptor G1 (KLRG1), CD127 (IL-7 receptor), chemokine receptor CX3CR1, or CD27 (a member of the TNF receptor superfamily) (data not shown). On day 21, the percentage of SIINFEKL-specific CD8 + T cells in spleens was determined by intracellular staining of IFN-g, TNF-a, and IL-2 after a 4-h ex vivo stimulation with cognate peptide. Representative flow cytometry plots gated on total CD8 + T cells are shown. Data are representative of at least two independent experiments with four to five mice per group. Data plotted are mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. www.moleculartherapy.org Evaluation of mRNA-encoded constitutively active STING V155M as an adjuvant in murine tumor models Vaccines using STING agonists (e.g., cyclic dinucleotides [CDNs]) show overall improvement of adaptive immune responses to poorly immunogenic antigens in preclinical studies. 52 Importantly, tumor-bearing mice immunized with vaccine containing a STING agonist had delayed tumor growth and longer survival. 53 Therefore, we tested whether addition of STING V155M to an mRNA-encoded cancer antigen vaccine with HPV16 E6 and E7 model antigens could result in tumor volume reduction and/or longer survival. . At 21 days after immunization, spleens were assessed for the frequency of CD8 + T cells specific for the HPV16-E7 epitope RAHYNIVTF by flow cytometry. (E) C57BL/6 mice were immunized with mRNA-LNP for HPV E6/E7 as described at varying ratios. The percentage of CD62L lo among E7-specific CD8 + T cells was determined by flow cytometry at the indicated time points. 
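The quoted correspondence between mass and molar ratios (5:1 mass giving 13.8 molar, 1:1 giving 2.76) is simply the mass ratio rescaled by the ratio of the two transcripts' molar masses, i.e., approximately their length ratio. The sketch below reproduces that arithmetic; the transcript lengths are hypothetical placeholders chosen only so that their ratio equals the quoted factor of about 2.76, and are not values reported in the paper.

```python
# Sketch of the antigen/adjuvant mass-ratio -> molar-ratio conversion implied
# in the text (5:1 mass ~ 13.8 molar, 1:1 mass ~ 2.76 molar). The transcript
# lengths below are placeholders whose ratio is ~2.76; they are not reported values.

ANTIGEN_LEN_NT = 900    # hypothetical antigen mRNA length (nucleotides)
STING_LEN_NT = 2484     # hypothetical STING(V155M) mRNA length (nucleotides)

def molar_ratio(mass_ratio: float,
                antigen_len: int = ANTIGEN_LEN_NT,
                adjuvant_len: int = STING_LEN_NT) -> float:
    """Convert an antigen/adjuvant mRNA mass ratio to a molar ratio.

    moles = mass / (length * mass per nucleotide); the per-nucleotide mass
    cancels, so only the length ratio matters for same-chemistry mRNAs.
    """
    return mass_ratio * adjuvant_len / antigen_len

for mass_ratio_value in (5.0, 1.0):
    print(f"mass {mass_ratio_value}:1 -> molar {molar_ratio(mass_ratio_value):.2f}")
# mass 5.0:1 -> molar 13.80
# mass 1.0:1 -> molar 2.76
```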
(F) C57BL/6 mice were immunized with mRNA-LNP formulated with MC38 tumor neoantigens at varying antigen/STING mass ratios. The percentage of CD62L lo among CD44 hi CD8 + T cells was determined by flow cytometry at indicated time points. Data are representative of at least two independent experiments with four to five mice per group. Data plotted are mean ± SEM. *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. We utilized a TC-1 cell line that was derived from primary lung epithelial cells from C57BL/6 mice and transformed with HPV16 E6 and E7 oncoproteins. 54 Tumors were implanted subcutaneously in mice and their growth was monitored ( Figure 4A). Vaccination with mRNA-encoding antigen coformulated with STING V155M mRNA, when the vaccine was given therapeutically after TC-1 tumors reached an average of 80-120 mm 3 , resulted in significant inhibition of tumor growth compared to vaccine alone and prolonged survival;~50% of tumor-bearing mice that received vaccination with STING V155M survived at day 60 ( Figure 4B). We also examined the potential for enhancing the adjuvant effect of STING by inhibition of cytotoxic T lymphocyte-associated protein 4 (CTLA-4) function with a blocking antibody (9H10). Addition of anti-CTLA-4 antibodies had minimal synergy with STING V155M in prolonging survival and only slightly delayed tumor growth in this model ( Figure 4C; Figure S4). In addition, we compared mRNA encoded STING to DMXAA, a known murine STING agonist. 55 We did not observe additional survival benefit in the vaccine group treated simultaneously with DMXAA when coadministered intramuscularly with the same mRNA vaccine (Figures 4B and 4C; Figure S4). Antitumor (D and E) Tumor growth and Kaplan-Meier survival curves of mice were treated as described in (A) with or without depleting antibodies for CD8 or CD4 throughout the duration of the study. (F) C57BL/6 mice were immunized intramuscularly on day 0 and day 14 with mRNA-LNP (10 mg/mouse) coformulated into LNPs with HPV E6/E7 and STING V155M . Mice were treated with depleting antibodies for CD4 prior to each immunization. On day 21, the percentage of HPV E7-specific CD8 + T cells in spleens was determined by intracellular staining of IFN-g. Data are representative of at least two independent experiments. Data plotted are mean ± SEM. Statistical significance for survival analyses was calculated using the log-rank test: *p < 0.05, ***p < 0.001, ****p < 0.0001; ns, not significant. Data are representative of two independent experiments. effects associated with STING V155M were abrogated by depletion of CD8 + T cells, but not by depletion of CD4 + T cells ( Figures 4D and 4E), suggesting a critical role for CD8 + T cells in STING V155M -adjuvanted mRNA vaccine responses. Significant HPV E7-specific CD8 + T cell function (production of IFN-g and TNF-a) was induced by STING V155M -adjuvanted mRNA vaccine even with depletion of CD4 + T cells ( Figure 4F). We further examined the efficacy of STING V155M as a genetic adjuvant in a murine lung metastasis model using firefly luciferase-expressing TC1 (TC1-Luc) cells ( Figure 5A). In this model, mice vaccinated therapeutically after tumor implant with an mRNA antigen coformulated with STING V155M alone or coadministered with the STING agonist DMXAA resulted in inhibition of lung tumor growth ( Figure 5B). Mice vaccinated with mRNA-encoded antigen with mRNA-encoded STING V155M showed improved survival benefit over unvaccinated mice or mice vaccinated with mRNA-encoded antigen alone ( Figure 5C). 
Overall, these results demonstrated that enhanced vaccine responses to HPV-derived tumors can be achieved with constitutively active STING as an mRNA-encoded genetic adjuvant. DISCUSSION mRNA vaccines have been shown to be effective at inducing immune responses. We evaluated several potential candidates for genetic adjuvants encoded by mRNA to be coadministered with viral or tumor antigens also encoded by mRNA, including constitutively active forms of mediators of TLR, MAVS, and STING signaling pathways and mediators of cell death. Of all variants tested in different antigen models, the constitutively active mutation STING V155M , which had been described in a patient with SAVI, was most potent at inducing antigen-specific CD8 + T cell responses, possibly because it could activate NF-kB, IRF3, and IRF7. STING V155M and other constitutively active STING mutations induced IFN-g, TNF-a, and IL-2 production in antigen-specific CD8 + T cells in vivo, as well as inflammatory cytokines IFN-a, IL-6, MCP-1, MIP-1b, and RANTES in serum. We further optimized the antigen/STING V155M mass ratio (5:1) to maximize antigen-specific CD8 + T cell responses. Notably, mRNA-encoded STING V155M increased the efficacy of cancer vaccines in murine tumor models and a murine lung metastasis model. STING has been shown to participate in immune responses to viral infections, including DNA viruses, RNA viruses, and retroviruses, as well as bacterial infections, including both Gram-negative and Gram-positive bacteria. 56 STING has been postulated to play a role in autoinflammatory diseases associated with inappropriate leakage or exposure of nucleic acids, such as rheumatoid arthritis and systemic lupus erythematosus, and with antitumor activity (via activation of adaptive immunity). 56 This signaling molecule is therefore integral to many cellular processes involved in cell defense. Given the importance of STING in generating cellular immunity, adjuvants to activate STING (most notably CDNs) have been developed. 57 STING agonists have been shown to be effective adjuvants in both viral and cancer vaccine models in preclinical studies. 52,53 For example, synthetic CDN derivatives enhanced antitumor responses in therapeutic models of established cancers in mice. 53 Notably, direct activation of STING using CDNs injected into tumor cell lines resulted in regression of established tumors and development of immunologic memory. 58 However, using unformulated Representative flow cytometry plots gated on total CD8 + T cells are shown. Statistical significance for survival analyses was calculated using the log-rank test: *p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001. CDNs presents challenges that limit their use as adjuvants. For example, CDNs are anionic and do not readily cross cellular membranes, preventing them from reaching the cytosol where STING is localized. 59 Moreover, CDNs are rapidly cleared from the body with modest delivery to tumors and/or lymph nodes. However, these challenges can be partially overcome by using a polymer nanoparticle formulation, as CDNs used as a nanoparticulate adjuvant induced expansion of vaccine-specific CD4 + T cells and increased germinal center B cell differentiation in lymph nodes. 60 Providing constitutively active STING as an mRNA-encoded genetic adjuvant has advantages over coadministration of vaccine with a STING agonist (as CDNs) in an LNP delivery system. 
For example, a challenge with coadministration of STING agonists with adjuvants such as CDNs is inefficient drug loading (i.e., a low ratio of drug to carrier), which has been reported to be <10% in current nanomedicines 61 and only~35% in a bench-level formulation. 52 Drug loading is not an issue with a genetic adjuvant that is encoded by mRNA rather than presented as a CDN. The use of LNP as the delivery system to deliver mRNA-encoded STING as a genetic adjuvant for mRNA-encoded vaccines provides several benefits. First, LNPs are biodegradable, and the components of LNPs have been approved for clinical use by the US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) at much higher (systemic) exposures than would be expected for vaccines. 62 Second, LNPs protect, stabilize, and improve the bioavailability of the encapsulated mRNA. Third, intracellular mRNA is rapidly degraded, preventing prolonged activation of STING pathways and avoiding any subsequent undesirable inflammatory reactions. Moreover, we have demonstrated the ability of an mRNA-LNP vaccine against influenza to activate APCs in non-human primates. 14 Since direct activation of APCs is required to generate appropriate effector T cells, 63 we postulate that codelivery of STING V155M and antigens, both encoded by mRNA and delivered via LNP, will further enhance the T cell responses elicited by the vaccine. Because we are using mutant STING (V155M), there is a slight possibility of cross-reactivity to normal STING proteins. However, the mutant STING (V155M) is only transiently expressed and is an intracellular protein; therefore, the risk of eliciting cross-reactivity is considered low. An alternative in vivo strategy would be to use a constitutively active bacterial cGMP-AMP synthase (cGAS) molecule encoded by mRNA, which has been done for an adenovirus-based vaccine. 64 cGAS binds to cytosolic DNA and produces CDNs that can then indirectly activate the STING pathway in host cells. However, this approach is unlikely to provide sufficient STING ligand in the mRNA-based vaccine. We did not explore the role or impact of mRNA-encoded STING V155M on innate immunity, such as natural killer (NK) cells. Previous studies have shown that CD8 + T cells, but not NK cells, were required for anti-tumor immunity and prevented TC1 tumor growth in other vaccination settings, including E7/HSP70 DNA and Listeria-expressing E7 models. 65,66 However, because some recent reports suggest a role of STING on NK cell activation, 67,68 this possibility should be explored in future studies, perhaps in a different disease or tumor setting. In addition, it would be of interest to conduct tumor rechallenge experiments in cured mice to further characterize the memory response that was observed in the present study. This proof-of-concept study showed that including an mRNA-encoded genetic adjuvant with an mRNA-encoded antigen in a LNP vaccine enhances immune responses to viral and tumor antigens. These results demonstrate the exciting potential of genetic adjuvants to enhance immune responses to mRNA vaccines. MATERIALS AND METHODS mRNA synthesis and formulation mRNA was synthesized in vitro by T7 RNA polymerase-mediated transcription with N1-methylpseudouridine in place of uridine. The linearized DNA template incorporates the 5 0 and 3 0 untranslated regions (UTRs) and a poly(A) tail as previously described. 3 The final mRNA was capped to increase mRNA translation efficiency. 
After purification, the mRNA was diluted in citrate buffer to the desired concentration. The mRNA was coformulated in the same LNP at the indicated mass ratio. LNP formulations were prepared using a modified procedure of a method previously described. 1 Briefly, lipids (ionizable/helper/ structural/polyethylene glycol [PEG]) were dissolved in ethanol and combined with an acidification buffer of 50 mM citrate buffer (pH 4.0) containing mRNA at a ratio of 2:1 aqueous/ethanol using synchronized syringe pumps (Harvard Apparatus). Formulations were diafiltered and concentrated against 20 mM Tris (pH 7.4) with 8% sucrose using Pellicon XL 100-kDa tangential flow membranes (EMD Millipore), passed through a 0.22-mm filter, and stored frozen until use. The structure and composition of the LNP was similar to that described previously. 69 Formulations were tested for particle size, RNA encapsulation, and endotoxin. All were found to be 80-100 nm in size by dynamic light scattering and with >80% encapsulation and <10 endotoxin units (EU)/mL endotoxin. In vitro IFN and NF-kB induction assays B16-Blue ISG-KO STING (#bb-kostg) and THP1-Dual (#thpd-nfis) stable cell lines were purchased from InvivoGen (San Diego, CA, USA). Cells were seeded in duplicate wells at a density of 25,000 cells/well in 96-well plates and transfected with STING mRNA variants or mCitrine mRNA at 250 ng/well using 0.3 mL/well Lipofectamine 2000 (Thermo Fisher Scientific, USA; #11668019) according to the manufacturer's instructions. Supernatants were harvested 24 h after transfection, and IFN-b levels were determined by a standard murine IFN-b ELISA assay (BioLegend, San Diego, CA, USA; #439408) according to the manufacturer's instructions. Optical density (OD) was measured at 450 nm on a microplate reader (Synergy H1, BioTek, Winooski, VT, USA). NF-kB and ISG activation were determined by assessing the levels of SEAP using QUANTI-Blue reagent (InvivoGen), with the OD read at 655 nm. Data analysis was performed using GraphPad Prism (GraphPad, La Jolla, CA, USA). TC-1 and TC-1 luc murine tumor models TC-1 is a murine cell line derived from primary lung epithelial cells of C57BL/6 mice and cotransformed with HPV16 E6, HPV16 E7, and cHa-RAS oncogenes. 54 TC-1 cells were retrovirally infected with pLuci-thy1.1 and isolated by preparative flow cytometry to yield TC-1 luc cells. 70,71 Both cell lines were licensed from Johns Hopkins University through a material transfer agreement. Efficacy studies were conducted by Charles River Discovery (NC, USA). TC-1 cells used for implantation were harvested during log phase growth. Female C57BL/6 mice were injected with a single-cell suspension of 2.5 Â 10 5 cells/animal subcutaneously. Tumor growth was measured using a caliper to determine the tumor size. The first dose of mRNA-LNP vaccine was given intramuscularly when TC-1 tumors reached an average of 80-120 mm 3 . The second dose of mRNA-LNP was given 7 days after the first injection. For some experiments, anti-CTLA-4 9H10 (Bio X Cell, West Lebanon, NH, USA; lot no. 624316D1B) was administered intraperitoneally (5 mg/kg on day 0; 2.5 mg/kg on day 3 and day 6) or DMXAA (Inviv-oGen, San Diego, CA, USA) was coadministered with the mRNA vaccine intramuscularly (200 mg/animal). For the TC-1 luc study, female C57BL/6 Albino mice (B6N-Tyrc-Brd/ BrdCrCrl, 8 weeks old; Charles River) were injected intravenously with 5 Â 10 5 cells. 
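The NF-kB and ISG reporter readouts described above are SEAP activities measured as optical densities; a common way to summarize such data is fold induction over the negative-control transfection. The sketch below assumes a simple blank-subtracted mean ratio, which is one reasonable normalization but is not spelled out in the text; the OD values are invented for illustration.

```python
# Illustrative fold-induction calculation for the SEAP reporter readout
# (QUANTI-Blue OD at 655 nm). Blank subtraction and normalization to the
# mCitrine negative-control wells are assumptions for this sketch.

from statistics import mean

def fold_induction(sample_od: list[float],
                   control_od: list[float],
                   blank_od: float = 0.0) -> float:
    """Mean background-subtracted OD of the sample relative to the control."""
    s = mean(sample_od) - blank_od
    c = mean(control_od) - blank_od
    if c <= 0:
        raise ValueError("control signal must exceed the blank")
    return s / c

# hypothetical duplicate-well readings
sting_v155m = [1.42, 1.38]   # wells transfected with STING(V155M) mRNA
mcitrine = [0.21, 0.19]      # negative-control mRNA wells
print(f"fold induction: {fold_induction(sting_v155m, mcitrine, blank_od=0.05):.1f}x")
```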
Luciferase activity was measured in live animals using IVIS SpectrumCT (PerkinElmer, Waltham, MA, USA) equipped with a charge-coupled device camera. On the day of imaging, animals were injected with VivoGlo D-luciferin substrate (Promega, Madison, WI, USA; #P1043) (150 mg/kg intraperitoneally). Light emitted from the bioluminescent cells was detected, digitalized, and imaged to allow for anatomical localization. Data were analyzed and exported using Living Image software 4.5.1 (PerkinElmer). Flux (photons/s) equaling the radiance in each pixel summed or integrated over the region of interest (cm 2 ) Â 4p was used to report quantifiable bioluminescent signal reflecting tumor burden. Statistical analyses Statistical analyses were performed using GraphPad Prism software (version 7.03). All values and error bars are mean ± standard error of the mean (SEM) unless otherwise indicated. Comparisons of multiple groups were performed using one-way analysis of variance (ANOVA), unless otherwise noted. An adjusted p value of <0.05 was considered significant. Data and materials availability All data needed to evaluate the conclusions in the paper are present in the paper and/or in Supplemental information. Additional data related to this paper may be requested from the authors. SUPPLEMENTAL INFORMATION Supplemental information can be found online at https://doi.org/10. 1016/j.ymthe.2021.03.002. received salary and stock options as compensation for their employment.
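Referring back to the imaging quantification above, total flux (photons/s) is defined as the radiance summed over the region of interest multiplied by 4*pi steradians. The sketch below shows that sum over a pixelized radiance map; the radiance values, ROI mask, and pixel area are placeholders, not data from the study.

```python
# Minimal sketch of the bioluminescence quantification: total flux (photons/s)
# = sum over the ROI of radiance (photons/s/cm^2/sr) times pixel area, times 4*pi.
# All numbers below are made-up placeholders.

import math

def total_flux(radiance_map: list[list[float]],
               roi_mask: list[list[bool]],
               pixel_area_cm2: float) -> float:
    """Integrate radiance over an ROI and scale by 4*pi steradians."""
    integral = 0.0
    for row_radiance, row_mask in zip(radiance_map, roi_mask):
        for radiance, inside in zip(row_radiance, row_mask):
            if inside:
                integral += radiance * pixel_area_cm2
    return integral * 4.0 * math.pi

radiance = [[1.0e5, 2.5e5], [3.0e5, 0.5e5]]   # placeholder radiance image
mask = [[True, True], [True, False]]          # placeholder ROI
print(f"{total_flux(radiance, mask, pixel_area_cm2=0.01):.3e} photons/s")
```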
2021-03-09T06:22:54.198Z
2021-03-04T00:00:00.000
{ "year": 2021, "sha1": "3b024b2d8584e1843652770fbbfa03e66611078d", "oa_license": "CCBYNCND", "oa_url": null, "oa_status": "CLOSED", "pdf_src": "PubMedCentral", "pdf_hash": "ccaf0d1987e2b4607880372d7a79a4cf578039a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
8154370
pes2o/s2orc
v3-fos-license
Using VLBI Data to Investigate the Galaxy Structure in the Gravitationally Lensed System B1422+231 Gravitationally lensed systems with multiply imaged quasars are an excellent tool for studying the properties of distant galaxies. In particular, they provide the most accurate mass measures for the lensing galaxy. The system B1422+231 is a well studied example of a quadruply imaged quasar, with high-quality VLBI data available. Very accurate data on image positions, fluxes and deconvolved image sizes provide good constraints for lensing models. We discuss here the failure of smooth models in fitting the data. Since it is intuitively clear that the mass of a lens galaxy is not a smooth entity, we have investigated how deviation from a smooth model can influence lensing phenomena, especially the image flux ratios. To explore expectations about the level of substructure in galaxies and its influence on strong lensing, N-body simulations of a model galaxy are employed. By using the mass distribution of this model galaxy as a lens, synthetic data sets of different four image system configurations are generated. Their analysis can possibly provide evidence for the presence and strong influence of substructure in the primary lens galaxy. The mystery of B1422+231 The gravitational lens system B1422+231 was discovered in the course of the JVAS survey (Jodrell Bank -VLA Astrometric Survey) by Patnaik et al. (1992). It consists of four image components. The three brightest images A, B, and C (as designated by Patnaik et al. 1992) are fairly collinear. The radio flux ratio between images A and B is approximately 0.9, while image C is fainter (flux ratio C to B is approximately 0.5). Image D is further away and is much fainter than the other images (with flux ratio D:B of 0.03). We used the most recent available radio data for the image positions and fluxes from the polarisation observations made at 8.4 GHz using the VLBA and the 100m telescope at Effelsberg (Patnaik et al. 1999). For each of the components, the authors measured positions (relative to the image B) and fluxes as well as the deconvolved image shapes. The radio source of this lens system is associated with a 15.5 mag quasar at a redshift of 3.62 (Patnaik et al. 1992). The lensing galaxy has been observed in the optical; its redshift has been determined to be 0.338 and its position relative to image B has been measured (Impey et al. 1996). The main lens galaxy is a member of a compact group with a median projected radius of 35 h −1 kpc and velocity dispersion of ∼ 550 km s −1 (Kundic et al. 1997). Several groups have tried to model B1422+231 (Hogg & Blandford 1994;Kormann et al. 1994;Keeton et al. 1997;Mao & Schneider 1998) and all of them have experienced difficulties in fitting the image parameters. As we used data with even more precise image positions one might expect that it would become even harder to model the system. However, as already pointed out by some authors, the difficulties do not lie in fitting the image posi-tions but rather in the flux ratios. All the results can be found in detail in Bradač et al. (2002) 2. Lens modelling with smooth models First we considered two standard gravitational lens models. We used a singular isothermal ellipsoid with external shear from Kormann et al. (1994) (hereafter SIE+SH) and a non-singular isothermal ellipsoid model with external shear (NIE+SH) from Keeton & Kochanek (1998) to fit the image positions and fluxes of B1422+231. 
We have applied the standard χ 2 minimisation procedure to the radio data, using image positions, fluxes, and their uncertainties from Patnaik et al. (1999). The optical position of the galaxy was taken from Impey et al. (1996). Although the image positions are very accurate (uncertainties of the order of 50 µarcsec), we have no difficulties fitting them. However, as already pointed out in previous works on B1422+231, such models completely fail in predicting the image fluxes. In particular image A is predicted too dim (the flux ratio A:B as predicted by SIE+SH model is 0.80, much below the measured value of 0.93). We have also tried to model the system with a NIE+SH model; however, the χ 2 did not improve significantly. Although other smooth models can still be investigated, it seems unlikely that another smooth model can explain all four image fluxes simultaneously. Even when one disregards the flux of image A, a smooth macro-model seems to be incapable of explaining the remaining flux ratios. Models with substructure The A:B flux ratio causes the biggest difficulty in fitting B1422+231. Since the radio and optical flux ratios are very different, one is tempted to exclude it from the χ 2 measure. However, one can also try to deal with this problem in another way. Adding a small perturber at the same angular diameter distance as the primary lens and at approximately the same position as image A can change the flux ratio A:B substantially. On the other hand, calculations show (Mao & Schneider 1998) that such a perturber does not affect the positions of any of the images appreciably. Furthermore, a small perturber can also change the flux ratio of the other images slightly and this might help to improve the results from the previous section. We model the perturber as a non-singular isothermal sphere. We are aware of the fact that the choice of this particular model for the substructure is oversimplified in many ways. However, we are not trying to constrain the nature of substructure in this case, which is impossible due to the number of constraints available. The resulting model has 12 parameters, which leaves us 0 degrees of freedom. For a model with zero degrees of freedom we expect χ 2 to vanish if the model is realistic. The resulting χ 2 = 5.6 remained high; this family of models considered does not seem to be adequate for the description of the galaxy in B1422+231 lens system. Using image shapes as constraints For the more sophisticated models it is very difficult to ensure a constrained model that accounts for the substructure using as constraints only image positions, flux ratios, and the galaxy position. For this reason we also included the axis ratios and orientation angles of the deconvolved images from Patnaik et al. (1999) as additional constraints. It turned out that it is difficult to simultaneously stretch and rotate the images with one (or two) perturber(s). The macro-model is not very successful in predicting both components of the image ellipticities and the fluxes, and therefore corrections are needed in the case of all four images. We can safely assume that the inclusion of further sub-clumps in the model would eventually lead to a "perfect" fit with the observed data. In particular three more sub-clumps close to the images B, C, and D would yield a significant improvement to the χ 2 and could explain the observed data. 
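To make the structure of this fit concrete, the sketch below assembles a chi-squared from image positions (through a source-plane term) and flux ratios taken relative to image B. For compactness it uses a singular isothermal sphere plus external shear rather than the SIE+SH and NIE+SH parametrizations of Kormann et al. and Keeton & Kochanek actually used here, and the image positions are placeholders rather than the VLBA measurements; only the approximate flux ratios quoted in the text are reused.

```python
# Schematic chi^2 for fitting image positions and flux ratios of a four-image
# lens. Simplifications: SIS + external shear instead of SIE+SH/NIE+SH, and a
# source-plane position term. Positions below are placeholders, not B1422+231 data.

import numpy as np

def deflection(theta, theta_e, gamma1, gamma2):
    """Deflection of an SIS plus external shear at image-plane position theta."""
    x, y = theta
    r = np.hypot(x, y)
    alpha_sis = theta_e * np.array([x, y]) / r
    alpha_shear = np.array([gamma1 * x + gamma2 * y, gamma2 * x - gamma1 * y])
    return alpha_sis + alpha_shear

def magnification(theta, theta_e, gamma1, gamma2, eps=1e-6):
    """|mu| = 1/|det A| with A = d(beta)/d(theta), via finite differences."""
    A = np.zeros((2, 2))
    for j in range(2):
        dp = np.array(theta, dtype=float); dp[j] += eps
        dm = np.array(theta, dtype=float); dm[j] -= eps
        bp = dp - deflection(dp, theta_e, gamma1, gamma2)
        bm = dm - deflection(dm, theta_e, gamma1, gamma2)
        A[:, j] = (bp - bm) / (2 * eps)
    return 1.0 / abs(np.linalg.det(A))

def chi2(params, images, fluxes, sigma_pos, sigma_flux):
    theta_e, gamma1, gamma2 = params
    betas = [np.array(t) - deflection(t, theta_e, gamma1, gamma2) for t in images]
    beta_mean = np.mean(betas, axis=0)
    chi_pos = sum(np.sum((b - beta_mean) ** 2) for b in betas) / sigma_pos ** 2
    mags = np.array([magnification(t, theta_e, gamma1, gamma2) for t in images])
    model_ratios = mags / mags[1]            # image B (index 1) as reference
    obs_ratios = np.array(fluxes) / fluxes[1]
    chi_flux = np.sum((model_ratios - obs_ratios) ** 2 / sigma_flux ** 2)
    return chi_pos + chi_flux

# placeholder positions (arcsec); flux ratios roughly as quoted in the text
images = [(1.0, 0.1), (0.6, 0.8), (-0.3, 0.9), (0.1, -1.0)]   # A, B, C, D
fluxes = [0.93, 1.00, 0.48, 0.03]
print(chi2((1.0, 0.05, -0.02), images, fluxes, sigma_pos=5e-5, sigma_flux=0.02))
```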
Strong lensing by an N-body simulated galaxy A question that arises from (difficulties in) model fitting of B1422+231 is whether such behaviour is seen with an N-body simulated galaxy and therefore generic of a typical galaxy lens. We used the cosmological N-body simulation data including gas dynamics and star formation of Steinmetz & Navarro (2001) for this purpose. The lens properties are calculated using the IMCAT software. We use this "lens" to generate systems that have similar configuration and flux ratios as in the case of B1422+231. In total we considered 11 different synthetic systems. The fitting procedure is performed in the same way as for the B1422+231 data. Again, image B is taken as a reference. We try to fit the positions and fluxes with SIE+SH and SIE+SH+NIS models. We experience similar problems fitting fluxes as before; the χ 2 -function value is high for all 11 data sets. There are indications that the level of substructure as obtained from simulations can influence lensing phenomena a lot. In particular, the synthetic fluxes we obtained deviate strongly from those predicted by smooth models. This particular example of a simulated galaxy can of course not give us the answers to the aforementioned questions. To draw stronger conclusions, one would have to investigate many different realisations of N-body simulated galaxies and in addition use higher resolution simulations (currently unavailable). A statistical analysis to investigate the strong lensing properties could then be made.
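The synthetic image fluxes discussed above are set by the local magnification, mu = 1/[(1 - kappa)^2 - gamma^2], where kappa is the convergence and gamma the shear at the image position. Small substructure-induced changes in kappa and gamma therefore alter flux ratios while barely moving the images, which is the behaviour seen in the synthetic data sets. The numbers below are purely illustrative, not values extracted from the simulation.

```python
# Point magnification of a lensed image and a toy illustration of how a small
# substructure-induced change in the local convergence/shear alters a flux
# ratio while the smooth macro-model stays fixed. Values are illustrative.

def magnification(kappa: float, gamma: float) -> float:
    """mu = 1 / ((1 - kappa)^2 - gamma^2) for a given convergence and shear."""
    return 1.0 / ((1.0 - kappa) ** 2 - gamma ** 2)

mu_B = magnification(kappa=0.45, gamma=0.30)                       # smooth model at image B
mu_A_smooth = magnification(kappa=0.48, gamma=0.33)                # smooth model at image A
mu_A_clumpy = magnification(kappa=0.48 + 0.03, gamma=0.33 + 0.02)  # plus a local clump

print(f"A:B flux ratio, smooth macro-model : {mu_A_smooth / mu_B:.2f}")
print(f"A:B flux ratio, with local clump   : {mu_A_clumpy / mu_B:.2f}")
```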
2014-10-01T00:00:00.000Z
2002-06-01T00:00:00.000
{ "year": 2002, "sha1": "a8b6aaad739bae6676cae9eb554c84b906d577f5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "0d46c4ab7dac927c19b59781aad441ffb60d60aa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
21726968
pes2o/s2orc
v3-fos-license
Myelodysplasia in a psoriasis patient receiving etanercept: Cause or coincidence? Sir, Myelodysplastic syndrome comprises hemopoietic insufficiency, often associated with cytopenias of one or more cell lineages that may lead to leukemic transformation. 1 Here, we report a case of myelodysplastic syndrome in a psoriatic patient following therapy with tumour necrosis factor-α blocking agent (etanercept). A 76-year-old male patient presented with psoriasis vulgaris since the last 20 years without any arthritis. He was treated with subcutaneous etanercept, 50 mg twice a week for 6 months followed by once a week for another 6 months, to achieve psoriasis area and severity index 90 (PASI90) at 6 months. Prior to etanercept, he had been treated with topical steroids, topical vitamin D analogues, 6-month course of narrow-band-ultraviolet-B and 1-year acitretin without any appreciable benefit. Bone marrow aspirate [ Figure 1] and biopsy [ Figure 2] revealed hypercellularity. Blast cells accounted for almost 8% of the total nucleated marrow cells. The majority of neutrophils were hypogranular. Hypolobulated megakaryocytes were observed. Reticulin fibres were not increased. Peripheral blood also revealed bicytopenia, blasts and 0.5 × 10 9 /L monocytes. Bone marrow showed 8% trilineage dysplasia and blasts. A diagnosis of myelodysplastic syndrome with excess blasts-1 was made based on the hematologic picture. We stopped etanercept treatment and administered two cycles of azacitidin and folic acid supplementation, with almost no response and even worsening of platelet count (17 × 10 9 /L) and hemoglobin (8.3 g/dl). While he was waiting for the third cycle, he was admitted to the emergency unit suffering from lower gastrointestinal bleeding, epistaxis and shock. The patient expired eventually due to cardiopulmonary arrest. Immune dysregulation and altered T-cell hemostasis are essential factors for the development of myelodysplastic syndrome. Several authors have reported a higher risk of myelodysplastic syndrome in patients suffering from autoimmune disorders, resulting from chronic overproduction of apoptosis inducing cytokines like tumor necrosis factor-alpha. It has been proven that accelerated apoptosis of bone marrow cells accounts for the disturbed hemopoiesis and peripheral blood cytopenias leading to myelodysplastic syndrome, despite the presence of hypercellular bone marrow. 1 In addition, nonspecific activation and proliferation of T lymphocytes seen in myelodysplastic syndrome has been documented to promote epidermal growth in genetically susceptible psoriasis patients. 2 Myelodysplastic syndrome may be associated with psoriasis in about 7% of cases. 3 Özbek et al. reported a 3.5-year-old girl with psoriasis, hypogammaglobulinemia and pancytopenia who developed myelodysplastic syndrome-excess blasts that progressed to acute myeloid leukemia. 2 Moreover, Maleszka et al. noted increased incidence of leukemia and laryngeal cancer among families of psoriasis patients.
4 In addition, there are some reported cases of leukemia that developed in psoriasis patients receiving systemic immunosuppressives (cyclosporine, methotrexate and etanercept). However, the association of leukemia and psoriasis is not well-investigated. 4 Etanercept may induce various hematological side effects including pancytopenia and aplastic anaemia, as reported by the US Food and Drug Administration. 5 Tumor necrosis factor-alpha enforces apoptosis of tumor cells and promotes different antitumor activities like activation of natural killer cells, stimulation of CD8 cells and acceleration of camptothecan and etoposide antitumor effect. Loss of such activities may mediate tumor growth in acute and chronic myeloid leukemia. 6 Tumor necrosis factor-alpha is also inhibitory to hematopoietic stem cells. Studies have shown increased levels of tumor necrosis actor-alpha in myelodysplastic syndrome marrow. Thus, antagonizing it significantly enhances in-vitro hematopoietic colony formation. Deeg et al. tried to treat myelodysplastic syndrome with etanercept as a pilot trial. 7 However, their study found limited favorable response in some patients and worsening of blood cell counts in others. The findings of Deeg et al. and the contradictory effects of tumor necrosis factor-alpha on dysplastic bone marrow suggest that tumor necrosis factor-alpha is only partially accountable for the dysregulated hemopoiesis in myelodysplastic syndrome; thus there are various other pathomechanisms for myelodysplastic syndrome. 7 Only three cases of myelodysplastic syndrome have been reported till date in psoriasis patients following etanercept therapy, ours being the 4 th case. Nair et al. reported the case of a 57-year-old man with psoriatic arthritis who developed myelodysplasia that progressed to acute myeloid leukemia after 6 months of etanercept treatment. 6 Bachmeyer et al. 8 and Knudson et al. 9 reported the cases of 40 and 43-year-old psoriatic males who were diagnosed with myelodysplastic syndrome after 4 and 14 months of etanercept therapy, respectively. The case reported by Knudson et al. had progressed to acute myeloid leukemia followed by death. 9 In addition to the aforementioned cases, another case was reported by Bakland and Nossent where a 31-year-old female with ankylosing spondylitis developed acute myeloid leukemia 4 months after etanercept treatment. 5 In our case, myelodysplasia with excess blasts developed 1 year after initiating etanercept therapy. Taken together, the current case adds to the growing evidence that suggests a link between myelodysplastic syndrome and etanercept treatment in psoriasis patients. Pre-treatment thrombocytopenia was seen in our patient (131 × 10 9 /L) and the patient reported by Knudson et al. (126 × 10 9 /L) 9 , while mild leucopenia (3.6 × 10 9 /L) was reported by Bakland and Nossent before initiation of etanercept. 5 More studies are needed to clarify whether this was an accidental association or etanercept may aggravate myelodysplasia in all susceptible patients. Although progressive and critical worsening of blood counts following etanercept treatment may demonstrate its aggressive hematologic and myelodysplastic adverse events, the susceptibility of psoriasis patients to myelodysplasia cannot be ruled out. Therefore, the present case and literature review support the need for pharmacovigilance, prospective cohort or retrospective case-control studies to prove or disprove this association. 
White blood cells (reference range 3.0-11.0 × 10^9/L): 9.4 and 6.0 × 10^9/L (fragment of the laboratory values table). In conclusion, psoriatic patients being treated with etanercept should be considered at dual risk of developing myelodysplastic syndrome: therapy-related and autoimmunity-associated. Hence, we recommend that psoriatic patients who are receiving etanercept should be followed regularly with routine blood counts, and that the drug be discontinued upon onset of any cytopenias. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given his consent for his clinical information to be reported in the journal. The patient understands that his name and initials will not be published. Financial support and sponsorship Nil.
2018-05-21T21:28:04.221Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "80c37aac33c822e3509ffd58ae74f5e39175d909", "oa_license": null, "oa_url": "https://doi.org/10.4103/ijdvl.ijdvl_463_17", "oa_status": "GOLD", "pdf_src": "WoltersKluwer", "pdf_hash": "3c1b75aa5fd45b7c25e989e31b0c65e6ccea61c1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248651607
pes2o/s2orc
v3-fos-license
Local transformation of the Electronic Structure and Generation of Free Carriers in Cuprates and Ferropnictides under Heterovalent and Isovalent Doping We have previously shown that most of the anomalies in the superconducting characteristics of cuprates and ferropnictides observed at dopant concentrations within the superconducting dome, as well as the position of the domes in the phase diagrams, do not require knowledge of the details of their electronic structure for explanation, but can be understood and calculated with high accuracy within the framework of a simple model describing the cluster structure of the superconducting phase. This fact suggests a change in the paradigm that forms our understanding of HTSC. In this paper, we propose a unified view on the transformation of the electronic structure of cuprates and ferropnictides upon heterovalent and isovalent doping, based on the assumption of self-localization of doped carriers. In this representation, in undoped cuprates and ferropnictides, which initially have different electronic structures (Mott insulator and semimetal), local doping forms percolation clusters with the same electronic structure of a self-doped excitonic insulator where a specific mechanism of superconducting pairing is implemented, which is genetically inherent in such a system. The proposed model includes a mechanism for generating additional free carriers under heterovalent and isovalent doping and makes it possible to predict their sign, which, in the general case, does not coincide with the sign of doped carriers. Introduction Elucidation of the mechanism of high-temperature superconductivity still remains a topical issue of condensed matter physics. Despite many years of efforts by theorists and an enormous volume of accumulated experimental knowledge of these materials, many of their features have eluded satisfactory explanation. At the same time, we believe that based on the already established experimental facts, it is possible to identify key aspects that open the way to solving the problem of HTSC. To date, a list of high-temperature superconductor compounds includes mainly representatives of two classes: cuprates and ferropnictides. In an undoped (stoichiometric) state, 1 Correspondence to [mitsen@lebedev.ru] 2 these compounds (except LiFeAs) are not superconductors. A superconductivity in them emerges only as a result of heterovalent or isovalent doping, i.e., at a partial substitution of atoms of an element with atoms of another element with a different or the same valence, respectively. Under heterovalent doping, both cuprates and ferropnictides experience, in the general case, the same sequence of transitions from an antiferromagnetic to a superconducting state and then to a normal metal, i.e., superconductivity takes place in only a limited range of doping. It is generally accepted that heterovalent doping of cuprates and ferropnictides introduces additional charge carriers into their basal CuO2 and FeAs planes, which, due to some causes, leads in a certain range of concentrations to the initiation of a superconducting pairing in this compound at T < Tc. Herewith the Tc(x) dependence curve has a dome shape in both cuprates and ferropnictides, which suggests the similarity of the mechanisms leading, with increasing doping, to the transition of these compounds to a state where the specific mechanism of superconducting pairing can be implemented. 
It is surprising, however, that the same dome-shaped Tc(x) dependence is also observed for ferropnictides with isovalent substitution of ions in the basal plane, for example, in BaFe2(As1-xPx)2 and Ba(Fe1-xRux)2As2, and, moreover, as shown by experiment, such doping, as in the case of heterovalent substitution, is accompanied by a change in the concentration of carriers [1,2]. All these facts make one look closer at the doping process leading to the formation of superconducting phase in initially non-superconducting parent compounds of cuprates and ferropnictides. Until now, the predominant majority of works have considered doping as a process of a spatially homogeneous change of electronic structure. This approach, however, fails to agree with many experiments where spatial variations of local charge density and superconducting order parameter are observed in doped specimens [3][4][5][6]. What is more, many experiments demonstrate that doped carriers in both cuprates and ferropnictides are strictly localized in the nearest vicinity of the dopant [7][8][9][10][11]. Previously, we proposed a unified mechanism of local iso-and heterovalent (hole and electron) doping, which is implemented in superconducting cuprates and ferropnictides [12,13]. The proposed mechanism assumes the local nature of the transformation of their band structure under doping, which leads to the formation of inhomogeneous cluster structure of the superconducting phase [13], the parameters of which are determined by the concentration of the dopant and crystal structure of the undoped compound. We have shown that basing only on the knowledge of the crystal structure and type of dopant, it is possible: 1) to accurately determine the positions of superconducting domes in the 3 phase diagrams of HTSC compounds [12], 2) to explain the nature and the positions of sharp peaks in the London penetration depth as well as the positions of anomalies in the anisotropy of resistance depending on the level heterovalent or isovalent doping [13], 3) to understand the nature of "magic" concentration values corresponding to abrupt changes in superconducting properties (1/8 anomaly, etc.) [12]. In other words, we have demonstrated that all the above experimentally observed anomalies of the superconducting characteristics depending on doping can be understood and accurately calculated within the framework of unified model describing the cluster structure of the superconducting phase. It is importantly, the proposed interpretation does not require the introduction of additional theoretical concepts to explain observed anomalies. The possibility of obtaining this information without using any data on the features of their electronic structure can be considered as an indication of the existence of some general and rather crude mechanism that operates regardless of the details of the electronic structure of these compounds and provides superconducting pairing. In this work, we propose a unified view on the transformation of the electronic structure of cuprates and ferropnictides upon heterovalent and isovalent doping. According to our idea, in undoped cuprates and ferropnictides, which initially have different electronic structures (Mott insulator and semimetal), local doping forms percolation clusters with the same electronic structure of a self-doped excitonic insulator. 
The proposed model also includes a specific mechanism for generating extra free carriers under heterovalent and isovalent doping and makes it possible to predict their sign, which, in the general case, does not coincide with the sign of the doped carriers. It is also assumed that a specific mechanism of superconducting pairing is realized in the formed clusters, which is genetically inherent in such a system and is due to the interaction of band electrons with excitonic states. Heterovalent doping and trionic complexes According to [12], in the case of heterovalent doping of cuprates and ferropnictides, the local nature of the transformation of the electronic structure is due to the self-localization of doped carriers as a result of the formation of trion complexes by them. Such a complex binds the doped carrier with charge-transfer excitons (CT excitons), which are formed in the basal plane under the influence of this very carrier. Note that self-localization takes place as the temperature decreases below a certain localization temperature of the doped carriers, Tloc. The possibility for the existence of CT excitons in cuprates and ferropnictides follows from the fact that both classes of compounds are characterized by a low concentration of charge carriers, n < 10^22 cm^-3, which corresponds to a mean distance between carriers rs > 0.4 nm, exceeding the distance between the anion and cation. This means that the interaction inside the cell is essentially unshielded, which is what enables the existence of well-defined CT excitons [14]. [Fig. 1: (a,b) the CuO2 and FeAs lattices; (c,d) the band structures of undoped cuprates and ferropnictides, where the energy Δ required for the interband transition (transfer of an electron from an oxygen ion to a copper ion in cuprates, or of a hole from an iron ion to an arsenic ion in ferropnictides) is approximately the same in both cases, ~2 eV; (e,f) under the impact of a doped charge localized near the plaquette, Δct can be decreased to a value Δct* smaller than the CT-exciton binding energy Eex, leading to a transition to an exciton-bound state of an electron (hole) on the central ion and a hole (electron) on the orbital of the surrounding ions (an intracrystalline analog of the hydrogen atom).] For simplicity of consideration, but without loss of generality, we will consider the FeAs lattice (Fig. 1b) as a plane lattice and will replace the several bands in the vicinity of the Fermi energy by one band (Fig. 1d,f). The lattices in Fig. 1a,b can be considered as a net of hydrogen-like plaquettes CuO4 (in cuprates) and AsFe4 or FeAs4 (in ferropnictides). The point of singling these plaquettes out is that the energy required for electron (hole) transfer between the central ion and the four adjacent ions is approximately the same and equals Δct ≈ 1.5-2 eV [15,16]. This energy, however, can be decreased to a value Δct* smaller than the CT-exciton binding energy Eex, which is estimated to be ~1 eV [14]. That is to say, a transition to the exciton-bound state of an electron (hole) on the central ion and of a hole (electron) on the orbital of the adjacent ions will occur in this plaquette. We will call such plaquettes CT plaquettes. The value of Δct can be reduced to Δct*, for example, by placing a charge of a certain value on one of the plaquette ions or near it.
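To get a feel for the numbers quoted above (Δct ≈ 1.5-2 eV, Eex ≈ 1 eV, an induced charge of order e/4), the short script below estimates how much an unscreened point charge of e/4 placed a few angstroms from a plaquette ion shifts its on-site energy. The distances used are illustrative assumptions of ours, not values taken from the paper; the point is only that such a charge naturally produces a shift of order 1 eV, which is what is needed to bring Δct* below Eex.

```python
# Back-of-envelope check (not from the paper): by how much does a localized
# charge q = e/4 sitting a few angstroms from a plaquette ion shift the
# charge-transfer gap, if the Coulomb interaction is completely unscreened?

COULOMB_EV_ANGSTROM = 14.40  # e^2 / (4*pi*eps0), in eV * Angstrom

def coulomb_shift_ev(q_over_e, r_angstrom):
    """Unscreened Coulomb energy of a point charge q at distance r, in eV."""
    return q_over_e * COULOMB_EV_ANGSTROM / r_angstrom

delta_ct = 2.0  # eV, bare charge-transfer gap quoted in the text
e_ex = 1.0      # eV, CT-exciton binding energy quoted in the text

for r in (2.5, 3.5, 5.0):              # assumed dopant-to-ion distances, Angstrom
    shift = coulomb_shift_ev(0.25, r)  # induced charge of order e/4, as in the text
    gap = delta_ct - shift
    print(f"r = {r:3.1f} A: shift = {shift:4.2f} eV, "
          f"Delta_ct* = {gap:4.2f} eV ({'<' if gap < e_ex else '>='} E_ex)")
```

For distances up to roughly 3.5 A the shift exceeds 1 eV and the condition Δct* < Eex is met, while at 5 A it is not, which is consistent with the strictly local character of the gap reduction assumed in the model.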
Indeed, within the framework of a simple ionic model, the value of ∆ct is determined by the following relationship [17]: Here, Ecat is the minimal energy of an electron transferred to a cation; Ean, the maximal Obviously, by placing near one of the ions forming the plaquette some additional charge of a certain sign, one can change |EM| and, thus, to decrease ct for this plaquette. If this reduction is sufficient to fulfill the condition ∆ ct * <Eex, then this will lead to the creation of CT plaquette. The ct can also be reduced to the required value by placing a charge of a certain magnitude and sign directly on one of the plaquette ions. At the same time, it is important to note that adding a localized negative charge to the cation, as well as a localized positive charge to the anion, will lead to the same result -a local decrease in |EM|. Consider the mechanism of formation of CT plaquettes in real structures ( For the geometry of trion complexes in other HTSC compounds, see [12]. Note that if we place two charges ~|e/4| (for example, from two doped carriers), then the gap ct will disappear, and this will lead to the spread of the doped carrier beyond the first group orbital. In this case, these charges will be distributed over the group orbital of 8 ions following the nearest ones and induce on them the charges ~|e/4|, sufficient for the formation of 8 CT plaquettes. Such a situation, in particular, takes place in Ba(Fe1-xNix)2As2, where the Ni ion dopes two electrons. Therefore, despite the close relationship between Ba(Fe1-xNix)2As2 and Ba(Fe1-xCox)2As2, the geometry of their trion complexes is significantly different (Fig. 2 e, f), which causes the difference in the phase diagrams [13]. This, in our opinion, is the essence of heterovalent doping, which is accompanied with the formation of trion complexes. The geometry of such a complex can be determined for each compound, proceeding only from the knowledge of dopant position and surrounding symmetry [12,13]. Figure 2 shows examples of trion complexes in some cuprates and ferropnictides. The interaction between these trion complexes corresponds to repulsion, which promotes the ordering of dopants and related trions into square lattices with the parameter l depending on the concentration. As the concentration increases, СT plaquettes in neighboring trion complexes begin to bind into a network through 1 or 2 common ions, which is accompanied by the formation of clusters of СT plaquettes characterized by a certain value of l. In a certain range of concentrations, individual clusters of CT plaquettes with the same parameter l form a percolation CT cluster, and it is this cluster that we identify with the HTSC phase [12]. In such a cluster, two-particle transitions back and forth between two single-particle states (p-electron + d-hole) on the one hand and an excitonic two-particle state (d-electron + phole) on the other side become possible. Therefore, the state of electron in the CT cluster can be considered as a superposition of the band and exciton states. Formation of CT clusters under isovalent doping In the case of isovalent doping in ferropnictides, the role of an additional charge that affects the value of ct in the surrounding plaquettes is played by a change in the electron density near As 8 anions or Fe cations due to the difference in the ionic radii of the dopant and the matrix ion. 
Thus, in BaFe2(As1-xPx)2, the replacement of the As 3ion by a P 3ion with a smaller ionic radius is equivalent to a decrease in the negative charge near four surrounding Fe ions, which reduces the gap ct for the transfer of an electron to these Fe ions from the nearest As ions to the value ∆ ct * (Fig. 3a). We will mark such Fe ions as Fe'. Since such a decrease in the gap ct due to the difference in ionic radii is expected to be less than with heterovalent doping, it is possible to place two P ions next to the Fe ion in order to further reduce the gap to a value of 0<∆ * * <∆ * . Such Fe ions, whose neighbors are two P ions, we will denote Fe''. For the formation of AsFe'4 (or AsFe''4) CT plaquettes (shaded in brown), it is necessary that the As ion be surrounded by four Fe' (or Fe'') ions. Therefore, as can be seen from symmetry elements of the matrix crystal (the Curie principle) [13]. This is confirmed by the appearance at these x values of sharp minima in the dependence of the London penetration depth on doping [18,19]. When the concentration deviates from these special values, the sizes of clusters with this type of ordering rapidly decrease, since this order does not meet the average concentration for homogeneous doping. Thus, in BaFe2(As1-xPx)2, the concentration range corresponding to the existence of CT clusters (and hence the position of the superconducting dome) is in the range 0.2<x0.5 [19,20]. In the case of Ba(Fe1-xRux)2As2, the replacement of the Fe ion by isovalent Ru with a large ionic radius is equivalent to an increase in the negative charge near 4 surrounding As ions, which reduces the gap for electron transfer from these As ions to neighboring Fe ions (Fig. .3e). We will denote such As ions as As', and As ions whose neighbors are two Ru ions, we will denote As''. At Ru concentration x>0.2, clusters of FeAs'4 CT plaquettes will be formed, and at larger x, FeAs''4 clusters begin to predominate. Therefore, the range of Ru concentrations corresponding to the existence of CT clusters in the phase diagram of Ba(Fe1-xRuxAs)2 coincides with the same range of P concentrations in BaFe2(As1-xPx)2 ( Fig. 3a-d). As is known, isovalently doped superconducting compounds have been found only in the series of ferropnictides. Is the same doping possible in cuprates? Due to the symmetry of the basal plane, this type of doping can only be achieved by replacing Cu with an ion of a larger radius (for example, Ag). However, due to the large difference between the radii of Cu and Ag, it is impossible to carry out substitution in the required proportion (~50%). At the same time, it is known that the substitution of 5% Cu for Ag in nonsuperconducting La2CuO4 actually results in the formation of a superconducting phase with Tc = 28 K [22]. Heitler-london centres The As shown in [12,13], the range of dopant concentrations corresponding to the superconducting dome coincides with the interval where the existence of CT clusters, which form the superconducting phase, is possible. These clusters can be percolation (optimal phase) or represent a Josephson medium, where these clusters are immersed in a non-superconducting phase (the insulator is in underdoped cuprates, the AFM metal is in underdoped ferropnictides, and the metal is in overdoped cuprates and ferropnictides). For the formation of a CT plaquette, it is necessary that a localized excess charge of a certain magnitude and sign, coming from the nearest dopants, be located near its cation or anion. 
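As a rough plausibility check of the isovalent-doping picture for BaFe2(As1-xPx)2 described above, one can ask how common Fe' sites (here taken as Fe with exactly one P neighbour) and Fe'' sites (exactly two P neighbours) would be if the P substitution were purely random. This toy estimate is ours, not the paper's: the paper relies on dopant ordering, and the independence assumption below ignores the fact that adjacent Fe ions share As/P neighbours. Even so, the naive plaquette fraction becomes appreciable precisely in the 0.2 < x < 0.5 window quoted in the text.

```python
# Toy plausibility check (not the paper's ordered-lattice construction):
# fractions of Fe'/Fe'' sites under random, uncorrelated P substitution,
# and a crude "independent neighbours" estimate of AsFe'4-type plaquettes.

from math import comb

def p_exactly_k(x, k, n=4):
    """Probability that a given Fe ion has exactly k P ions among its n=4
    As/P neighbours, assuming independent random substitution at fraction x."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

for x in (0.1, 0.2, 0.3, 0.5):
    p_fe1 = p_exactly_k(x, 1)          # "Fe'"  : one P neighbour
    p_fe2 = p_exactly_k(x, 2)          # "Fe''" : two P neighbours
    p_plaquette = (p_fe1 + p_fe2)**4   # crude: 4 independent Fe'/Fe'' around one As
    print(f"x = {x:4.2f}: P(Fe') = {p_fe1:.2f}, P(Fe'') = {p_fe2:.2f}, "
          f"naive P(AsFe'4-type plaquette) = {p_plaquette:.2f}")
```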
The value of Tc in a superconducting cluster (percolation or Josephson coupled) is determined by the concentration of CT plaquettes. In the overdoping range, the number of CT plaquettes decreases with concentration due to an increase in the number of plaquettes exposed to several dopants, which leads to vanishing of the /delta_ct in them and a gradual transition to the metal phase. Therefore, the range of existence of superconductivity in the phase diagram has the shape of a dome, where Tc decreases at concentrations below and above the optimal doping range. Naturally, the formation of large percolation clusters of HL centres, as well as large CT clusters, is possible only with an ordered arrangement of trion complexes (or dopants). As was shown in [13], this is facilitated by their ordering into lattices with a symmetry that retains the main symmetry elements of the matrix crystal. In the CT cluster, doping reduces the gap Δct to ∆ ct * <Eex ( fig. 5 a,b). As a result, some electrons from the p-band (O 2-, As 3-) and holes from the d-band (Cu + , Fe 2+ ) pass into the excitonic-bound states d + (p -) and p -(d + ) (Fig. 5c). Here d + (p -) is the state of a d-electron on Cu or Fe, in the same plaquette with which there is a p-hole on O or As, respectively, and similarly p -(d + ) is the state of a p-hole on O or As, in one plaquette with which there is a d-electron on Cu or Fe, respectively. Such a transformation of the band structure of the CT cluster corresponds to its transition to the excitonic insulator state, where two-particle transitions to and from become possible between two one-particle states (p electron + d hole) on the one hand, and an excitonic two-particle state (d electron + p hole) on the other hand. Thus, in both cuprates and ferropnictides, in a certain range of dopant concentrations, doping produces clusters of the CT-excitonic insulator in the CuO2 or FeAs planes. The sizes of these clusters depend on the concentration, which determines their ability to be ordered into lattices with a various symmetry [13]. Note that the transition of the system to the state of an excitonic insulator justifies the simplified description of the band structure, since such a transition couples single-particle states and forms a gap at the Fermi level. It should be especially noted that at a temperature lower than the localization temperature of doped carriers Tloc the doping does not in itself give additional free carriers, but, on the contrary, leads to the transition of the CT cluster to the state of an excitonic insulator. Nevertheless, as is 13 known, free carriers are present in the system, although their concentration, according to Hall measurements, decreases with temperature. What is the mechanism of free carrier generation in this model? The point, in our opinion, is the presence of HL centres, which, as we will see, play the role of acceptors (donors) in this case, and only for this reason doping is accompanied by the generation of additional free carriers. Let's consider this mechanism in more detail. As we noted above, two CT plaquettes of the CT cluster centreed on the nearest cations or anions represent a HL centre where two electrons (two holes) on the central ions form a bound state with two holes (electrons) on the orbital of outer ions. In this case, bound pairs of electrons d + (p -) on the central ions of two adjacent plaquettes occupy an electron pair level (Fig. 5d). 
Similarly, bound pairs of holes p -(d + ) on the central ions of two adjacent plaquettes occupy a hole pair level (Fig. 5e). Continuing the analogy with hydrogen molecule, we note that, in addition to H2 molecule, there is also a bound state of two protons and one electron, namely, H 2 + ion. In our case, this Thus, in the case of cuprates and ferropnictides, the ionization of filled HL centres is the mechanism of generation of free carriers in the system. The concentration of free carriers n will be determined from the condition of equality of chemical potentials for electron (hole) pairs at the pair level and holes (electrons) in subbands [24]. For doping levels that are not too high, the concentration of additional free carriers arising due to the population of the pair level will depend on temperature as nT [15]. We note that just such a dependence is observed in cuprates, where the existence of carriers of only such a nature is possible [25,26]. As HL centres centred on Cu cations (Fig. 4 a) are possible in cuprates, carriers in them at any (electron or hole) type of doping (Fig. 2 a-c) will always be holes (Fig. 5 f). This explains the transition to the hole type of conduction observed in electron-doped cuprates as the temperature decreases below the localization temperature of doped carriers [27][28][29]. Different cases are possible in ferropnictides, depending on which plaquettes (AsFe4 or FeAs4) form the cluster of HL centers. In the former case (Fig. 5 f), holes will form; in the latter 14 ( Fig. 5 g), electrons. It is obvious that the sign of the emerging carriers will not depend on the configuration of the HL centre, i.e. on whether the CT plaquettes have one or two common ions in the HL centre (Fig. 4 b,c). Since dopants form AsFe4 CT clusters in almost all known ferropnictides, free electrons are generated in them. And only in Ba(Fe1-xRuxAs)2, where dopants form FeAs4 CT clusters, such carriers are holes [2,30]. To confirm this conclusion, let us consider the hole-doped compound Ba1-xKxFe2As2. In this compound, each doped hole is distributed over 4 nearest As ions [12], forming a trion complex that includes four CT AsFe4 plaquettes and, therefore, according to Fig. 5g, additional electrontype carriers should be generated. Indeed, it can be observed experimentally that the contribution of electron carriers becomes noticeable at low temperatures, when doped holes are localized [31][32][33], and also at high temperatures, when the concentration of free electrons generated due to ionization of HL centres becomes sufficiently high [34]. In the first case, this contribution manifests itself in the suppression of the growth (and even decrease) of the Hall constant at T<100 K [31][32][33], and in the second case, in the change of sign of the transverse thermal conductivity to negative at T>150K [34]. On a possible mechanism of superconducting pairing Since the states of electrons in a CT cluster can be considered as a superposition of band and exciton states, it is natural to associate the pairing mechanism with the formation of a virtual biexciton bound state at the HL centre. In this case, the basal planes of doped cuprates and ferropnictides can be considered as another type of structures in addition to one-dimensional Little chains [35] and Ginsburg "sandwiches" [36], where the excitonic mechanism of superconductivity can be realized. 
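The Hall-sign argument above (minority carriers generated by ionization of HL centres partially compensating, and eventually dominating, the doped carriers) can be illustrated with the textbook low-field two-carrier expression R_H = (p mu_h^2 - n mu_e^2) / [e (p mu_h + n mu_e)^2]. The formula is standard; the densities and mobilities below are invented purely for illustration and are not fitted values for any real compound.

```python
# Illustration only: textbook low-field two-band Hall coefficient, showing
# how a second carrier species of opposite sign can suppress and eventually
# flip the sign of the measured Hall constant.  All numbers are invented.

E_CHARGE = 1.602e-19  # C

def hall_coefficient(p, n, mu_h, mu_e):
    """Two-band Hall coefficient in m^3/C (p, n in m^-3; mobilities in m^2/Vs)."""
    num = p * mu_h**2 - n * mu_e**2
    den = E_CHARGE * (p * mu_h + n * mu_e)**2
    return num / den

p_holes = 1e26              # hypothetical hole density from HL-centre ionization
mu_h, mu_e = 0.002, 0.004   # hypothetical mobilities

for n_electrons in (0.0, 2e25, 5e25, 1e26):  # hypothetical doped-electron density
    r_h = hall_coefficient(p_holes, n_electrons, mu_h, mu_e)
    print(f"n_e = {n_electrons:.1e} m^-3 -> R_H = {r_h:+.2e} m^3/C")
```

Because the two contributions enter with opposite signs and different weights, a temperature-dependent population of HL-centre-generated carriers of one sign can change the apparent carrier type seen in Hall data, which is the qualitative behaviour invoked above for electron-doped cuprates and Ba1-xKxFe2As2.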
This interaction will be effective if the electronic (hole) pair levels of HL centres are not occupied by real electrons (holes), and therefore are vacant to scattering processes with the formation of a virtual biexciton state. Since the process of populating HL centres with real electrons (holes) is accompanied by the generation of free carriers, the efficiency of the interelectronic interaction due to their scattering on HL centres will increase simultaneously with a decrease in the concentration of free carriers n, which depends on temperature as nT [12,23]. According to the above said, the noncoherent transport via a CT cluster at T > 0 is performed by carriers emerging at the occupation of paired HL centres. At T0 their concentration n0, and at T = 0 no noncoherent transport will be possible due to the absence of free carriers. At the same time, such a system in which an electron and a hole are always present in a CT plaquette permits a coherent transport, when all carriers of the same sign move coherently, as a single whole, e.g., as a superconducting condensate. The latter is possible at a superconducting pairing which emerges due to the formation of a bound state of two electrons (holes) scattered into paired states of an HL centre and two holes (electrons) remaining automatically on the surrounding ions of plaquette. Specific features of LiFeAs As noted earlier, in the majority of doped HTSC cuprates and ferropnictides, superconducting clusters of HL centres do not fill the entire base plane, but form a Josephson medium Only in some compounds where doped carriers are localized outside the basal plane, this entire plane can be a percolation cluster of HL centres. A special place among HTS compounds is "111" type ferropnictide LiFeAs, which is a superconductor in the absence of doping. In accordance with the proposed model, this means that in this compound (if the possibility of self-doping is excluded), the condition ct<Eex is initially satisfied, as a result of which the entire base plane can be considered as a single CT cluster. In support of this statement, we can cite the results of experiments on heterovalent doping of another compound of the 111 type -NaFeAs(Co), which we will compare with the results of similar experiments on a compound of the 122 type -Ba(Fe1-xNix)2As2. In contrast to LiFeAs, in NaFeAs superconductivity (filamentary type) in the undoped state is observed sporadically, with Tc in the range from 0 to 10 K [36][37][38][39]. However, at P > 4 GPa, bulk superconductivity is already observed for all samples with Tc above 30 K. Similarly, bulk superconductivity is consistently observed in Co-doped NaFeAs at x>0.01 with a maximum Tc ~20 K in the range of 0.02<x<0.04 [36]. At the same time, doping with Co in LiFeAs only monotonically lowers Tc [40]. Based on these results, we believe that in undoped NaFeAs the gap ctEex, i.e. slightly exceeds the value of ct in LiFeAs. Therefore, small changes in the lattice parameters or deviations from stoichiometry in NaFeAs can locally change the ratio ctEex in one direction or another, leading to the appearance or disappearance of filamentary superconductivity. Within the framework of the model under consideration, doped electrons reduce the gap ct in plaquettes at the outer boundary of their localization area to the value of 0<∆ ct * <Eex. In this case, if ct is reduced throughout the crystal, then the doped carrier localization area, on the boundary of which the condition 0<∆ ct * <Eex is satisfied, will expand. 
In this case, inside the carrier localization area *=0. Therefore, in the initially superconducting LiFeAs, where ct<Eex takes place, we deal with a gradual decrease in superconductivity upon doping and its complete disappearance at some x, when the percolation threshold over the doped carrier localization areas is reached. In the case of initially non-superconducting NaFeAs, where, according to the assumption, ct~Eex, doping with Co reduces the value of 0 in plaquettes at the outer boundary of the doped carrier localization area to 0<∆ ct * <Eex and thus forms superconducting phase clusters from them [13]. To verify this statement, let us compare the phase diagrams of NaFe1-xCoxAs [36,37,41] and Ba(Fe1-xNix)2As2 [42]. It is easy to see that the positions of the superconducting domes on them practically coincide despite the fact that a Ni dopant brings two extra electrons to FeAs plane, in contrast to a Co dopant that introduces one extra electron [43]. According to the model under consideration [13], the positions of domes and features of the phase diagrams of cuprates and ferropnictides with heterovalent doping are entirely determined by the geometry of trion complexes (Fig. 2). Based on this, the coincidence of the phase diagrams of NaFe1-xCoxAs and Ba(Fe1-xNix)2As2 means that the trion complexes formed in them are the same and have the form shown in Fig. 2f. To make sure that the similarity of the phase diagrams of NaFe1-xCoxAs and Ba(Fe1-xNix)2As2 is due to the same geometry of the trion complexes formed in them, let us pay attention to the presence of features in both phase diagrams at two identical values x=0.027 and 0.032 [37 (Fig.4), 45 (Fig.3b)]. The appearance of features at given values of x, corresponding to x=1/36 and x=1/32, is due to the formation of large ordered clusters from trion complexes with a given geometry, repeating the symmetry of the matrix [13]. For Ba(Fe1-xNix)2As2, the geometry of the trion complex (Fig. 2f) is determined by the fact that 2 doped electrons from Ni completely suppress  for the nearest belt of FeAs4 plaquettes (in contrast to Ba(Fe1-xCox)2As2 in Fig. 2e) and their localization area expands until the excess charge on the Fe ions of the next coordination sphere becomes equal to e/4. The similarity of the phase diagrams of NaFe1-xCoxAs and Ba(Fe1-xNix)2As2 means that in NaFe1-xCoxAs one doped electron from the Co ion completely suppresses  for the nearest FeAs4 belt of plaquettes, and the area of its localization expands until the excess charge on the Fe ions from the next coordination sphere will not become equal to e/8, which will correspond to the condition 0<∆ ct * <Eex. It does mean, the value of 0 in undoped NaFeAs is much smaller than in other classes of heterovalently doped ferropnictides. This is all the more true in LiFeAs, where 0 is smaller than in NaFeAs. That confirms our assumption that in LiFeAs the condition ct<Eex is realized without doping, i.e. the entire base plane can be considered as a single CT cluster. As previously noted, the formation of CT clusters from AsFe4 plaquettes is accompanied by the formation of additional electron carriers, while the formation of CT clusters from FeAs4 plaquettes leads to the appearance of hole carriers (Fig. 5). This raises the question of the type of carriers in LiFeAs compound where the basal plane can be considered as filled with small CT clusters of various types of CT plaquettes either AsFe4 or FeAs4. Herewith, carriers of both types should be generated. 
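A two-line arithmetic check of the commensurate concentrations invoked above is given below; the association of these fractions with particular ordered dopant superlattices is the paper's argument, only the numbers are verified here.

```python
# The "special" doping levels quoted in the text as x = 0.027 and x = 0.032
# are, to the stated precision, the commensurate fractions below
# (1/8 is the "magic" concentration mentioned earlier for cuprates).
for label, value in (("1/36", 1 / 36), ("1/32", 1 / 32), ("1/8", 1 / 8)):
    print(f"{label} = {value:.4f}")
# prints 0.0278, 0.0312 and 0.1250
```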
The small size of the clusters prevents the formation of a large coherent cluster in which moving carriers scatter into pair states (either electron or hole). This can explain the relatively low Tc = 17 K in this compound and the observed two-phase nature within the coherence area [45]. Conclusions Thus, in this work, we propose a unified mechanism of the transformation of the electronic structure of cuprates and ferropnictides upon heterovalent and isovalent doping. In this representation, in undoped cuprates and ferropnictides, which initially have different electronic structures (Mott insulator and AFM semimetal), local doping forms hydrogen-like CT plaquettes, from which, in a certain doping range, percolation CT clusters are formed that have the electronic structure of an excitonic insulator. The most important property of a CT cluster is that each pair of adjacent CT plaquettes in it is an HL center resembling an intracrystalline hydrogen molecule, on which two electrons and two holes can form a bound state. In such a CT cluster, free carriers are generated as a result of partial ionization of filled HL centers. The sign of the generated free carriers in the general case does not coincide with the sign of the doped carriers, but is determined by the type of CT plaquettes that make up the HL center.
Hematological effects of Coenzyme Q10 in streptozotocin-induced diabetic rats Aim: This study was conducted to evaluate the effects of Coenzyme Q10 on some hematological parameters in streptozotocin-induced diabetic rats. Materials and Methods: In the experiment, 38 healthy, adult male rats were divided into five groups. Group 1 was fed standard rat pellets for four weeks. Group 2 was adminis-tered with 0,3 ml corn oil IP daily for four weeks, group 3 was injected IP with 10 mg/kg CoQ10 daily for four weeks, group 4 was made diabetic by SC injections of 40 mg/kg streptozotocin as single daily dose for two days, group 5 was made diabetic by SC injections of streptozotocin in the same way and then was injected IP with 10 mg/kg CoQ10 daily for four weeks. In blood samples received from all animals, red blood cells (RBC) and white blood cells (WBC) counts, hemoglobin amount, hematocrit value (PCV), differential leucocyte counts and mean cell volume (MCV), mean corpuscular hemoglobin (MCH), mean corpuscular hemoglobin concentration (MCHC) were determined. Results: In diabetic rats, RBC, hemoglobin and hematocrit values were determined to be lower (p<0.05) than the control group. CoQ10 application to diabetic animals increased (p<0.05) hematocrit value compared diabetic group. Experimentally induced diabetes caused significantly (p<0.05) increments in WBC count and neutrophil percentage, while lymphocyte percentage was decreased (p<0.05) compared to control group. CoQ10 treatment to the diabetic group markedly (p<0.05) decreased the neutrophil percentage and increased the lymphocyte percentage compared to diabetic group. Conclusion: These results showed that CoQ10 treatment to diabetic rats may have some beneficial properties on hematological parameters negatively affected from diabetic injury. Introduction Diabetes mellitus (DM) is a metabolic disorder with different etiologies. It is characterized by insufficiency to regulate blood glucose level caused by relative or absolute deficiency in insulin. Diabetes mellitus may occur as a result of pancreatic β-cells dysfunction, which causes reduction in insulin secretion. The disease could also occur due to the insulin receptors resistant to the functions of circulating insulin (ADA 2010). Repetitive or obstinate hyperglycemia during diabetes causes formation glycated body proteins. This effect leads to secondary complications affecting eyes, kidneys, nerves and arteries (Sharma 1993, Aladodo et al. 2013. Erythrocytes (RBC) are seriously affected by hyperglycemia. It was reported these effects of hyperglycemia such as altered membrane lipid composition, reduced filterability, glycated hemoglobin, and the accumulation of advanced glycosylation end-products on the membrane (Schmid-Schönbein and Volger 1976, Wautier et al. 1994, Labrouche et al. 1996, Manodori and Kuypers 2002. In diabetes, reduced hemoglobin has been reported (Mansi 2006) and it may be associated with the decrease in the red blood cell count and packed cell volume (Moss 1999, Muhammad and Oloyede 2009, Aladodo et al. 2013. Coenzyme Q10 (CoQ10) has been focused as a dietary supplement capable of influencing cellular bioenergetics and counteracting on some damage caused by free radicals (Linnane et al. 2002, Butler et al. 2003, Rosenfeldt et al. 2003, Zhou et al. 2005. CoQ10 is known as a vitamin like and fat-soluble substance existing in all cells (Crane 2001). 
It is closely involved in various activities, including the transfer of electrons within the mitochondrial oxidative respiratory chain and ATP production. Further, it acts as (1) an essential antioxidant that supports the regeneration of other antioxidants, (2) a factor affecting the stability, fluidity and permeability of membranes, and (3) a stimulator of cell growth and inhibitor of cell death (Niki 1997, Crane 2001, Cooke et al. 2008). Hence, the objective of the present study was to investigate the hematological effects of CoQ10 in streptozotocin-induced diabetic rats. Materials and Methods We used 38 healthy, adult male Wistar Albino rats. The animals were kept in individual cages during the four-week experiment and allowed free access to water and standard pellets. Diabetes was induced by SC injection of streptozotocin (Sigma-Aldrich, St. Louis, MO, USA) at a dose of 40 mg/kg in 0.1 M citrate buffer (pH 4.5), given as a single daily dose for two days. To prevent streptozotocin-induced hypoglycemia, rats received 5% dextrose solution 6 h after streptozotocin administration for the next 3 days. After 1 week, induction of diabetes was verified by measuring the blood glucose level with strips and a glucometer (PlusMED Accuro, Taiwan) via the tail vein. Animals whose blood glucose level was higher than 250 mg/dL were considered diabetic and included in the experiments. The mean weights of all groups were similar. The rats were divided into five groups. Group 1 (n=6) was fed standard rat pellets for four weeks; group 2 (n=6) was administered 0.3 ml corn oil IP daily for four weeks; group 3 (n=6) was injected IP with 10 mg/kg CoQ10 (Sigma-Aldrich, St. Louis, MO, USA) daily for four weeks; group 4 (n=7) was made diabetic by SC injections of streptozotocin at a dose of 40 mg/kg in 0.1 M citrate buffer (pH 4.5) as a single daily dose for two days; group 5 (n=9) was made diabetic by SC injections of streptozotocin in the same way and then injected IP with 10 mg/kg CoQ10 daily for four weeks. During the experiment, three animals from group 4 and one animal from group 5 died due to streptozotocin-induced hypoglycemia. At the end of the study, blood samples were taken from all animals. In these blood samples, red blood cell (RBC) and white blood cell (WBC) counts, hemoglobin amount, hematocrit value (PCV), differential leucocyte counts, mean cell volume (MCV), mean corpuscular hemoglobin (MCH) and mean corpuscular hemoglobin concentration (MCHC) were determined using an automated cell counter (Abbott Cell Dyn 3700, Chicago, USA). The Ethical Committee of Selcuk University Experimental Medicine Research and Application Center (Report no. 2015-50) approved the study protocol. The data were analyzed using one-way ANOVA (SPSS 17). Differences among the groups were determined by Duncan's multiple range test. Differences were considered significant at p<0.05. Results In streptozotocin-induced diabetic rats, RBC, hemoglobin and hematocrit values were determined to be significantly lower (Table 1, p<0.05) than in the control group. There were no changes in the same parameters with CoQ10 treatment alone compared to the control group. CoQ10 application for four weeks to diabetic animals alleviated the reduction in some hematologic parameters compared with the diabetic group levels. The change in hematocrit value was significant (Table 1, p<0.05), although the value remained different from the control group level.
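The statistical workflow described in the Methods (one-way ANOVA followed by Duncan's multiple range test in SPSS 17) can be reproduced with open-source tools roughly as sketched below. The hematocrit values in the snippet are invented placeholders, not the study's measurements, and Duncan's test is not available in scipy/statsmodels, so Tukey's HSD is shown as a stand-in post-hoc comparison.

```python
# Sketch of the analysis described in the Methods: one-way ANOVA across the
# five groups, followed by a post-hoc multiple-comparison test.
# Group values below are invented placeholders for illustration only.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":        [44.1, 45.3, 43.8, 44.9, 45.0, 44.4],
    "oil":            [44.5, 43.9, 44.8, 45.1, 44.2, 44.7],
    "CoQ10":          [44.8, 45.2, 44.1, 44.6, 45.4, 44.9],
    "diabetes":       [38.2, 37.5, 39.0, 38.8, 37.9, 38.4, 38.1],
    "diabetes+CoQ10": [41.5, 42.0, 40.8, 41.9, 41.2, 42.3, 41.7, 41.0],
}

f_stat, p_value = f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.3g}")

values = np.concatenate(list(groups.values()))
labels = np.concatenate([[name] * len(v) for name, v in groups.items()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # post-hoc pairwise test
```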
Discussion The reductions in RBC, hemoglobin and hematocrit levels of the diabetic group compared to the control group seem consistent with other studies that used a similar experimental diabetes model (Mansi 2006, Aladodo et al. 2013). These changes are attributed to oxidative stress, a shortened erythrocyte lifespan and bone marrow suppression. It has been suggested that the anemia resulting from diabetes mellitus is due to peroxidation of membrane lipids, decreased membrane fluidity, oxidation of glycosylated membrane proteins and hemolysis of RBC (Kennedy and Baynes 1984, Bakan et al. 2006, Mansi 2006). In this study, CoQ10 treatment of diabetic animals alleviated the reduction in some hematologic parameters compared with the diabetic group levels. The change in hematocrit value was significant (Table 1, p<0.05) in the diabetic group treated with CoQ10. It is well known that elevated glucose causes oxidative stress as a result of increased production of reactive oxygen species (ROS), nonenzymatic glycation of proteins and glucose autoxidation in diabetes mellitus (Brownlee 2001, Modi et al. 2006). [Tables 1 and 2 appear here. Table 2: Mean WBC and differential leukocyte counts (Mean ± SE). In both tables, the difference between mean values with different superscripts in the same column is significant for each parameter, p<0.05. Group 1, control; group 2, oil; group 3, CoQ10; group 4, diabetes; group 5, CoQ10 and diabetes.] This protective effect of CoQ10 may be based on its antioxidant properties. CoQ10 is recognized as a powerful systemic radical scavenger (Prosek et al. 2008). Moreover, there are many findings related to the antioxidant effects of CoQ10 in diabetes mellitus. Niklowitz et al. (2004) found that CoQ10 is a greater antioxidant than Vitamin E. In another study, it has been suggested that CoQ10 enhances the availability of other antioxidants such as Vitamin E, Vitamin C and beta-carotene (Shekelle et al. 2003). The beneficial effect of CoQ10 on hematological parameters may result from decreased lipid peroxidation, inhibition of certain enzymes involved in the formation of free radicals, blocking of oxidative injury to DNA and reduced glycation of membrane proteins (Modi et al. 2006, Prosek et al. 2008). Further, Ahmadvand et al. (2012) reported that CoQ10 reduced lipid peroxidation and enhanced antioxidant enzyme activities (SOD, GSH, CAT) in experimentally diabetic rats. Quinzii et al. (2010) stated that reactive oxygen species, oxidative stress and cell death are strictly correlated with CoQ10 deficiency. On the other hand, experimentally induced diabetes led to significant (p<0.05) increases in WBC count and neutrophil percentage and a decrease in lymphocyte percentage compared to the control group (Table 2). In the diabetic group treated with CoQ10, the neutrophil percentage and the relative lymphocyte percentage were found to be lower and higher (Table 2, p<0.05), respectively, than in the diabetic group. In the same group, the WBC count tended to decline, but this change was not significant compared to the diabetic group. A positive association between increased levels of inflammatory markers (WBC count, CRP, and inflammatory cytokines) and diabetes incidence was determined in several studies (Gkrania-Klotsas et al. 2010, Twig et al. 2013). Tong et al. (2004) reported that an elevated WBC count, even within the normal range, is associated with macro- and microvascular complications in diabetes.
It has been noted that higher WBC count in diabetes reflects an inflammation and the other tissue complications (Tong et al. 2004, Twig et al. 2013. The increase neutrophil percentage by diabetes in this study is important regarding to show immune response against causal effects of diabetes. This result seems in coherent with above studies that noted about the same data obtained from diabetic subject. Although the decrease in WBC count is not important, significantly decrease in neutrophil percentage may be evaluated as a result of CoQ10 treatment's beneficial effect. There are many acknowledgements about anti-inflammatory properties of CoQ10. It was reported that CoQ10 administration in several doses decreased WBC count and inflammatory cytokines in human and animals (Ahmadvand et al. 2012, Abdollahzad et al. 2015. In the study conducted in animals, it has been suggested that CRP level were reduced with CoQ10 application (Devadasu et al. 2011). Above findings have supported our results obtained from CoQ10 administration in diabetic animals. Conclusion In the light of our results, it can be said that CoQ10 supplementation has some ameliorative effects in respect to at least hematological parameters in streptozotocin-induced diabetic rats.
Two Time Physics with a Minimum Length We study the possibility of introducing the classical analogue of Snyder's Lorentz-covariant noncommutative space-time in two-time physics theory. In the free theory we find that this is possible because there is a broken local scale invariance of the action. When background gauge fields are present, they must satisfy certain conditions very similar to the ones first obtained by Dirac in 1936. These conditions preserve the local and global invariances of the action and leads to a Snyder space-time with background gauge fields. Introduction Two-Time Physics [1,2,3,4,5,6,7] is an approach that provides a new perspective for understanding ordinary one-time dynamics from a higher dimensional, more unified point of view including two time-like dimensions. This is achieved by introducing a new gauge symmetry that insure unitarity, causality and absence of ghosts. The new phenomenon in two-time physics is that the gauge symmetry of the free two-time physics action can be used, by imposing gauge conditions, to obtain various different actions describing different free and interacting dynamical systems in the usual one-time physics, thus uncovering a new layer of unification through higher dimensions. An approach to the introduction of background gravitational and gauge fields in two-time physics was first presented in [7]. In [7], the linear realization of the Sp(2, R) gauge algebra of two-time physics is required to be preserved when background gravitational and gauge fields come into play. To satisfy this requirement, the background gravitational field must satisfy a homothety condition [7], while in the absence of space-time gravitational fields the gauge field must satisfy the conditions [7] X.A(X) = 0 (1.1a) In this paper we show how a set of subsidiary conditions very similar to (1.2) can be obtained in the classical Hamiltonian formalism for two-time physics. As in Dirac's original paper on the SO(4, 2)-invariant formulation of electromagnetism [8], the conditions we find in two-time physics are necessary for the SO(d, 2) invariance of the interacting theory. A new result in this work is that we show that these conditions are also necessary for a perfect match between the number of physical degrees of freedom contained in the (d + 2)-dimensional gauge field and the number of physical canonical pairs describing the dynamics in the reduced phase space. The paper is divided as follows. In the next section we review the basic formalism of two-time physics and show how the SO(d, 2) Lorentz generator for the free 2T action can be obtained from a local scale invariance of the Hamiltonian. Invariance under this local scale transformation of only the Hamiltonian reveals that two-time physics can also be consistently formulated in terms of another set of classical phase space brackets, which are the classical analogues of the Snyder commutators [9]. In 1947 Snyder proposed a quantized space-time model in a projective geometry approach to the de Sitter space of momenta with a scale θ at the Planck scale. In this model, the energy and momentum of a particle are identified with the inhomogeneous projective coordinates. Then, the space-time coordinates become noncommutative operatorsx µ given by the "translation" generators of the de Sitter (dS) algebra. Snyder's space-time has attracted interest in the last few years in connection with generalizations of special relativity. 
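Since the Snyder commutators are referred to repeatedly below without being written out at this point, it may help to recall their commonly quoted form (with hbar = 1 and a the minimal length; sign and metric conventions vary between papers, and this is the textbook version rather than an expression copied from the present work):

\[
[\hat{x}^{\mu},\hat{x}^{\nu}] = i\,a^{2}\,\hat{J}^{\mu\nu},\qquad
[\hat{x}^{\mu},\hat{p}^{\nu}] = i\left(\eta^{\mu\nu} + a^{2}\,\hat{p}^{\mu}\hat{p}^{\nu}\right),\qquad
[\hat{p}^{\mu},\hat{p}^{\nu}] = 0,
\]

where \(\hat{J}^{\mu\nu}\) are the Lorentz generators. In the classical discussion that follows, the analogous statements appear as Snyder brackets, with commutators replaced by Poisson-type brackets.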
In particular, it was pointed out [10] that there is a one-to-one correspondence between the Snyder space-time and a formulation of de Sitter-invariant special relativity [11] with two universal invariants, the speed of light c and the de Sitter radius of curvature R. However, a particle moving in a de Sitter or Anti-de Sitter space-time with signature (d − 1, 1), where d is the number of spacelike dimensions, is only one of the many dual lower-dimensional systems that can be obtained by imposing gauge conditions on the free two-time physics action (see, for instance, ref. [4]). Furthermore, a Snyder space-time with signature (d − 1, 1) for a free massless relativistic particle has already been obtained from the (d, 2) spacetime of two-time physics by using the Dirac bracket technique, after imposing gauge conditions to reduce the gauge invariance of the free 2T action [12]. This shows that in the (d − 1, 1) space-time there are inertial motions and inertial observers in the Snyder space-time, giving a principle of relativity for dS/AdSinvariant special relativity. The results of this work may be used to suggest that the other universal invariant of dS/AdS special relativity, the radius of curvature R, can be interpreted as a very large integer multiple of the minimum spacelike length introduced by the Snyder commutators. In the treatment of [12], the appearance of Snyder's space-time in the reduced phase space of the Dirac brackets is a direct consequence of the fact that the gauge conditions break the conformal SO(1, 2) ∼ Sp(2, R) gauge invariance of the 2T action, leaving only τ -reparametrization invariance. Then a length scale induced by the Snyder commutators emerges in the resulting (d − 1, 1) space-time, leaving the global scale and conformal invariances of the gaugefixed action untouched. To preserve the powerful unifying properties of 2T physics, and retain the massless particle in a dS/AdS space-time as one of its gauge-fixed versions, it is then interesting to investigate the possibility of constructing a Snyder space-time with signature (d, 2), in which the Sp(2, R) gauge invariance and consequently the full duality properties of the 2T action would be preserved. In this work we take this task and show that it can be done while also explicitly preserving the global Lorentz SO(d, 2) invariance of the action. These developments are the content of section two. In section three we introduce interactions with a background gauge field by modifying the constraint structure of two-time physics according to the minimal coupling prescription to electrodynamic gauge fields. We show how a set of subsidiary conditions very similar to (1.2) emerge after requiring Sp(2, R) gauge invariance of the interacting theory and how these conditions lead to the same Snyder brackets we found in the free theory. Some concluding remarks appear in section four. Two-time Physics The central idea in two-time physics [1,2,3,4,5,6,7] is to introduce a new gauge invariance in phase space by gauging the duality of the quantum commutator [X M , P N ] = iη MN . This procedure leads to a symplectic Sp(2,R) gauge theory. To remove the distinction between position and momentum we set X M 1 = X M and X M 2 = P M and define the doublet is a symmetric matrix containing three local parameters and ǫ ij is the Levi-Civita symbol that serves to raise or lower indices. 
The Sp(2, R) gauge field A ij is symmetric in (i, j) and transforms as The covariant derivative is An action invariant under the Sp(2, R) gauge symmetry is After an integration by parts this action can be written as and the canonical Hamiltonian is The equations of motion for the λ's give the primary [13] constraints and therefore we can not solve for the λ's from their equations of motion. The values of the λ's in action (2.4b) are arbitrary. Constraints (2.6)-(2.8), as well as evidences of two-time physics, were independently obtained in [14]. We have introduced the weak equality symbol ≈. This is to emphasize that constraints (2.6)-(2.8) are numerically restricted to be zero on the submanifold of phase space defined by the constraint equations, but do not identically vanish throughout phase space [15]. This means, in particular, that they have nonzero Poisson brackets with the canonical variables. More generally, two functions F and G that coincide on the submanifold of phase space defined by constraints φ i ≈ 0, i = 1, 2, 3 are said to be weakly equal [15] and one writes F ≈ G. On the other hand, an equation that holds throughout phase space and not just on the submanifold φ i ≈ 0, is called strong, and the usual equality symbol is used in that case. It can be demonstrated [15] that If we consider the Euclidean, or the Minkowski metric as the background space-time, we find that the surface defined by the constraint equations (2.6)-(2.8) is trivial. The only metric giving a non-trivial surface, preserving the unitarity of the theory, and avoiding the ghost problem is the flat metric with two time-like dimensions [1,2,3,4,5,6,7]. Following [1,2,3,4,5,6,7] we introduce another space-like dimension and another time-like dimension and work in a Minkowski space-time with signature (d, 2). We use the Poisson brackets where M, N = 0, ..., d+1, and verify that constraints (2.6)-(2.8) obey the algebra These equations show that all constraints φ are first-class [13]. Equations (2.11) represent the symplectic Sp(2, R) gauge algebra of two-time physics. The 3parameter local symmetry Sp(2, R) includes τ -reparametrizations, generated by constraint φ 1 , as one of its local transformations, and therefore the 2T action (2.4) is a generalization of gravity on the worldline. It corresponds to conformal SO(2, 1) gravity on the worldling [4,14]. Since we have d + 2 dimensions and 3 first-class constraints, only d + 2 − 3 = d − 1 of the canonical pairs (X M , P M ) will correspond to true physical degrees of freedom. Action (2.4) also has a global symmetry under Lorentz transformations SO(d, 2) with generator [1,2,3,4,5,6,7] It satisfies the space-time algebra and is gauge invariant because it has identically vanishing brackets with the first-class constraints (2.6)- In one-time physics, a natural way to implement the notion of a minimum length [16,17,18,19,20] in theories containing gravity is to formulate these models on a noncommutative space-time. By a minimum length it is understood that no experimental device subject to quantum mechanics, gravity and causality can exclude the quantization of position on distances smaller than the Planck length [20]. It has been shown [21] that when measurement processes involve energies of the order of the Planck scale, the fundamental assumption of locality is no longer a good approximation in theories containing gravity. 
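The displayed constraint equations (2.6)-(2.8) and the generator (2.12) referred to above did not survive extraction here; for readability we quote the standard expressions used in two-time physics, consistent with the identification φ1 = ½P² made later in the text (readers should check the numbering against the original paper):

\[
\varphi_{1}=\tfrac{1}{2}\,P\!\cdot\!P\approx 0,\qquad
\varphi_{2}=X\!\cdot\!P\approx 0,\qquad
\varphi_{3}=\tfrac{1}{2}\,X\!\cdot\!X\approx 0,
\]

with the sp(2,R) Poisson algebra

\[
\{\varphi_{2},\varphi_{1}\}=2\varphi_{1},\qquad
\{\varphi_{2},\varphi_{3}\}=-2\varphi_{3},\qquad
\{\varphi_{1},\varphi_{3}\}=-\varphi_{2},
\]

and the SO(d,2) generator

\[
L^{MN}=X^{M}P^{N}-X^{N}P^{M},
\]

which has identically vanishing Poisson brackets with all three constraints.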
The measurements alter the space-time metric in a fundamental manner governed by the commutation relations [x µ , p ν ] = iη µν and the classical field equations of gravitation [21]. This in-principle unavoidable change in the space-time metric destroys the commutativity (and hence locality) of position measurement operators. In the absence of gravitation locality is restored [21]. This effect of a minimum length can be modeled by introducing a nonvanishing commutation relation between the position operators [22]. Let us now consider how the classical analogue of Snyder's noncommutative space-time can be made to emerge in two-time physics. To arrive at these classical Snyder brackets we use what can be considered as a broken local scale invariance of the free 2T action. This local scale invariance is a symmetry only of the 2T Hamiltonian. It is not a symmetry of the action because the kinetic termẊ.P in the Legendre transformation, giving the Lagrangian from the Hamiltonian, is not invariant under this local scale transformation. This is why we can introduce Snyder brackets in two-time physics and still preserve the original invariances of the action. Hamiltonian (2.5) is invariant under the local scale transformations where β is an arbitrary function of X M (τ ) and P M (τ ). Keeping only the linear terms in β in transformation (2.14), we can write the brackets for the transformed canonical variables. If we choose β = φ 1 = 1 2 P 2 ≈ 0 in equations (2.15) and compute the brackets on the right side using the Poisson brackets (2.10), we find the expressions We see from the above equations that, on the constraint surface defined by constraints (2.6)-(2.8), brackets (2.16) reduce to To impose φ 1 = 1 2 P 2 ≈ 0 strongly at the end of the computation of brackets (2.16), the expressions for the corresponding Dirac brackets should be used in place of the Poisson brackets. However, for the special case β = φ 1 = 1 2 P 2 ≈ 0 we can use the property [15] of the Dirac brackets that, on the first-class constraint surface, when G is a first-class constraint and F is an arbitrary function of the canonical variables. This justifies the use of Poisson brackets to arrive at (2.17). Now, keeping the same order of approximation used to arrive at brackets (2.15), that is, retaining only the linear terms in β, transformation equations (2.14a) and (2.14b) read Using again the same function β = φ 1 = 1 2 P 2 ≈ 0 in equations (2.19), we write them asX Equations (2.21) are obviously in the form (2.9) and so we can writẽ Using these weak equalities in brackets (2.17) we rewrite them as to emphasize that these brackets are valid only on the constraint surface defined by constraints (2.6)-(2.8). But, as we saw above, the non-trivial surfaces corresponding to constraints (2.6)-(2.8) require a space-time with signature (d, 2). Brackets (2.23) are the classical 2T equivalent of the Lorentz-covariant Snyder commutators [9], which were proposed in 1947 as a way to solve the ultraviolet divergence problem in quantum field theory by introducing a minimum space-time length. In the canonical quantization procedure, where brackets are replaced by commutators according to the rule [commutator] = i{bracket} the 2T brackets (2.23) will lead directly to a Lorentz-covariant noncommutative space-time for two-time physics, thus implementing the notion of a minimum length in the (d + 2)-dimensional space-time for this theory. The Snyder brackets (2.23) give an equally valid description of two-time physics at the classical level. 
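As a quick consistency check of the construction just described (our re-derivation; the overall sign depends on the conventions chosen for the scale transformation (2.14) and for L^{MN}), take the transformed variables to linear order in β, \(\tilde X^{M}=(1-\beta)X^{M}\) and \(\tilde P^{M}=(1+\beta)P^{M}\), and choose \(\beta=\varphi_{1}=\tfrac12 P^{2}\). Then

\[
\{\tilde{X}^{M},\tilde{X}^{N}\}
= -\,X^{M}\{\beta,X^{N}\}-X^{N}\{X^{M},\beta\}+O(\beta^{2})
= X^{M}P^{N}-X^{N}P^{M}+O(\beta^{2}),
\]

using \(\{\tfrac12 P^{2},X^{N}\}=-P^{N}\) and \(\{X^{M},\tfrac12 P^{2}\}=P^{M}\). On the constraint surface \(\beta=\tfrac12 P^{2}\approx 0\) the tilded and original variables coincide weakly, so the result can be read as a Snyder-type bracket \(\{X^{M},X^{N}\}\approx \pm L^{MN}\), which is the structure quoted in (2.23c).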
If we compute the bracket {L MN , L RS } using the Snyder brackets we find that the same space-time algebra (2.13) is reproduced. This implies that the Snyder brackets (2.23) preserve the global SO(d, 2) Lorentz invariance of action (2.4). Since SO(d, 2) contains scale as well as conformal transformations we see that, although we may introduce a scale at the Planck length using the Snyder brackets (2.23), global scale and conformal invariances still exist. This is because to arrive at the Snyder brackets (2.23) we have used the local scale invariance (2.14) of the 2T Hamiltonian, which is a broken scale invariance from the Lagrangian point of view. If we compute the brackets {L MN , φ i } using (2.23) to verify the gauge invariance of L MN in a phase space with Snyder brackets, we find that the {L MN , φ i } identically vanish, proving that L MN is gauge invariant in this phase space. Computing the algebra of constraints (2.6)-(2.8) using (2.23) we arrive at the expressions which show that the first-class property of constraints (2.6)-(2.8) is preserved by brackets (2.23). Equations (2.24) are the realization of the Sp(2, R) gauge algebra of two-time physics in a phase space with Snyder brackets. Equations (2.24) exactly reproduce the gauge algebra (2.11) if we take the linear approximation on the right side.. Notice that L MN explicitly appears with a minus sign in the right hand side of the Snyder bracket (2.23c), establishing a connection between the global SO(d, 2) Lorentz invariance of action (2.4) and the local scale invariance (2.14) of Hamiltonian (2.5). The new result obtained in this section is that the classical and free twotime physics theory can also be consistently formulated in a phase space where the Snyder brackets (2.23) are valid. In the next section we will see that this remains true in the presence of a background gauge field A M (X) when a set of subsidiary conditions very similar to (1.2) are satisfied. 2T Physics with Gauge Fields To introduce a background gauge field A M (X) we modify the free action (2.4b) according to the usual minimal coupling prescription to gauge fields, P M → P M − A M . The interacting 2T action in this case is then where the Hamiltonian is The equations of motion for the multipliers now give the constraints The Poisson brackets between the canonical variables and the gauge field are Computing the algebra of constraints (3.3)-(3.5) using the Poisson brackets (2.9) and (3.6) we obtain the equations Now that we have seen that action (3.1) has a local Sp(2, R) gauge invariance when conditions (3.8) hold, we may consider the question of which is the SO(d, 2) Lorentz generator for action (3.1). A possible answer is obtained if we use the minimal coupling prescription P M → P M − A M in the expression for L MN in the free theory, thus obtaining in the interacting theory We can even use the first-class gauge function β = φ 1 = 1 2 (P − A) 2 ≈ 0 and formally construct a set of Snyder brackets for the interacting theory in which L I MN appears on the right side of the bracket {X M , X N }, exactly in the same way as L MN appears on the right side of (2.23c) in the free theory. However, it can be verified (using Poisson brackets) that the global transformations generated by The Lorentz generator for the interacting theory is then effectively identical to L MN in the free theory. This agrees with Dirac's interpretation of the conformal SO(4, 2) symmetry of Maxwell's theory as being the Lorentz symmetry in 6 dimensions. 
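The displayed forms of the interacting constraints (3.3)-(3.5) were also lost in extraction. Given the text's statement that the free constraints are modified by the minimal-coupling substitution P → P − A, and its later identification φ1 = ½(P − A)², the natural reading (our reconstruction; φ2 and φ3 below are inferred rather than quoted) is

\[
\varphi_{1}=\tfrac{1}{2}\,(P-A)\!\cdot\!(P-A)\approx 0,\qquad
\varphi_{2}=X\!\cdot\!(P-A)\approx 0,\qquad
\varphi_{3}=\tfrac{1}{2}\,X\!\cdot\!X\approx 0,
\]

with φ3 unchanged because it contains no momenta. The subsidiary conditions (3.8) on A_M are then precisely what is required for this set to close under the Sp(2,R) algebra, which is the statement made above.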
This was also pointed out, but in a rather unclear way, in reference [7] (see section four of [7]). The above conclusion implies that L MN must be invariant under the gauge transformations generated by constraints (3.3)-(3.5). Using the Poisson brackets (2.9) and (3.6) we find the equations We see from the above equations that L MN is gauge invariant, {L MN , φ i } = 0, when conditions (3.8) are valid. Action (3.1), complemented with the subsidiary conditions (3.8), gives therefore a consistent classical Hamiltonian description of two-time physics with background gauge fields in a phase space with the Poisson brackets (2.9) and (3.6). But, as we saw in section two, there is another Hamiltonian description of two-time physics based in a phase space with the Snyder brackets (2.23). Let us then consider this Hamiltonian formulation in the case when background gauge fields are present. Since the Lorentz generator L MN in the interacting theory is identical to the one in the free theory, the form (2.23) of the Snyder brackets must also be preserved in the interacting theory because L MN explicitly appears in the right side of (2.23c). This creates a mathematical difficulty because the gauge function β = 1 2 P 2 we used to arrive at (2.23) in the free theory is no longer a firstclass function on the constraint surface defined by (3.3)-(3.5). Consequently, equations (2.9) and (2.18) can not be used. To solve this difficulty we incorporate conditions (3.8) as new constraints for the interacting theory. Combining conditions (3.8) with constraints (3.3)-(3.5), we get the irreducible [15] set of constraints Note that Dirac's equations (1.2a) and (1.2b) are now reproduced by constraints φ 4 and φ 5 above. But now there is a clear meaning for the third condition: the gauge field must remain massless. Constraints (3.12)-(3.17) obey the Sp(2, R) gauge algebra (2.11) together with the equations Equations (2.11) together with equations (3.18) show that all constraints (3.12)-(3.17) are first-class constraints. Hamiltonian (3.2) will be invariant under the local scale transformations (2.14) when the gauge field effectively transforms as Using this local scale invariance we can again construct the same brackets (2.15). On the constraint surface defined by equations (3.12)-(3.17) we can use again equation (2.9) and the property (2.18) of the Dirac bracket and choose β = φ 1 = 1 2 P 2 ≈ 0 to arrive, by the same steps described in the previous section, at the same Snyder brackets (2.23). Finally, let us consider the role of conditions (3.8). As in the free theory, the first-class constraints (3.12)-(3.14) reduce the number of physical canonical pairs (X, P ) to be d − 1. We introduced a gauge field A M (X) with d + 2 components, but as a consequence of (3.8) now there are 3 first-class constraints (3.15)-(3.17) acting on these d + 2 components. These constraints can be used to reduce the number of independent components of the gauge field to be d + 2 − 3 = d − 1, creating a perfect match of the number of independent components of the gauge field with the number of physical canonical pairs. It is this perfect match that preserves the local Sp(2, R) invariance in the presence of the gauge field. Concluding remarks In this investigation we considered the possibility of introducing a minimum length in the classical two-time physics theory by constructing its Hamiltonian formulation in a phase space with Snyder brackets. 
It makes sense to try to introduce this minimum length in two-time physics because the action is a generalization of gravity on the world-line, and gravity introduces additional uncertainties in the quantum position measurement process. We saw that it is possible to introduce a minimum length in the free theory and in the presence of background gauge fields, while at the same time preserving the usual symmetries of two-time physics, because of the existence of a broken local scale invariance which is a symmetry only of the Hamiltonian. We clarified the earlier observation that the global SO(d, 2) Lorentz generator in the presence of background gauge fields is identical to the one in the free theory, and exposed the connection of this Lorentz generator with the concept of a minimum length. We also revealed the mechanism for the preservation of the local Sp(2, R) invariance of the action, which consists in a perfect match, in the Hamiltonian formalism, between the number of physical canonical pairs describing the dynamics and the number of physical components in the gauge field.
2014-10-01T00:00:00.000Z
2006-07-06T00:00:00.000
{ "year": 2006, "sha1": "342888d131d3cf1b5fb487c6d963967bf8660adf", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "626fea8802e94b08cfc0ee609976e32f1132346a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
31600588
pes2o/s2orc
v3-fos-license
They See a Rat, We Seek a Cure for Diseases: The Current Status of Animal Experimentation in Medical Practice The objective of this review article was to examine current and prospective developments in the scientific use of laboratory animals, and to find out whether or not there are still valid scientific benefits of and justification for animal experimentation. The PubMed and Web of Science databases were searched using the following key words: animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, surgical simulation, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases. The use of laboratory animals in scientific research continues to generate intense public debate. Their use can be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and teaching of new surgical techniques. This is because there are inherent limitations in the use of alternatives such as in vitro studies, human clinical trials or computer simulation. However, there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suitable alternatives to animal experimentation like cell and tissue culture and computer simulation. For the foreseeable future, it would appear that to enable scientists to have a more precise understanding of human disease, including its diagnosis, prognosis and therapeutic intervention, there will still be enough grounds to advocate animal experimentation. However, efforts must continue to minimize or eliminate the need for animal testing in scientific research as soon as possible. imal experimentation of new drugs is of any benefit to mankind [1][2][3] . However, most objective scientists and many members of the public agree that animal research should be permitted as long as it is carried out for good reason, using humane conditions as much as possible, where there are no feasible alternatives and under strict regulation [1,[4][5][6] . This is because most scientists agree that experiments involving the use of animals have great potentials like facilitating innovation, developing platform technologies and very often providing a link with clinical trials. In addition, animal experimentation is useful in exploring disease mechanisms, in validating and testing new targets for drug research and in providing insights into drug toxicity and interactions [5][6][7][8][9][10][11][12][13] . The objectives of this review are the following: (1) to provide a scientific basis for animal experimentation; (2) to discuss controversies surrounding animal experimentation; (3) to describe briefly animal models available for studying benign and malignant disorders, and (4) to discuss briefly diseases suited to animal experimentation. Evidence Acquisition The PubMed and Web of Science databases were searched using the following key words; animal models, basic research, pharmaceutical research, toxicity testing, experimental surgery, ethics, animal welfare, benign, malignant diseases. Important relevant reviews, original articles and references from 1970 to 2012 were reviewed for data on the use of experimental animals in the study of diseases, with focus on urological diseases. 
About 161 articles were abstracted for type of experiments, type of animal models used, number of experimental animals used, analysis and conclusions. Relevant chapters of 11 textbooks were reviewed. In addition, the websites of 19 companies and societies with information germane to the topic were reviewed. An assessment was made of the benefits of the experimental animal models used and whether in vitro study and computer simulation in some cases could have provided an equally satisfactory model. Evidence Synthesis Historically, animals have been used for a wide range of scientific research that has proved beneficial to mankind, particularly in relation to the advancement of scientific knowledge, drug development for use in animals and humans, training in surgical techniques, the safety of chemical products and, very importantly, the safe development of vaccines [1,5,6] . Animal experimentation was frowned upon by laymen and scientists primarily because of the pain and suffering to which some scientists subjected experimental animals between the 19th and mid-20th centuries [1,5,[7][8][9][10] . This led to the formation of groups of people concerned about the welfare of nonhuman animals used in experimental work in many countries of the world and forced the scientific community to come up with regulations that ensured that animals subjected to experimentation did not suffer undue distress or pain [13][14][15] . Similarly, the vehemence of attacks by animal rights activists in some countries in the 1980s has led to the cessation of the use of animals for the testing of cosmetic products, alcohol and tobacco [16] . Animal rights activists as used in this article refers to a group of people who hold extremist views about research involving animals to the extent that they are ready to use violent means to stop all research involving the use of laboratory animals (vide infra). This group should be distinguished from people concerned about the welfare of non-human animals used in experimental work or people with concerns for animal welfare in general [1,14,16] . The rise in the influence of animal rights activist groups has also forced the scientific community to use animals that can be easily concealed in the laboratory (mice, rabbits, etc.) as opposed to the use of monkeys or the great apes which are difficult to conceal [13][14][15] . The other reasons for the widespread use of small laboratory animals, particularly mice, are that they are cheap, widely available and easy to take care of and have a shorter generation time -the generation time being the combination of gestation period, time to sexual maturity and overall life span. At the present time, most non-scientists (including many people with concerns for animal welfare) and scientists agree that a world in which the important benefits of scientific research can be tapped but without causing undue pain, distress, suffering or death to the animals being used for research should be the ultimate goal [1,12,[14][15][16] . The use of laboratory animals can still be justified today in the following areas of research: basic scientific research, use of animals as models for human diseases, pharmaceutical research and development, toxicity testing and surgical skills training or simulation [6,12,16] . This is because there are inherent limitations in the use of alternatives like in vitro study, human clinical trials or computer simulation. 
It should be noted that there are problems of transferability of results obtained from animal research to humans. Efforts are on-going to find suit-able alternatives to animal experimentation. Among the methods being explored are cell and tissue culture, computer simulation and postmortem research [2,3,6,17,18] . Types of Research Involving Animals There are five main reasons for the continued use of animals in research. These include basic scientific research, their use as models for human diseases, pharmaceutical research, toxicity testing and surgical skills training or simulation. Animals Used for Basic Scientific Research The aim of basic research is to increase scientific knowledge about the way animals and humans behave, develop or function biologically. This type of research may not necessarily lead to applications for humans, although a primary objective is that it may eventually lead to applications from which humans may directly benefit. It covers areas such as observational research, assessment of physiological mechanisms and developmental and genetic studies. Of these, the most important is perhaps physiological research [16] . These studies involve surgical, dietary or drug treatments that are directed at a better understanding of function at the physiological, cellular or molecular levels and have made significant contributions to current knowledge about human and animal biology and medicine. In fact, it has been said that much of current modern medicine is evidence-based basic research. For example, most of our current knowledge about the endocrine, immune and nervous systems has emerged from research involving animals. Research involving the use of immune-deficient rodents has contributed very substantially to our understanding of the complex processes of diseases that affect the immune system, neoplasia, HIV/AIDS and other diseases [6,11,12,16,18] . Animals as Models for Human Diseases Laboratory animals are also often used as models for understanding of disease processes and to develop new vaccines and medicines [16,19,20] . Very often these types of research draw on findings derived from basic scientific research. For example, animal models using the chimpanzee and monkey were employed extensively for the study of hepatitis B and poliomyelitis leading to the development of effective vaccines against these diseases [21][22][23][24] . Similarly, much of the current knowledge about hepatitis C has been derived from studies in the chimpan-zee as for a long time it was the only non-human host for the virus. Unfortunately, unlike hepatitis B, an effective vaccine against hepatitis C is yet to be discovered. Animal models may be difficult to find or develop for some diseases such as HIV/AIDS and some cancers [24][25][26] . This is due to the complex pathogenesis of these diseases and their many different subtypes in humans and animals, which makes it inherently difficult to study them and to develop successful animal models [3,[24][25][26] . Another important area of animals as models for human diseases involves the use of genetically modified animals. Effective treatment has been developed for some types of cancer, such as breast cancer (tamoxifen) and prostate cancer (goserelin), based in part on the study of transgenic mice that express human receptors on their cells, which were used as replacements for primates. 
These animals are genetically modified to study the role of genes in disease processes [16,19,20] because the pathology of various diseases (neoplastic, infectious, nutritional, inherited, etc.) is affected directly or indirectly by an individual's genome. The study of genetics, therefore, can help in the understanding of these fundamental interactions. The sequencing of the human and mouse genomes has revealed remarkable similarities. About 99% of the genes in these two genomes have direct counterparts in the two species. Therefore, the mouse is used extensively as a model for research on human diseases in various types of studies. Furthermore, because mice breed rapidly and are easy to look after in the laboratory, and because the methods of genetic modification are more effective in mice compared with other mammals, they are a favoured species in genetic modification studies. Other animals used in genetic modification studies include rats and zebrafish. At present, it can be argued that the use of genetically modified animals as models has allowed researchers to generate more accurate and appropriate models of human diseases that have facilitated progress and has made it more likely that research findings in such models will transfer to human subjects more quickly. While most animal models cannot be considered exact replicas of human diseases, most biomedical scientists working in the field are of the view that there are often enough similarities between mice and humans to make informative comparisons [16] . Examples include findings from models used for diabetes, deafness, psychiatric disorders, neurodegenerative disorders and some cancers. However, it should be noted that when scientists think that they have a good model, it is often difficult to determine how much its attributes are due to its genes or to environmental factors. This is because, in some instances, wildly differing results have been found to occur in different laboratories using the same strains of animal in the same procedures. This observation is itself becoming an important area of further research. The use of genetically modified animals has a wide range of welfare implications because the animals involved usually suffer from the disease being studied for the duration of their lives [16] . These animals are also likely to be the subject of procedures carried out to characterize the different stages of the disease, including blood, metabolic and behavioural tests. These procedures may inflict pain or cause distress to the animals. On this note, it can be argued that people concerned about the welfare of non-human animals have a valid point. Pharmaceutical Research In the past 80 years, pharmaceutical research and development has been transformed because of the availability of advanced information and diagnostic technologies, better understanding of genetics and increasing use of computational analysis [27] . Consequently, a wide range of advanced methods that do not involve animals is used in conjunction with animal research for pharmaceutical research and development. Overall, there has been a substantial reduction in the total number of animals used for pharmaceutical research [16] . However, it still remains responsible for a significant proportion of the animal experiments conducted in most countries in Europe and the USA at the present time [11,[28][29][30] . 
As part of the search for new medicines and vaccines for use in humans, a very wide range of basic and applied medical and veterinary research projects is supported or conducted by pharmaceutical companies. It has been estimated that 60-80% of animal experimentation used for pharmaceutical research and development are in the characterization of promising candidate drugs and about 5-15% are used in the discovery and selection process [11,31,32] . In the early stages of development of new medicines to assess the importance of a drug target, genetically modified mice are most commonly used. They are also used increasingly in target validation of new medicines or as animal models of diseases. For certain biological compounds such as vaccines, animal testing is mandatory for each batch that is produced, to ensure potency and safety [22,28,31,33] . It is possible that the use of animals for pharmaceutical research and development will continue to decline as the use of advanced methods or other alternatives increases. On the other hand, the use of animals may remain unchanged because advanced imaging, sensing and the use of biomarkers will allow the extraction of more information [27,34,35] . It is difficult to envisage a future in which there will be a rise in the use of animals for pharmaceutical research and development. The current ethical debate on the use of animals for experimentation as well as increasing violent activities of animal rights activist groups will most likely ensure that this does not happen [2,9,10,13,16,36,37] . While the violent activities of some animal rights activists should be condemned, scientists too need to do a better job of explaining to the public the justification for animal research today [14,16] . Toxicity Testing Toxicological studies are often carried out on animals to help test the safety of a wide range of substances that could be harmful to humans, animals and the environment [16, 38,39] . These tests are carried out on new products like medicines, household and industrial chemicals, agrochemicals and food additives. Some of these chemicals are tested for their potential to cause irritation, produce physiological reactions, induce cancers, produce a teratogenic effect on the developing fetuses in utero and produce adverse effects on fertility [40] . Specified doses and exposures of the chemicals are given to animals, from which information regarding safe human dose and exposure levels is then determined. In order to observe the effects seen when a new product is used, misused or abused in different situations, the tests usually range from one single high dose to long-term exposure to a particular chemical. Furthermore, the tests are designed to mimic the possible routes of exposure that humans might be subjected to, such as through the mouth, skin, eyes or airways. Various species of animals are used for toxicological testing or safety evaluation of medicines. These include rats and mice (most commonly), larger animals like rabbits and dogs, and less often primates like chimpanzees and monkeys, as well as fish and chickens in some instances. It has been observed that a full complement of toxicity tests for a pharmaceutical compound that reaches the market usually involves preliminary testing on 1,500-3,000 animals. Occasionally, these toxicological tests are also used to assess the metabolism and efficacy of these products as well as drug interactions. 
It has been argued that computer modelling or simulation cannot provide adequate insight into new drug toxicity the way animal models can [38] . The relevance of initial toxicology studies in animals to actual experience in man has been the subject of intense debate [2,3,13,17,39,41] . This is because the concordance between short-term toxic effects of new pharmaceuticals in animals and humans (during clinical trials) has been reported to be about 71% [36,38] . This means that 71% of acute toxicities in humans resulting from compounds that entered clinical trials were predicted by preclinical safety or toxicity studies in animals [2,3,38,39,41] . Drugs producing significant adverse effects in animals will obviously not progress to clinical trials [1,6,16] . It has also been argued that for longer-term toxicities such as carcinogenicity and teratogenicity it has been difficult to establish the benefits if any of initial toxicology testing in laboratory animals [40,41] . This is because the concordance between animals and humans, in terms of long-term toxicity, has been found to be lower than the 71% obtained for short-term toxicities [39] . However, this may be due to the fact that assessment of long-term toxicity is a highly complex process. Surgical Skills Training or Simulation For many decades, live anaesthetized animals have been used as a method of educating, developing and refining complex surgical procedures [42,43] . A lot of developments that have taken place in reconstructive surgery, particularly in urological surgery, have taken place largely from the result of initial experimentation using laboratory animals. In recent times, it has been found that the use of laboratory animals is indispensable in the learning or teaching of new surgical skills like laparoscopic or robotic procedures [44][45][46] . These new techniques have a steep learning curve. Trainers have found that the use of inanimate and animate models have shortened the learning curves. Trainees can be assessed without putting any human lives in danger by training on live laboratory animals in a regulated environment before they are exposed to operating on humans under the supervision of a mentor. Many surgical training programs make use of inanimate simulators (for example bench and pelvic trainers) prior to the use of animal models [47][48][49] . Because animals like pigs, dogs, sheep and calves have similar anatomies and physiological responses to humans, they are often used in various laparoscopic and robotic training procedures. The physiological response to various surgical manipulations in these animals resemble those seen in humans, such as organ movements due to respiration, tissue resistance and reflection, vessel pulsations, and bleeding when vessels are cut. This makes them excellent models for training, unlike inanimate models where such complexity may be difficult to reproduce [48,49] . From a practical point of view, despite some anatomical differences from the urinary tracts of humans, the porcine model is most often used for various urological procedures involving the kidneys, ureters, bladder, bowel and prostate [45] . Limitations of Animal Models/Dangers of Extrapolation from Animals to Humans There are sometimes problems in developing effective experimental approaches in biomedical research and in extrapolating from animal models to humans. This is primarily because of the vast complexity and variability of biological systems [2,17,50] . 
The difficulties are an intrinsic part of any modelling approach that relies on surrogates for the range of organisms of interest. These difficulties are not confined to experimental animal studies, but are also encountered in developing and applying other experimental approaches such as in vitro and clinical studies [51,52] . None of these two broad methods can reproduce exhaustively all the features that characterize the wide diversity and variation of genetic and biological processes that occur in humans. Limitations of Human Clinical Trials Even if the stage of animal research during the development of a new drug was omitted, intrinsic problems resulting from the way clinical trials are designed or conducted remain. This is because most initial clinical trials of a new drug in humans would typically require testing the new drug on about 3,000 human volunteers and patients. Consequently, if a side effect occurs in 1 in 10,000 patients, such side effects may not become apparent until after the product has been marketed [52,53] . Furthermore, human clinical trials often involve a relatively homogeneous sample of patients in order to distinguish clearly between the effects of the therapy against the background of variation between different patient responses [51,52] . Hence, most initial clinical trials frequently fail to provide any information about the effects of drug interactions, since they usually do not mimic the actual situation in real life where patients may be on several different medications at the same time [2,17,53] . Thus, uncertainties about the effects of treatments in the clinical setting are therefore inevitable and clinicians must not only be cautious in extrapolating the results of clinical trials to individual patients, but must remain vigilant for the occurrence of new side effects or drug interactions that failed to occur during initial clinical trials of a new drug [52,53] . A drug that clearly illustrates some of the abovementioned dangers is pioglitazone (Actos), which was introduced in 1999 for the treatment of diabetes mellitus [54] . It was not until it had been used for more than 8 years and by over one million patients that it was found to increase the incidence of bladder cancer. This increased incidence is seen mostly in men [54][55][56][57] . Limitations of in vitro Research There is little doubt that there are major differences between human cells in vitro and in vivo which can pose challenges in extrapolating findings from research on the functioning of human cells in culture to the functioning of human cells in vivo [2,17,50] . While cell culture is cheap and easy, there is no doubt that the usefulness of data obtained from in vitro cultures is very limited. Human cells evolved to be part of an intact organism and what they do when dissociated is fundamentally different from what they do when they are a part of a large community of cells. It is therefore not surprising that more acute challenges arise in using the findings from cell culture studies to make predictions relating to the integrated physiology of intact tissues, organs or the whole human body compared to findings from intact animals. From the above, it is clear that laboratory animals are useful in basic and applied forms of scientific research. In many cases, they can be useful models for studying aspects of human biology and disease and the likely effects of chemicals and medicines, particularly interactions between drugs. 
However, the usefulness of animal models has to be judged on a case by case basis for each type of research or testing. There is little doubt that initial experimentation in animals is preferable to the alternative of discovering major flaws in new drugs, vaccines or industrial or agrochemical products only after humans have been exposed to their harmful and in some cases debilitating side effects. This scenario most likely will increase the overall cost of new drug developments [54,55] . Ethical Issues in Animal Experimentation There are currently about 4 different views regarding the ethics of animal experimentation. These are the 'anything goes' view, the 'on balance justification' view, the 'moral dilemma' view and the 'abolitionist' view [16] . It will not be hyperbolic to state that the correct ethical position on this contentious issue will be an amalgam of some elements of all 4 views [16] . This is because, in a field as controversial as animal experimentation, it is often a question of perception! Animal rights activists see a rat in a cage, while scientists are seeking a cure for diseases! There is no doubt that the welfare implications for animals used in research are as varied as the benefits. Most observational research on animals conducted in their natural habitats should have minimal negative effects. Similarly, there is a broad consensus by both those for and against animal research that animals used for laboratory research should not experience unnecessary pain, suffering, deprivation of food and water, isolation or distress. Apart from ethical and legal considerations, pain and distress cause changes to the body that could interfere with the outcome of some research on animals. Consequently, minimizing pain and avoiding distress to experimental animals contributes to sound science. Animals that are used as disease models are likely to experience the symptoms typical for the disease. If part of the symptoms involves pain, it is important for the scientists involved in this type of research to look for ways of minimizing the pain. In the contentious world of animal research, the crucial question that needs to be answered by most protagonists of animal research remains how useful animal experiments are to prepare the way for trials of medical treatments in humans [6,12,17,18] . This is because generally public opinion is behind animal research only if it helps to develop better drugs or leads to increased understanding of a disease and therefore will lead to effective preventive methods or new drugs to combat such diseases. The abolitionist view is currently difficult to justify. While attempts continue to be made to find suitable replacements for the use of living creatures, it would appear that for many years to come research involving the use of live animals will continue to be required [6,12] . It has been rightly argued by Watts [6] that at present 'you could phase out the use of animals if you were prepared to put more risks on to humans'. It would also appear that our world is moving to the peril of such a view. The many cases of bladder cancer seen after the introduction of pioglitazone is a prime example. To summarize, all are agreed that there is a need to reduce rather than eliminate animal experimentation. At the same time, efforts must continue to find suitable replacements for animal experimentation as well as more refined techniques that avoid the use of intact live animals [37,[58][59][60][61][62][63] . 
Legal Issues Affecting Animal Experimentation and Views of Animal Rights Activists Briefly, due largely to pressure groups opposed to animal experimentation, there is now legislation in place by International Regulatory Agencies, National Regulatory Agencies and Institutional Regulatory Boards whose role is to satisfy certain criteria before permission is granted to a researcher intending to conduct experiments involving laboratory animals [7][8][9][10][11][12][13] . These criteria include the following: • There is no other method of answering the question the experiments purport to address • The animal's welfare is protected • There is no undue stress to the animal • There is adequate food and water • Suitable light and comfort are provided for the animals • Pain must be minimized by using anaesthetic agents, if necessary In other words, the humane treatment of experimental animals must be ensured [1,8,11,12,15] . The current legal position is possibly best summed up in recommendations by a committee set up by the Academy of Medical Sciences, UK and chaired by Sir David Weatherall in 2006. The committee recommended that '... the use of non-human primates is impossible to abrogate at present (2007), there is a case for their use provided it is the only way of solving important scientific or medical questions and high standards of welfare are maintained'. (www. acmedsci.ac.uk/imagesproject/nhpdownl.pdf). Views of Animal Rights Activists The term 'animal rights activists' as used in this article has been previously defined (vide supra). The views of animal rights groups deserve special mention in this review and regarding the current legal position on research involving animals as enunciated above. It is well known that people have different views regarding the use of animals for companionship, food, clothing and research [1,5,9,10,12,14] . A widely held view is that people may use animals for these purposes if in return they provide them with shelter, adequate food and treatment. Animal rights activists believe it is wrong for people to remove animals from their natural habitats or interfere with their lifestyle. Some animal rights activists even oppose the eating of meat, meat products or eggs, drinking milk, wearing leather or fur, or keeping animals in zoos! Some also object to having animals as pets. They are also opposed to animal research as a matter of principle regardless of any potential benefits for humans and other animals [7,9,13] . Furthermore, some animal rights activists try to end practices they oppose vehemently by trying to influence public opinion and go as far as trying to get laws passed to stop all animal research. Worse still, some, particularly in Europe and the UK (Animal Liberation Front), have resorted to extreme measures like threatening researchers and members of their families, vandalizing laboratories, properties and cars and planting bombs to intimidate researchers into discontinuing their work [1,14,16] ! It is heartening to note that even in Europe there is now legislation in place to protect scientists engaged in animal research against bitterly antagonistic attacks by groups with such extreme views [12,14] . To hold that people have an ethical responsibility towards animals in their care is to support animal welfare and most scientists engaged in animal experimentation subscribe to this notion as well [1,12,[14][15][16] . Species of Animals Used as Experimental Models This topic will be discussed very briefly. 
There are many different species of animals that are currently used in research. Most of these are vertebrates and the majority of procedures involve the use of very small laboratory animals like mice and rats for reasons stated previously. Other mammals used on a very small scale include rabbits, pigs, dogs and primates (monkeys and chimpanzees) [27][28][29][30][63][64][65] . Research involving the use of primates has almost ceased in most parts of the world and only few centres in the USA (e.g. National Institute of Health) holding licenses to use these large animals continue to use them as experimental animals [16] . Apart from being large, primates are expensive to maintain, are potentially dangerous animals and have a very long generation time. The broad spectrum of animals used as models include the following: The Use of Invertebrate Animals as Models Invertebrates like the fruit fly Drosophila , the nematode worm Caenorhabditis elegans and some species of snails (molluscs), yeasts, bacteria and viruses are also used in experimental research, albeit on a very limited scale. The genetic modification of these animals has been found to provide useful information regarding the fundamental biological role of genes [66] . However, studies in these invertebrates cannot address questions that concern the 59 effects of gene modification on physiological disease processes or the development of organs that are only found in vertebrates or mammals [16,66] . Because of their close association with the environment and diversity of habitats, invertebrates are more at risk for adverse responses to environmental pollutants [67] . They are therefore often used in studies examining the effects of exposure to environmental contaminants. The use of these invertebrates in experimental research does not generate the same controversies as those involving the use of vertebrate animals. The Use of in silico Models In silico models involve the use of computer simulation to predict biological events. The use of in silico approaches has increased the ability to predict and model the most relevant pharmacokinetic, metabolic and toxicity endpoints, thereby accelerating the drug discovery process [68] . On the other hand, in silico polymerase chain reaction (PCR), also referred to as digital PCR, virtual PCR, electronic PCR or e-PCR, refers to computational tools used to calculate theoretical PCR results using a given set of probes to amplify DNA sequences from a sequenced genome [69,70] . Many software packages are available offering differing balances of feature set, ease of use, efficiency and cost. One of the most widely used is e-PCR, which is freely accessible from the National Center for Biotechnology Information (NCBI) website. The use of these techniques in drug discovery, PCR and other applications is in its infancy. The disadvantages of using results from in silico models are about the same as those of using results from in vitro studies described previously. If results obtained from in silico models are borne out in clinical trials, then hopefully this technique may contribute directly to a reduction in the need for animal experimentation. Methods of Induction of Cancer These will be described very briefly. Studies involving different cancers are major reasons for embarking on animal research. 
Cancers in experimental animals can be induced using any of the following techniques [62][63][64][65] : • Spontaneous induction of cancer: prostate cancer is seen in dogs older than 8 years; 80% of transgenic adenocarcinoma of mouse prostate (TRAMP) mice older than 16 weeks develop prostate cancer spontaneously; cattle fed on bracken fern in the Balkans and Turkey develop bladder cancer; 24 and 51% of male and fe-male Wistar rats, respectively, develop spontaneous bladder cancers, indicating the role of hormones in the induction of some cancers; the incidence of spontaneous bladder cancer is about 28% in the male brown Norway rat; ageing macaque monkeys often spontaneously develop colon and breast cancers [65] . • Hormonal induction: Noble rat given subcutaneous steroid and cholesterol combination will develop prostate cancer or the use of oestrogens, e.g. 75% of male Syrian golden hamsters given 20 mg of diethylstilboestrol subcutaneously will develop prostate cancer • Transplantable: Dunning R-3327 or Pollard rat prostate adenocarcinoma system • Human xenografts: nude mice, SCID mice • Chemicals: nitroso-N-methyl-N-dodecyclamine (NNN) and N-methyl-N-nitrosourea (NMU) given by intravesical route will produce bladder cancer in Wistar rats; nitrocompounds (2 acetyl aminofluorene (AAF) and N-butyl-N-(4hydroxybutyl)-nitrosamine (BBN), NNN and NMU given intravenously to rats or dogs will induce bladder cancers, while nitroso compounds such as intravenous dimethylnitrosamine (DMN) will produce bladder cancers in about 63% of Wistar rats • Irradiation: about 7% of Wistar rats given 660 rad will develop bladder cancers • Genetic engineering: knockout 'PTEN' gene induces prostate cancer in old mice • Others (viral infection): herpes virus induces cancer in frogs Examples of diseases suitable for animal experimentation include benign diseases like ischaemic/reperfusion injury (e.g. torsion of testis, renal transplantation and myocardial ischaemia), benign prostatic hyperplasia, and malignant diseases like cancers of the liver, colon, prostate and bladder. Conclusions Animal research remains justifiable today for the following reasons: (1) it is less costly and takes a shorter time to conduct compared to clinical trials; (2) it involves less ethical constraint than human clinical trials; (3) it opens avenues for investigations of the genetic, aetiological, morphological and natural history aspects of some diseases like cancers, and (4) it provides a unique opportunity for the development of new concepts which are difficult to obtain through clinical trials or computer simulation. Animal models and clinical trials must work in concert to assist in the search for new and more effective 60 treatment of diseases. Initial testing of new drugs on experimental animals reduces the risk of dangerous side effects in humans, although they cannot guarantee that such drugs will be safe for everyone who might use them subsequently. However, no efforts should be spared to find suitable and equally reliable replacements for animal research or reduce the need for animal research. For the future, it is likely that a combination of animal models, cell lines and computer simulations will, hopefully, allow researchers to develop a wide-ranging tool kit for modelling diseases.
2018-01-02T03:07:13.318Z
2013-11-08T00:00:00.000
{ "year": 2013, "sha1": "dd05a721552eb4c8290128d07ff823e9425e9c42", "oa_license": "CCBYNC", "oa_url": "https://www.karger.com/Article/Pdf/355504", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "dd05a721552eb4c8290128d07ff823e9425e9c42", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
251038420
pes2o/s2orc
v3-fos-license
Acid Rain and Flue Gas: Quantum Chemical Hydrolysis of NO2 Abstract Despite decades of effort, much is still unknown about the hydrolysis of nitrogen dioxide (NO2), a reaction associated with the formation of acid rain. From the experimental point of view, quantitative analyses are hard, and without pH control the products decompose back into some of the reagents. We resort to high-level quantum chemistry to compute Gibbs energies for a network of reactions relevant to the hydrolysis of NO2. With COSMO-RS solvation corrections we calculate temperature dependent thermodynamic data in liquid water. Using the computed reaction energies, we determine equilibrium concentrations for a gas-liquid system at controlled pH. For different temperatures and initial concentrations of the different species, we observe that nitrogen dioxide should be fully converted to nitric and nitrous acid. The thermodynamic data in this work can have a major impact for several industries, for the understanding of atmospheric chemistry, and for the reduction of anthropogenic pollution. Introduction It has been acknowledged for several decades that acid rain and other environmental issues have an anthropogenic origin. [1][2][3] It is estimated that two thirds of sulfur oxides (SO x ) and one fourth of nitrogen oxides (N x O y ) are produced in the generation of electricity from fossil fuels, and several other industries have been pointed out as further contributors to the production of those pollutants. When released to the atmosphere, these gases react with water, resulting in several acids in gaseous and particulate forms. Upon accumulation in cloud water, these acids precipitate in the form of acid rain. The effects of acid rain are manifold, [4,5] for instance the degradation of human patrimony and the deterioration of soil and freshwater ecosystems by modification of their chemical composition. The latter may take such proportions that aqueous ecosystems may become unsuitable for sustaining varied lifeforms. The Clean Air Act [6] was a major contribution to controlling stationary and mobile sources of air emissions, particularly the ones related to SO x . The regulation of N x O y began later, [7] so that these remain a severe problem that must be tackled. Nitrogen dioxide, NO 2 , is one of the most problematic members of the family of N x O y . By means of hydrolysis, it is accepted to be the major contributor to the formation of nitrous and nitric acids in the atmosphere. [3,8] The latter, nitric acid, has furthermore been associated with the springtime ozone hole. [9] In the presence of amines, this reddish-brown gas yields highly carcinogenic nitrosamines, [10,11] which are also tightly regulated. However, NO 2 also plays an important role for human society, since it is an important intermediate in the industrial synthesis of nitric acid, [12] of paramount relevance for the production of fertilizers. The reaction of NO 2 with water has been studied for over a century, yet, despite decades of effort, the reaction's mechanism is not entirely clear. [13] From the experimental point of view this is hindered by 1) the formation of stable fogs and condensates; 2) the reactions' rate, which strongly depends on conditions; 3) the existence of alternative pathways that regenerate NO 2 . [8,[14][15][16][17] The violence of this reaction furthermore makes it extremely hard to measure virtually any physical or thermodynamical property of the system, so that, e.
g., a search for the Henry constant of NO 2 in water over different databases barely leads to any result. [18,19] Perhaps the most accurate measurement to date of Henry constants comes from the work of Lee and Schwartz. [20] These authors tabulate however a single value at 22°C. Irrespective of the application, complex networks of reactions have been proposed, which are often also based in broadly estimated and inaccurate data (c.f. Figure 1 for the network herein considered). In this work, the equilibrium thermodynamics of the hydrolysis of nitrogen dioxide is studied using Coupled Cluster theory with a full treatment of Singly and Doubly excited configurations, as well as with a perturbative treatment of Triple excitations (CCSD(T)). In order to avoid the limitations of finite size atomic basis sets, we extrapolate our results to Complete Basis Set (CBS). Coupled Cluster has been shown to be one of the few single-reference ab initio methods with the ability to accurately describe the complex electronic structure of the species involved in this system. [21] Other wavefunction and DFT methods have been employed by others, though these are not reliable enough for general application on NO 2 and related molecules. These methods also lack the required accuracy for generating high-level thermodynamic data. [21] Solvation corrections are obtained by means of the COnductor like Screening MOdel for Real Solvents (COSMO-RS). COSMO-RS uses quantum chemical molecular charge densities to calculate chemical potentials and other properties of molecules in solution. This is a very high-level method, which proved several times to deliver extremely accurate chemical potentials in solution. [22,23] The CCSD(T)/COSMO-RS results here presented yield state of the art thermodynamics in gas phase and in liquid water for the NO 2 system. For the sake of consistency, the data we calculated is compared against the best experimental observations we collected. With this information, a two-phase reactive equilibrium is solved to determine the concentrations of the most relevant species according to several conditions. Our work will help to better understand the behavior of this important pollutant in gas phases and in liquid water. This, in turn, will lead to better design of industrial gas treatment facilities, further reducing NO 2 pollution. Results and Discussion Gibbs free energies in gas phase and in an aqueous solution for the two most relevant processes in the hydrolysis of NO 2 are given in Table 1 and Figure 2. The complete list of temperature dependent thermodynamic data in both phases is provided in the supplementary material. Data is always provided in the form of fits, which are valid for the temperature range of 273.15-373.15 K. This means that the quantities in those equations should not be directly interpreted as enthalpies nor entropies. Gas phase enthalpies and entropies are given separately in another section of the supplementary material. Again, we provide linear fits based on the calculated data. Nonlinear terms have a minimal impact for the temperature range we studied, particularly for enthalpies. Unless otherwise stated, we discuss the data considering atmospheric pressure. Figure 2 shows a benchmark for the change in Gibbs free energy for the dimerization of nitrogen dioxide in the gas phase. All the DFT data is consistently calculated using quadruple zeta basis sets (def2-QZVPP). This data is compared against CCSD(T)/CBS and experimental results. 
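For readers unfamiliar with the CBS limit invoked above, a commonly used two-point, inverse-cubic extrapolation of correlation energies is sketched below in Python; this is a generic illustration of the idea, not necessarily the exact protocol used in this work, and the input numbers are placeholders rather than values from the paper.

# Two-point inverse-cubic extrapolation of correlation energies to the
# complete basis set (CBS) limit, assuming E_corr(X) = E_CBS + A * X**(-3).
# Generic Helgaker-type scheme, shown for illustration only.
def cbs_correlation_energy(e_corr_x, e_corr_y, x, y):
    """x and y are the cardinal numbers of the two basis sets (x < y),
    e.g. x = 3 (triple zeta) and y = 4 (quadruple zeta)."""
    return (y**3 * e_corr_y - x**3 * e_corr_x) / (y**3 - x**3)

# Hypothetical CCSD(T) correlation energies in hartree (placeholders).
e_tz, e_qz = -0.7421, -0.7618
print(cbs_correlation_energy(e_tz, e_qz, 3, 4))  # about -0.776 hartree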
Of all DFT methods, only the Minnesota functional M06-2X gets reasonably close to the ab initio and experimental data [24]. Nevertheless, the deviations with respect to experimental data are still about 3 kJ/mol, which renders the M06-2X data too inaccurate for studying equilibrium thermodynamics. Interestingly, the widely used density functional B3LYP even fails to predict the spontaneity of the reaction. The dotted line shows the deviation between CCSD(T)/CBS and the experimental data. This is an approximately constant line with a value of about 1 kJ/mol. Durham et al. [25] used literature values of kinetic constants to estimate the Gibbs free energies for the dimerization of NO 2 in water. Their calculation results in the value of −27.5 kJ/mol at 25°C, which deviates by 11.4 kJ/mol from our calculations. Using the values reported by Huie [10] for the forward reaction, the Gibbs free energy for the same reaction lowers to −22.1 kJ/mol, which is closer to our results. Similarly, England and Corcoran [14] report kinetic data that yields a Gibbs free energy for reaction 1A of −5.6 kJ/mol at 25°C. This is 13.6 kJ/mol higher than ours. Saramaki and coworkers [26], however, report data in contradiction to England and Corcoran's. Consequently, we regard the accuracy and meaningfulness of this comparison with caution. Further, we detail the thermodynamics of the reactions in Figure 1. To aid the reader, reference is made to the specific reactions and the tables in the supplementary material. (Figure 2 caption: comparison of CCSD(T)/CBS data against several density functionals, using the def2-QZVPP basis set, and experimental data taken from [24].) The thermodynamics for the main reactions studied are summarized in Figure 3. The dimerization of NO 2 can form three main species, [21] symmetric N 2 O 4 (s-N 2 O 4 ), and two asymmetric conformers named trans (t-N 2 O 4 ) and cis (c-N 2 O 4 ). In the gas phase, given sufficiently large temperatures, NO 2 is not expected to dimerize. The formation of c-N 2 O 4 stops being spontaneous at 140 K, t-N 2 O 4 at 225 K, and the symmetric isomer at temperatures above 332 K. Our values for the equilibria are in good agreement with the literature, [21,27] and they differ by approximately 1 kJ/mol with respect to experimental values. [24] When in water, all conformers of N 2 O 4 are stabilized. Enthalpies for the formation of these species are lowered and, in the case of the asymmetric dimers, there is furthermore a less penalizing entropic term. s-N 2 O 4 is still expected to be the dominant species for the whole temperature range of liquid water (cf. supplementary material). The relative stability of s-N 2 O 4 with respect to t-N 2 O 4 decreases however with temperature, such that we expect a difference of about 1.5 kJ/mol between t-N 2 O 4 and s-N 2 O 4 at water's boiling point. The curves for the formation and hydrolysis of c-N 2 O 4 are parallel to the respective data for t-N 2 O 4 , with a shift to larger values. Based on the calculated data we may build "thought experiment" solutions of NO 2 dimers in water. These are expected to be mainly composed of s-N 2 O 4 and, to a lesser extent, of the respective asymmetric forms. Expectedly, the weights of the latter species increase with temperature, though the relative stability of the trans and cis forms should remain approximately constant. At water's boiling point, the expected composition is approximately 5 % c-N 2 O 4 , 35 % t-N 2 O 4 and 60 % s-N 2 O 4 .
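The conformer composition quoted above follows from a simple Boltzmann weighting of relative Gibbs energies. The sketch below uses the roughly 1.5 kJ/mol gap between t-N2O4 and s-N2O4 stated in the text and an assumed 7.5 kJ/mol gap for c-N2O4 (our placeholder, chosen only so that the stated 5/35/60 % split is reproduced):

import math

R = 8.314      # gas constant, J/(mol K)
T = 373.15     # water's boiling point, K

# Gibbs energies of the N2O4 conformers relative to s-N2O4, in kJ/mol.
# The t-N2O4 value follows the ~1.5 kJ/mol gap quoted in the text;
# the c-N2O4 value is an illustrative assumption.
dG = {"s-N2O4": 0.0, "t-N2O4": 1.5, "c-N2O4": 7.5}

weights = {k: math.exp(-1000.0 * v / (R * T)) for k, v in dG.items()}
total = sum(weights.values())
for conformer, w in weights.items():
    print(conformer, round(100.0 * w / total), "%")
# prints roughly: s-N2O4 59 %, t-N2O4 36 %, c-N2O4 5 %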
The gas phase hydrolysis of NO 2 (the equivalent to green large, dashed arrows in the gas phase; 1G in Table 1) is nonspontaneous for temperatures above 286 K. If we consider dimers of NO 2 , one observes that the gas phase hydrolysis of asymmetric conformers is favorable for the whole temperature range studied. For s-N 2 O 4 there is however a thermodynamic impediment, since the hydrolysis reaction is non-spontaneous in gas. Different authors proposed reaction pathways that connect t-N 2 O 4 and c-N 2 O 4 to the hydrolysis products in the gas phase. [26,27] The main finding was that reaction barriers were lower for the former than for the latter. Furthermore, transition states involving s-N 2 O 4 were all energetically inaccessible. Taking these observations in consideration, then the gas phase hydrolysis of NO 2 is hindered by 1) the kinetics of hydrolysis of t-N 2 O 4 , 2) the unfavorable thermodynamics for forming the dimeric species and 3) high activation barriers for the formation of t-N 2 O 4 . [21] In the presence of water there is a shift of the calculated Gibbs free energies for lower values. The latter are negative for the whole temperature range studied, irrespective of the starting species (NO 2 or any of its dimers). Though spontaneous, the hydrolysis of s-N 2 O 4 shows the incorrect temperature dependence, [14] for which we conclude that the reaction must be kinetically unfavorable. The hydrolysis process is however spontaneous for any of the asymmetric conformers of N 2 O 4 and these show the correct slope. Even though we did not optimize any transition state for this work, we may argue that from the statistical mechanical point of view, the solvation correction to the chemical potentials should be identical for t-N 2 O 4 , c-N 2 O 4 or any of the respective transition states. The lowering of the activation barriers in solution for the formation of t-N 2 O 4 and c-N 2 O 4 should therefore be similar to the respective lowering of the reaction's Gibbs free energy. With our data it seems then plausible that conclusions for the gas phase reaction transpose also to aqueous solutions. The difference in behavior between the different media should be an effect of how the solvent stabilizes the different species. The decomposition of nitrous acid (long dashed-black arrows; 2G and 2A in Table 1) in water is favorable for temperatures higher than 403 K. HNO 2 is therefore not expected to decompose in liquid water. The situation differs however in the gas phase, since the reaction is spontaneous for temperatures above 303 K. Irrespective of the phase, the main driving force for the reaction is the change in entropy. Whenever HNO 2 and nitrite are present in water (Gibbs free energies of solvation for nitrous acid and its pK a are given in the supplementary material), then the gas phase decomposition of nitrous acid may be neglected altogether. This is particularly true for less acidic conditions, where nitrous acid is undissociated. In strongly acidic media the situation changes as there are large amounts of undissociated HNO 2 in water and an equilibrium between two phases should be established. Though the decomposition of HNO 2 is non-spontaneous in water, the reverse reaction is feasible, which allows the conversion of nitric acid into nitrous acid. This reaction is therefore important to consider from the equilibrium point of view. Based on the description above, one should not expect that equilibrium concentrations of nitrate and nitrite (or the respective acids) match. 
In less acidic media, it is to anticipate that some nitrate is converted to nitrite, and in strongly acidic media the opposite should take place. Association reactions between NO 2 and NO to form N 2 O 3 or ONONO (red and blue full arrows; 6G, 6A, 6G' and 6A') are not entropically favorable in any phase considered. The driving force is the change in enthalpy. These reactions are spontaneous in gas for temperatures below 337 K and 314 K, respectively. In water, the reactions remain spontaneous beyond water's boiling point (568 K) and 366 K (respectively). The reaction of any of N 2 O 3 conformers with water to yield nitrous acid (8G, 8A, 8G' and 8A') is only possible in the gas phase at extremely low temperatures. For N 2 O 3 itself, the threshold temperature is so low (5 K) that the reaction is for practical purposes never favorable. For ONONO, the reaction is feasible for temperatures up to 165 K. These reactions are thus not relevant for atmospheric processes. In the presence of liquid water, the conversion of N 2 O 3 into nitrous acid is hindered by both enthalpy and entropy considerations. The decomposition of ONONO is however always spontaneous due to a strong enthalpic gain and an almost zero entropic penalty. Finally, we observe the reactions of nitric oxide with nitric acid to form nitrogen dioxide and water (reactions 9G and 9A in table 1 of the supporting material). These reactions show in both phases an entropic driving force. For temperatures above 279 K, the gas phase reaction becomes spontaneous. In water, the reaction is never spontaneous. Therefore, except at sufficiently low pH values, in which undissociated nitric acid may exist, this reaction may be neglected. This is particularly true in pH-controlled solutions. Although most applications involve large cocktails of chemicals, we used the calculated thermodynamic data to understand the hydrolysis of nitrogen dioxide from the equilibrium point of view. A detailed description of the set of reactions selected for determining the equilibrium is described in the supplementary material, along with the respective reasoning for our choices. Other details relevant to the thermodynamic model are also provided. Figure 4 shows the evolution of equilibrium concentrations for the most relevant species according to several conditions. We study 1) temperature effects at pH 7 and fixed initial concentration of nitrogen dioxide (n 0 NO2 ); 2) the effect of different n 0 NO2 at 50 Celsius and pH 7; 3) pH effects at 60 Celsius and fixed initial concentration of NO 2 . A common feature of all the studies is the complete conversion of NO 2 into nitrite and nitrate. Though there is a residual amount of nitrogen dioxide in the gas phase, the remaining concentration in water is even lower. Temperature effects are as expected from the thermodynamics of the main reaction: the consumption of NO 2 at 10 Celsius is about one order of magnitude larger than at 80 Celsius. In practice, this is however irrelevant because the reaction is complete. The calculated values can be well understood using the definition of the respective equilibrium constants and a direct application of Le Chatelier's principle. HNO 3 is a very strong acid. Consequently, this species is expected to shift the chemical equilibrium towards the products. The quadratic dependence of the equilibrium constant on the activity of NO 2 , guarantees the almost complete consumption of this species. 
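To make the last point concrete, the sketch below solves the equilibrium of the overall hydrolysis 2 NO2 + H2O -> HNO2 + HNO3 for an assumed reaction free energy, replacing activities by mole numbers and treating water as being in large excess; the value of dG and the initial amount are illustrative, not taken from the tables.

import math
from scipy.optimize import brentq

R = 8.314462618e-3   # kJ/(mol K)
T = 323.15           # 50 Celsius
dG = -40.0           # assumed overall Gibbs free energy of hydrolysis, kJ/mol
K = math.exp(-dG / (R * T))

n0 = 1.0e-4          # initial moles of NO2; water in large excess (activity ~ 1)

def residual(x):
    # x = extent of reaction; 2x moles of NO2 consumed, x moles of each acid formed
    a_no2 = n0 - 2.0 * x
    return (x * x) / (a_no2 * a_no2) - K

x_eq = brentq(residual, 0.0, 0.5 * n0 * (1.0 - 1e-12))
print(f"K = {K:.2e}, residual NO2 = {n0 - 2.0 * x_eq:.2e} mol")

Because the reaction quotient grows quadratically as the NO2 activity shrinks, even a moderate K pushes the residual NO2 down by several orders of magnitude, which is the behaviour described above.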
The pH effect on the equilibrium concentrations of NO 2 is also as expected: since the reaction generates two acids, the yield of removal of the gas increases with the pH of the system. This means that the higher the pH, the less NO 2 will be available in any of the phases. Roughly estimated, one unit in pH corresponds approximately to a change of one order of magnitude in the equilibrium concentration of gaseous and aqueous NO 2 . In agreement with our previous observations, the equilibrium concentrations of nitrite and nitrate differ for several of the conditions studied. This is due to the association reaction between NO and HNO 3 to form nitrous acid (the reverse of the reaction with long dashed black arrows, or 2A in Table 1). Increasing the temperature brings the concentrations of nitrite and nitrate closer together, though the amount of nitrite is always one to two orders of magnitude larger than that of nitrate. Though temperature effects are rather weak, the pH may significantly affect the respective equilibria. At about pH = 6.5 the concentrations of the anions begin to converge, and at pH = 5.75 they already match. During the transition, the concentration of HNO 2 shows a plateau, whereas that of HNO 3 changes slope. Furthermore, the equilibrium concentration of gaseous nitric oxide is also affected. This is not immediately visible due to its already high initial concentration and the logarithmic nature of the representation. (Figure 4 caption, middle panel: the temperature is fixed at 50 Celsius, pH = 7 and the initial concentration of NO 2 (n 0 NO2 ) is varied; the initial amounts of NO and water are not modified. Right panel: the equilibrium number of moles for several species at different pH values in systems at 60 Celsius and with initially 10^-4 mol of NO 2 , 5 x 10^-4 mol of NO and 0.5 mol of water.) The aqueous concentration of this gas is however unaffected. Other than this, NO is a rather inert species in the system and its solubility in water is well represented by Henry's law. Increasing the initial concentration of NO 2 decreases the gap between the equilibrium concentrations of the anions. This is because the association of NO and HNO 3 to form HNO 2 has a cubic dependence on the latter and a quadratic dependence on nitric acid, which leads to a less favorable ratio between the concentrations of products and reagents. Atmospheric processes, or even the simulation of processes involving flue gas, involve the presence of other species in the system, for instance SO x or carbon dioxide. Though explicitly accounting for the effects of any of these components in our model system is beyond the scope of this work, we can indirectly infer the influence that carbonic acid and its conjugate bases should have on the hydrolysis of NO 2 . The increased presence of carbon dioxide typically acidifies water. Rainwater with carbonic acid might have pH values around 5.5, [1] which is close to the lower limit of our study. Though the pH effect on the equilibrium concentration of NO 2 is quite pronounced (based on the slope), the practical effect is rather weak. The amount of NO 2 left unconverted at pH = 5.5 is approximately 30 times larger than at neutral pH, but the net effect of carbon dioxide is only to change the yield from 99.9999 % to 99.997 %. Even in the limit of rainwater at pH = 5, the effect would be of no practical relevance (yield of 99.99 %).
On the other hand, in highly acidic media, nitrous acid decomposes to form nitric acid, which leads to the previously discussed overaccumulation of nitrate (or nitric acid). Because carbonic acid and its conjugate bases are efficient buffers, the presence of dissolved CO 2 may keep the system from extreme acidification, thus hindering the over acidification of rainwater. Conclusions In the present contribution, we studied the hydrolysis of nitrogen dioxide from the perspective of equilibrium thermodynamics, using high-level quantum chemical methods and accurate solvation models. With complete basis set CCSD(T) data we calculated gas phase Gibbs free energies for a vast set of reactions of relevance for the formation of acid rain. With COSMO-RS we determined corrections that allowed us to estimate temperature dependent thermodynamic data in aqueous phases. We verified that most processes are viable only in aqueous phase and except for a few decomposition reactions, the gas phase is reasonably inert. This is in excellent agreement with experimental observations. With this thermodynamic information, we calculated equilibrium concentrations for the main species at several conditions. These model scenarios are consistent with the experimental data available in the literature. The effects of dissolved carbon dioxide are analyzed based on pH effects on the hydrolysis reaction. The latter may be potentially beneficial by acting as a buffer, thus keeping the system's pH fixed. This case-study shows furthermore how high-level quantum chemistry, in conjunction with accurate solvation models, may help complementing gaps in experimental data. CCSD(T) extrapolated to complete basis set can consistently capture in a qualitative and quantitative manner the intricate electronic structure of nitrogen dioxide and related species. With detailed understanding and high-quality thermodynamics for one of the processes with major environmental impact, the data herein supplied may be useful for reducing pollution associated to, e. g., the automotive and chemical industries. Computational Details Geometry optimizations were performed using the B3LYP [29] functional with the def-TZVP basis set as available from TmoleX 4.4.0 and TURBOMOLE 7.3. [30][31][32][33] We found this the most suitable basis set in terms of the quality of the resulting equilibrium geometries and calculated vibrational frequencies. [34] Energetics were improved using extrapolated complete basis set energies at the CCSD(T) level. We used the method of Halkier et al. [35] for Hartree-Fock energy extrapolation and the method of Helgaker et al. [36] for correlation energies. CCSD(T) calculations using the augmented variants of Dunning basis sets [37,38] were performed with ORCA 5.0.2. [39][40][41][42] For the purpose of basis set extrapolation, we used the basis sets aug-ccpVDZ, aug-ccpVTZ and aug-ccpVQZ. All CCSD(T) calculations were performed on top of the optimized B3LYP/def-TZVP geometries. T 1 diagnostics were run and match the observations of others, [27,28,43,44] i. e., all species are borderline/show a slight multireference behavior. Increasing the basis set's size leads to more favorable T 1 diagnostics. In the evaluation of the several density functionals we performed single point energy calculations using the def2-QZVPP basis set on top of geometries optimized using the same method but the def-TZVP basis set. For geometry optimizations we used energy convergence criteria of 10 À 7 E h and gradient norms of at most 10 À 4 E h /a 0 . 
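The basis-set extrapolation mentioned above can be sketched as follows, assuming the commonly used forms of the two schemes (a three-point exponential for the Hartree-Fock energy and a two-point X^-3 formula for the correlation energy); the energies in the usage example are hypothetical placeholders, and the exact expressions employed in this work are those of Refs. [35] and [36].

import math

def extrapolate_hf(e2, e3, e4):
    """Three-point exponential CBS extrapolation, E(X) = E_cbs + B*exp(-a*X),
    for cardinal numbers X = 2, 3, 4 (aug-cc-pVDZ/TZ/QZ)."""
    a = math.log((e2 - e3) / (e3 - e4))
    b = (e3 - e4) / (math.exp(-3.0 * a) - math.exp(-4.0 * a))
    return e4 - b * math.exp(-4.0 * a)

def extrapolate_corr(e3, e4, X=3, Y=4):
    """Two-point X^-3 CBS extrapolation of the correlation energy."""
    return (Y**3 * e4 - X**3 * e3) / (Y**3 - X**3)

# Hypothetical energies (hartree), for illustration only.
e_cbs = extrapolate_hf(-279.6531, -279.7302, -279.7498) \
        + extrapolate_corr(-1.0581, -1.1013)
print(e_cbs)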
All reaction energies were calculated based on single-molecule calculations. Vibrational and geometrical information always originated from the def-TZVP calculations. The optimized structures were consistent with tabulated data. [34] Besides requiring harmonic frequencies to estimate thermodynamic quantities, we used vibrational frequency analysis to confirm that the optimized structures corresponded to minima on the potential energy surfaces. All the harmonic frequencies used in this work are unscaled. We observed larger deviations between calculated and experimental frequencies for high-frequency vibrational modes; these do not significantly affect the calculated thermodynamic properties at the temperatures of interest. The statistical mechanical calculation of enthalpies, entropies and Gibbs energies was performed with the recently developed ULYSSES package. [45] Thermodynamic quantities were evaluated using an interpolation of the free rotor with the harmonic oscillator, as originally defined by Grimme. [46] A consistent partition function was furthermore used in the calculation of enthalpies. The interpolation is controlled by a single parameter, w 0 , which determines at which frequency the harmonic-oscillator and free-rotor models mix. By default, we take w 0 = 75 cm-1. Finally, all ideal gas properties are calculated using the standard temperature and pressure reference state. Solvation contributions were obtained with COSMO-RS, [47] using its standard parametrization. For t-N 2 O 4 , c-N 2 O 4 and ONONO there was previously no COSMO-file available in the database; we ran the respective calculations using the B3LYP/def-TZVP optimized geometries. Plots were generated using Python's matplotlib [48] and the fits of thermodynamic functions were done using SciPy's curve_fit. [49]
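The free-rotor/harmonic-oscillator interpolation mentioned above can be sketched as follows; the switching function shown (with a fourth-power damping) is the form commonly associated with Grimme's scheme and is given here only as an assumed illustration, with the per-mode harmonic-oscillator and free-rotor entropies supplied externally.

def damping(freq_cm, w0=75.0, alpha=4):
    """Switching weight between harmonic-oscillator and free-rotor limits;
    w0 is the mixing frequency in cm^-1 and alpha the damping exponent."""
    return 1.0 / (1.0 + (w0 / freq_cm) ** alpha)

def interpolated_entropy(freq_cm, s_harmonic, s_free_rotor, w0=75.0):
    """Quasi-RRHO entropy of a single normal mode."""
    w = damping(freq_cm, w0)
    return w * s_harmonic + (1.0 - w) * s_free_rotor

# A 30 cm^-1 mode is treated mostly as a free rotor, a 300 cm^-1 mode
# essentially as a harmonic oscillator.
for nu in (30.0, 300.0):
    print(nu, round(damping(nu), 3))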
2022-07-26T06:17:05.597Z
2022-07-25T00:00:00.000
{ "year": 2022, "sha1": "4e64918714a7ada9511837deb260fa9cb1f62d1e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cphc.202200395", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "d522b553d3f8c54739674302f18dbc76e398751d", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
190000487
pes2o/s2orc
v3-fos-license
A Koll\'{a}r-type vanishing theorem Let $f:X\rightarrow Y$ be a smooth fibration between two complex manifolds $X$ and $Y$, and let $L$ be a pseudo-effective line bundle on $X$. We obtain a sufficient condition for $R^{q}f_{\ast}(K_{X/Y}\otimes L)$ to be reflexive and hence derive a Koll\'{a}r-type vanishing theorem. Introduction Let f : X → Y be a fibration between two complex manifolds X and Y . Kollár proved a vanishing theorem of the associated higher direct images R q f * (K X/Y ) in his remarkable paper [11]. Theorem 1.1. (Kollár, [11]) Let f : X → Y be a surjective map between a projective manifold X and a projective variety Y . If A is an ample divisor on Y , then for any i > 0 and q 0, It is meaningful to consider the similar properties of the adjoint bundle K X ⊗ L for a line bundle L on X. When L is endowed with a smooth (semi-)positive Hermitian metric, it was shown in [14,19] that the Kollár-type vanishing theorem also holds. So we are interested in the singular case in this paper. In fact, such a vanishing theorem was first developed by Kawamata as follows. Theorem 1.2. (Kawamata, c.f. Theorem 2.86 in [7]) Let f : X → Y be a surjective map between a projective manifold X and a projective variety Y . Assume that L is a line bundle on X which is numerically equivalent to a Q-divisor D with simple normal crossings and satisfying ⌊D⌋ = 0. If A is an ample divisor on Y , then for any i > 0 and q 0, Similar results also appeared in [12,16]. In the proof of these Kollártype vanishing theorems, there are two key ingredients: the injectivity theorem and the torsion-freeness of the higher direct images. The injectivity theorem is due to Kollár [11]. It was generalized by Esnault and Viehweg [6], and was further strengthened by Ambro [1]. Furthermore, there is an analytic version of the injectivity theorem in [9], which is the start point of this paper. [9,13]) Let F and L be two line bundles on a compact Kähler manifold X with (singular) Hermitian metrics h F and h L , respectively, of semi-positive curvature. Assume that there exists an R-effective divisor ∆ on X such that h F = h a L h ∆ for a positive real number a and the singular metric h ∆ defined by ∆. Then for a section s of L satisfying sup X |s| h L < ∞, the multiplication map induced by s is injective for an integer q with 0 q dim X. Here and in the rest of this paper, we use I (h) to denote the multiplier ideal sheaf [4] associated with a (singular) metric h. One may ask whether R q f * (K X/Y ⊗ L) is torsion-free when L is pseudo-effective. In general, we cannot get the desired result because the singular part of L after being pushed forward to Y cannot be controlled. However, when the singularity of L is sufficiently mild, we can establish the following theorem. Theorem 1.4. Let (L, h) be a pseudo-effective line bundle on a complex manifold X so that there exists a section s of some positive multiple mL satisfying sup X |s| h m < ∞. Assume that Y is a complex manifold, and f : X → Y is a smooth, Kähler fibration with connected compact fibres such that for any y ∈ Y , h y := h| Xy is well-defined and H q (X y , K Xy ⊗ L y ) = H q (X y , (K Xy ⊗ L y ) ⊗ I (h y )). Then R q f * (K X/Y ⊗ L) is reflexive. Combining the two theorems above, we then can prove the following Kollár-type vanishing theorem. Theorem 1.5. Let f : X → Y be a surjective fibration between two projective manifolds X and Y . Let (L, h) be a Q-effective line bundle on X so that I (h) = O X . 
If A is an ample divisor on Y , then for any i > 0 and q 0, Obviously, Theorem 1.2 can be considered as a special case of Theorem 1.5. However, it is difficult to derive Theorem 1.5 directly from Theorem 1.2. Indeed, for a Q-effective line bundle L with I (L) = O X , there exists a log-resolution g :X → X such that the pullback of L by g is a Q-divisor D onX with simple normal crossings and satisfying ⌊D⌋ = 0. Therefore if one would like to apply Theorem 1.2 to get Theorem 1.5, it is inevitable to prove the degeneration of the Leray spectral sequence which is usually non-trivial. One can see some of the difficulties in [19] where the smooth case is proved. Hence we use our method stated above to prove Theorem 1.5. We also remark that Theorem 1.5 can be used to prove the positivity of R q f * (K X ⊗ L). Indeed, the canonical vanishing theorem says that for a nef vector bundle E and an ample line bundle A on Y , Hence Theorem 1.5 implies that the higher direct images have the nef property in the sense of cohomology. In fact, we have the following result. Theorem 1.6. Under the same assumptions as in Theorem 1.5, if A is an ample and globally generated line bundle and A ′ is a nef line bundle on Y , then the sheaf R q f * (K X ⊗ L) ⊗ A m ⊗ A ′ is globally generated for any q 0 and m dim Y + 1. The paper is arranged as follows: In section 2 we first prove an embedding theorem, and then prove Theorem 1.4. In section 3 we prove Theorems 1.5 and 1.6. Acknowledgment. The author sincerely thanks his supervisor Professor Jixiang Fu for discussions. Thanks also go to Shin-ichi Matsumura, who kindly provided some comment about the reference of this paper. Finally, he is very grateful to the referee for many useful suggestions on how to improve the paper. Embedding Theorem Let f : X → Y be a smooth, Kähler fibration between two complex manifolds X and Y , and let L be a pseudo-effective line bundle on X. In order to prove that R q f * (K X/Y ⊗ L) is reflexive, we will embed it into another reflexive sheaf so that the embedding is split. We first consider how to embed the cohomology group H q (X y , K Xy ⊗ L y ) for any y ∈ Y by the definition of the higher direct images. If L is semi-positive, it was done in [14,19]. In this case, since f is Kähler, it gives a (1, 1)-form ω f on X such that ω y := ω f | Xy is a Kähler form on each fibre X y . Hence, for a cohomology class [u] ∈ H q (X y , K Xy ⊗ L y ), one can take the harmonic representativeũ of the class [u], and get an element * ũ ∈ H 0 (X y , Ω n−q Xy ⊗ L y ). Here * is the Hodge star operator defined by ω y . In other words, we have a map One can easily check that L q y • S q y = id. So the map S q y is split. Now assume that L is merely semi-positive in the sense of current. So the classic harmonic theory cannot be used. In order to handle this case, we use Demailly's analytic approximation in [5]. Briefly, we approximate the original singular metric by a family of singular metrics which are smooth on a Zariski open set W . We use these metrics to define a family of maps for y ∈ W like the S q y above, and then take the limit to get the desired map. The existence of the limit is guaranteed by the assumption on the singularity of the line bundle L. More precisely, we have the following result. Proposition 2.1. Let (L, h) be a pseudo-effective line bundle on a compact Kähler manifold X. Assume that there exists a section s of some positive multiple mL satisfying sup X |s| h m < ∞. 
Then for any integer q with 0 q n, there exists an injective morphism Proof. Let ω be a Kähler form on X. By Theorem 2.2.1 in [5] we can approximate h by a family of singular metrics {h ε } ε>0 with the following properties: Thanks to the proof of the openness conjecture by Berndtsson [2], one can arrange h ε with logarithmic poles along Z ε according to the remark in [5]. Moreover, since the norm |s| h m is bounded on X, the set {x ∈ X|ν(h ε , x) > 0} for every ε > 0 is contained in the subvariety Z := {x|s(x) = 0} by property (b). Here ν(h ε , x) refers to the Lelong number of h ε at x. Hence, instead of (a), we can assume that (a') h ε is smooth on X − Z and has logarithmic poles along Z, where Z is a subvariety of X independent of ε. Now let W = X − Z. We can use the method in [3] to construct a complete Kähler metric on W as follows. Take a quasi-psh function ψ( −e) on X such that it is smooth on W and has logarithmic pole along Z. Then Ψ := log −1 (−ψ) is bounded on X. Defineω = ω + 1 l i∂∂Ψ for some l ≫ 0. It is easy to verify thatω is a complete Kähler metric on W andω 1 l ω. Let L n,q (2) (W, L) h ε,ω be the L 2 -space of the L-valued (n, q)-forms u on W with respect to the inner product · h ε,ω defined by Then we have the orthogonal decomposition (1) L n,q (2) (W, L) hε,ω = Im∂ H n,q hε,ω (L) Im∂ * ω hε where H n,q hε,ω (L) = {u|∂u = 0,∂ * ω hε u = 0}. We give some explanation for decomposition (1). Usually Im∂ is not closed in the L 2 -space of a noncompact manifold even if the metric is complete. However, in the situation we consider here, W has the compactification X and the forms are bounded in L 2 -norm. In fact, by Claim 1 in [8], we have the isomorphism Im∂ from which we can see that Im∂ as well as Im∂ * ω hε is closed. Hence decomposition (1) holds. Isomorphism (2) was constructed in [8] by the standard diagram chasing and L 2 -extension technique. Such ideas more or less appeared in [18] and other related papers. Here we give the sketch of its proof. Take a finite Stein cover U = {U i } of X. By Cartan and Leray, we have the isomorphism where the right hand side is theČech cohomology group calculated by U. By the standard diagram chasing, we have a homomorphism On the other hand, for w ∈ L n,q (2) (W, L) h ε,ω ∩ Ker∂, we denote w 0 i 0 := w| U i 0 ∩W and solve the∂- Since∂(δw 1 ) = 0, we obtain w 2 such that∂w 2 = δw 1 on each U i 0 i 1 ∩ W . Here for convenience, we have put U i 0 ...iq = U i 0 ∩ · · · ∩ U iq . By repeating this procedure, we obtain w q so that∂w q = δw q−1 . Put v := δw q = {v i 0 ...iq }. Notice that v i 0 ...iq is an L-valued (n, 0)-form on U i 0 ...iq ∩ W with bounded L 2 -norm and δv = 0. Therefore we have a homomorphism β : It is not difficult to check thatᾱ andβ induce the desired isomorphism. Now we define the map S q . We use the de Rham-Weil isomorphism (X, L) h,ω Im∂ to represent a given cohomology class by a∂-closed L-valued (n, q)form u with u h,ω < ∞. We denote u| W simply by u W . Sinceω 1 l ω, it is easy to verify that |u W | 2 hε,ω dVω C|u| 2 hε,ω dV ω , which leads to the inequality u W hε,ω C u hε,ω . Here C is a constant used in a generic sense. Hence by property (b), we have u W hε,ω C u h,ω which implies u W ∈ L n,q (2) (W, L) hε,ω . By decomposition (1), we have the harmonic representative u ε of [u W ] in H n,q h ε,ω (L) and hence * ωu ε ∈ H 0 (W, Ω n−q W ⊗ L ⊗ I (h ε )). 
Since * ω u ε h ε,ω = u ε hε,ω C u h,ω , there exists a subsequence of u ε , which is still denoted by u ε , and a current v such that lim ε→0 * ωu ε = v ∈ L n−q,0 in the sense of the weak L 2 -topology. Moreover, for any test form w on W , we have by the definition of the weak convergence. Here∂ * is the formal adjoint operator. Hence v is actually a holomorphic form on W with v hε,ω C u h,ω . By the well-known extension theorem (such as Proposition 1.14 in [17]), we can extend v to the whole space X to get an element, which is still denoted by v, in H 0 (X, Ω n−q X ⊗ L). (The weight functions here are singular, but since they are bounded above the extension result is still true.) We define S q ([u]) = v. Hence the L 2 -norm of u ε with respect to the metrics h ε 0 andω is uniformly bounded. So there exists a subsequence of u ε and a u 0 ∈ L n,q (2) (W, L) hε 0 ,ω such that lim ε→0 u ε = u 0 in the sense of the weak L 2topology. We claim that u 0 is zero. In fact, by the definition of S q , we have lim ε→0 * ωu ε = S q (u) = 0. Hence from the identity * ω u ε hε 0 ,ω = u ε hε 0 ,ω we have lim ε→0 u ε hε 0 , ω = 0 which implies u 0 = 0. We then claim that Hence u W is orthogonal to the space H n,q hε 0 ,ω (L) Im∂ * ω hε 0 . We now use (3) to prove u ∈ Im∂ ∩ L n,q (2) (X, L) h,ω . In fact, we have the following commutative diagram: Here j is induced by the restriction from L n,· (2) (X, L) h,ω to L n,· (2) (W, L) hε 0 ,ω , and f i , i = 1, 2, is the de Rham-Weil isomorphisim to theČech cohomology group. (Here f 2 is just theβ constructed before, and one can also consult [8] for more details). The bottom equality is obtained from the property (c) that I (h ε 0 ) = I (h). It follows that u goes to zero through the map j. Therefore u goes to zero through f 1 . Now the embedding theorem we need is a direct consequence of Proposition 2.1. Corollary 2.1. Let (L, h) be a pseudo-effective line bundle on a complex manifold X. Assume that there exists a section s of some positive multiple mL satisfying sup X |s| h m < ∞. Let Y be a complex manifold, and f : X → Y a smooth, Kähler fibration with connected compact fibres. If for any y ∈ Y , h y := h| Xy is well-defined and H q (X y , K Xy ⊗ L y ) = H q (X y , (K Xy ⊗ L y ) ⊗ I (h y )), then there exists a natural injective morphism Proof. For any y ∈ Y , replacing X in Proposition 2.1 by X y , we have the morphism (4) S q y : H q (X y , K Xy ⊗ L y ) → H 0 (X y , Ω n−q Xy ⊗ L y ). It naturally induces an injective morphism from R q f * (K X/Y ⊗ L) y to f * (Ω n−q X/Y ⊗ L) y , which is still denoted by S q y . Then we get the desired morphism S q as y varies. Moreover, it can be proved that this injective morphism is actually split. Proposition 2.2. With the same assumptions as in Corollary 2.1, there is a surjective morphism Proof. We first define at each point y ∈ Y the morphism L q y : H 0 (X y , Ω n−q Xy ⊗ L y ) → H q (X y , K Xy ⊗ L y ) Remember ω f is the (1, 1)-form on X given by the Kähler fibration f and ω y = ω f | Xy . We need to prove that for any [u] ∈ H q (X y , (K Xy ⊗ L y )), L q y • S q y ([u]) = [u] with the morphism S q y in (4). When replacing X in Proposition 2.1 by X y , we have the notations such as W y andω y which correspond to W andω in Proposition 2.1 respectively. Denote S q y ([u]) by v and takeω q Hence it is clear that S q y • L q y (v) = v by tracing the definition of S q y . So L q y (v) = [u] by the injectivity of S q y . Now all of the morphisms L q y as y varies naturally induce the desired morphism Remark 2.1. 
This proposition also tells us that the definition of S q y is actually canonical, i.e. it does not depend on the choice of {h ε } andω. By the preparation above, we can prove Theorem 1.4 by using a result in [10]. Proof of Theorem 1.4. Set Q q y := KerL q y . Since S q y as well as L q y is split, we have the short exact sequence By Corollary 1.7 in [10], f * (Ω n−q X/Y ⊗ L) y is reflexive, and hence Q q y is torsion-free. Therefore R q f * (K X/Y ⊗L) y is normal and so reflexive. Vanishing Theorem As is stated in Theorem 1.4, R q f * (K X/Y ⊗ L) is reflexive if the singularity of L is tame enough. In this section, we consider the positivity of the higher direct images. We first prove Theorem 1.5 by using Theorems 1.3 and 1.4. There are some issues we should pay attention to before we start the proof. Firstly, in general if f is a surjective morphism, it is generic smooth, namely it is smooth on a Zariski open subset W ⊂ X. Secondly, if I (h) = O X , it is only assured that I (h y ) = O Xy for all y ∈ Z := {y ∈ Y | h y := h| Xy is well-defined}. Therefore the conditions in Theorem 1.4 are not fully satisfied under the assumptions of Theorem 1.5. Fortunately, this is enough for our purposes. Proof of Theorem 1.5. By asymptotic Serre vanishing theorem, we can choose a positive integer m 0 such that for all m m 0 , for i > 0, q 0. Fix an integer m such that m m 0 and O Y (mA) is very ample. We prove the theorem by induction on n = dim X, the case n = 0 being trivial. Denote A ′ = f * (A) and let H ′ ∈ |mA ′ | be the pullback of a general divisor H ∈ |mA|. It follows from Bertini's theorem that we can assume H is integral and H ′ is smooth (though possibly disconnected). Moreover, we can let H be disjoint with Y − Z ∩ W . Then we have a short exact sequence which is induced by multiplication with a section defining H ′ . We get from this short exact sequence a long exact sequence Hence the long exact sequence (6) can be split into a family of short exact sequences: for all q ≥ 0, On the other hand, applying the inductive hypothesis to each connected component of H ′ , we have that for all i 1 Furthermore, by the choice of m we also have for all i 1 Now by taking the cohomology long exact sequence from the short exact sequence (7), we find for every i > 1 This proves the theorem for the cases where i > 1. To prove the case where i = 1, we denote B l := H 1 (Y, R q f * (K X ⊗ L ⊗ O X (lA ′ ))). By identity (8) for i = 1, we have B m+1 = 0. Hence we consider the following commutative diagram: Here the horizontal maps are the canonical injective maps coming out of the Leray spectral sequence, and the vertical maps are induced by multiplication with sections defining H ′ and H respectively. By Theorem 1.3 the map ψ is injective, and hence the composition ψ • φ is also injective. So B 1 = 0 and we finish the proof of the theorem for the case where i = 1. Using Theorem 1.5, we can prove the positivity of the higher direct images. We first review the definition and a basic result of the Castelnuovo-Mumford regularity [15]. [15]) Let X be a projective manifold and L an ample and globally generated line bundle on X. If F is a coherent sheaf on X that is m-regular with respect to L, then the sheaf F ⊗ L m is globally generated. After this, we can prove Theorem 1.6 as a corollary of Theorem 1.5. Proof of Theorem 1.6. It follows from Theorem 1.5 that for every i 1 Hence the sheaf R q f * (K X ⊗ L) ⊗ A m ⊗ A ′ is 0-regular with respect to A. So it is globally generated by Theorem 3.1.
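For reference, the vanishing conclusion shared by Theorems 1.1, 1.2 and 1.5 can be written out in its standard form (stated here in the notation of Theorem 1.5; the earlier theorems correspond to $L=\mathcal{O}_{X}$):
\[
H^{i}\bigl(Y,\,R^{q}f_{*}(K_{X}\otimes L)\otimes\mathcal{O}_{Y}(A)\bigr)=0
\qquad\text{for every } i>0 \text{ and } q\geq 0,
\]
and the Castelnuovo-Mumford notion invoked in the proof of Theorem 1.6 is the usual one: a coherent sheaf $\mathcal{F}$ on $X$ is $m$-regular with respect to an ample and globally generated line bundle $L_{0}$ if
\[
H^{i}\bigl(X,\,\mathcal{F}\otimes L_{0}^{\,m-i}\bigr)=0
\qquad\text{for every } i>0.
\]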
2019-06-19T10:27:49.000Z
2019-06-18T00:00:00.000
{ "year": 2019, "sha1": "318285e4d2ce3980b942b15d73442e1186ae5434", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1c9e71f45cc3b9ffd277283a2864656701122ba7", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
56322959
pes2o/s2orc
v3-fos-license
Disentangling the role of geometry and interaction in many-body system dynamics: the emergence of anomalous dynamics from the underlying singular continuous spectrum We study the dynamical properties of an interacting many-body systems with an underlying non-trivial energy potential landscape which can induce a singular continuous single-particle energy spectrum. Our aim is to show that there is a non-trivial competition between the quasi-periodicity of the lattice - i.e. the geometry of the system- and many-body interaction, which give rise to anomalous propagation properties. Introduction -The discovery of quasicrystals in 1982 [1] and of protocols to produce large and stable samples [2] has triggered the theoretical studies aiming at understanding the origin of their unusual physical properties. Among others, their peculiar transport features, such as the resistivity increase with both decreasing temperature or increasing the sample purity [3], have attracted an increasing attention [4]. It was soon realized that this behavior is strictly linked to the singular continuous (SC) nature of the single-particle spectrum, with the accompanying critical eigenfunctions [5], whose scaling properties can account for anomalous transport and diffusion, and can partially explain the unusual behavior of these materials [4]. Before the discovery of quasicristalline materials, the mathematical concept of SC spectrum [6] was thought to be a purely mathematical lucubration with no physical counterpart [7]. The SC part is not easily accessible, and, often, its presence is inferred after removing the absolutely continuous (AC) and pure-point (PP) parts from the whole spectrum, provided a set of non-zero measure is left over. The role of SC spectra in the dynamics of non-interacting systems has been investigated in Ref. [9] and its link to anomalous propagation of correlations and in the spreading of an initially localized wave-packet has been investigated in Refs. [8,10]. A particularly interesting, exemplary physical model where the nature of the spectrum plays a crucial role is the Aubry-André model (AAM), which describes particles hopping in a one-dimensional "quasi-periodic" lattice. It has been proven that displays a metal-to-insulator transition [11,12] with the spectrum being AC and PP in the metal and insulating phases, respectively, whereas it is purely SC at the transition point. The model has been realized with ultracold atoms loaded in a bichromatic optical lattice [13][14][15]. Due to the presence of interactions, such system displays a non-trivial phase diagram [16][17][18], with the appearance of a mobility edge [19,20] and of a many-body-localized phase separating an ergodic from a localized one [14,15]. Inspired by recent experiments [14,15] reporting the observation of the dynamical slowing down of an interacting gas loaded in an incommensurate bichromatic lattice, we aim at providing an explanation for these observations based on the nature of the single-particle energy spectrum of the AAM model. Our results, valid for at small interactions, point towards the phase diagram sketched in Fig. 1, in terms of the interaction strength, U, and of the onsite potential of the AAM, λ. 
We have found that a threshold value λ c exists, slowly decreasing with increasing U, such that for λ λ c the singleparticle spectrum has a PP nature, with a frozen dynamics and a localized system; on the other hand, for λ ≤ λ c , the dynamics is ergodic with typical time scales of the order of that of a single particle with an AC spectrum. These two extreme behaviors are separated by an intermediate region where the dynamics is still ergodic but on time scales much larger than the typical single-particle time. This is where the slowing down of the dynamics, reported in Ref. [14], occurs. Our findings imply that such a behavior is due to a non-trivial competition between the underlying order induced by induced by the potential energy landscape, and the many-body interactions. The model and physical quantities -We consider a gas of spin-1/2 particles in one dimension, described by the Fermi-Hubbard model: where n is the onsite energy, U the on-site interaction between particles with different spin in the s-wave approximation,ĉ † n,σ (ĉ n,σ ) are fermion creation (annihilation) operators at site n with spin σ andn n,σ =ĉ † n,σĉn,σ the corresponding num-ber operator. We work with open boundary conditions not to enforce any artificial periodicity. The AAM is obtained by setting n = Jλ cos(2πτn) [21] with τ = √ 5 + 1 /2. If not otherwise stated, we consider the same initial state as in the experiment [14], with two particles with opposite spin on even sites; the subsequent time evolution is then generated by the Hamiltonian in Eq.( 1). In the case of ultracold gases, this corresponds to a quench where at the initial time t = 0 both nearest-neighbour tunneling and onsite interaction are brought to finite values on a time scale much shorter than the tunneling time J −1 , but large enough not to excite transitions to higher bands [22]. At the single particle level, the dynamics of the system is encoded in the lesser and greater Green's functions defined as G < s,s (t; t ) = i ĉ † s (t )ĉ s (t) 0 and G > s,s (t; t ) = −i ĉ s (t)ĉ † s (t ) 0 respectively, where the average is over the initial state, and we use the notation s = n, σ. Information on the spectral properties, instead, are encoded in the spectral func- where Ω = T r σ [G > −G < ] and the trace is over the spin degrees of freedom. To compute these quantities we resort to the nonequilibrium Green's functions technique, by solving numerically the Dyson equation for the single-particle Green's function. Our approach closely follows the one of Refs. [23,24] and is an extension of the self-consistent approach presented in Ref. [25] for bosonic systems. The self-energy entering the Dyson equation is calculated in the second-Born approximation [26]; more details on the numerical implementation will be presented elsewhere [27]. Geometry-induced anomalous diffusion -We start by looking at the role of the geometry of the potential energy landscape, which affects the spreading of the correlations due to its influence on the nature of the single-particle energy spectrum. The spreading of correlations in a non-interacting system with a continuous energy spectrum (corresponding to extended eigenstates) is ballistic with a maximum velocity determined by both the energy spectrum and the initial state but it is always finite and bounded from above by the Lieb-Robinson bound [28]. 
In the case of a discrete energy spectrum (with exponentially localized eigenstates), the spreading is suppressed and correlations develop only in a finite region whose size is proportional to the localization length, thus going to zero in the thermodynamics limit. In order to quantify the the spreading of the correlations we use the variance of the probability distribution defined as P i (t) = |G < i 0 ,i (0; t)| 2 similarly to what has been done in Ref. [25]. Due to the absence of interaction, the spin degree of freedom factorizes is irrelevant; therefore, we can consider spinless fermions when U = 0. In Fig. 2 panel a), we show the probability distribution P i (t) for the AAM with L = 200 sites in the metallic (extended) phase (λ = 0.8) and at the transition point (λ = 1). It can be clearly seen that, below the transition point, the spreading is ballistic, whereas, at the transition point, it acquires a (possibly anomalous) diffusive behavior. By focusing on the variance σ(t) of P i (t), and assuming a power law behavior, σ(t) ∝ t α for Jt 1, we looked at the behavior of the exponent α for different system sizes and different values of λ. The results are shown in Fig. 2 panel b), where we can see that for the AAM, the expansion tends to be ballistic in the thermodynamic limit as α = 1 for λ < 1. On the other hand, at λ = 1 the exponent drops to α ≈ 1/2, thus signaling the presence of a diffusive behavior. For λ > 1, we observe a shrinking of the region where the power-law behavior is in order as the system size increases. The residual, observed expansion, therefore, can be attributed to the tails of the exponentially localized eigenstates (due to the finite size). It is interesting to compare this behavior with that of a model which does not show any transition but has a pure SC energy spectrum. We have choose the on-site Fibonacci model (OFM) obtained by setting n = Jλ( n + 1 − n ) in Eq.( 1) whose spectrum has been proved to be SC [30]. The results are shown in Fig. 2 panel c). We see that increasing λ results in a stronger deviation from the ballistic behavior which on its own might seem surprising due to the non-interacting nature of the system we are considering. On the other hand, this behavior can be traced back to the critical nature of the eigenfunctions together with the SC nature of the spectrum [4,10,30], shared by other aperiodic structures [31,33]. We can drawn two main conclusions from the above observations. The AAM for λ < 1(λ > 1) behaves as any noninteracting system with a continuous (discrete) energy spectrum, inducing ballistic (suppression of) propagation of correlations. At the transition point, on the other hand, the spreading turns diffusive, a behavior usually arising in the presence of interactions and/or phase boundaries (as is the case for the AAM at λ = 1). In the OFM, the system shows anomalous diffusion despite the absence of any phase transition and / or crossover between different phases. On the other hand, the AAM and the OFM share a common feature: the nature of the single-particle spectrum at the transition point for the AAM and that of the OFM for any finite value of λ is singular continuous with critical eigenstates, which manifest as an anomalous diffusive behavior. 
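A minimal single-particle (U = 0) sketch of the spreading analysis described above is given below: it builds the AAM Hamiltonian of Eq. (1) with open boundaries and on-site energies Jλcos(2πτn), evolves a state initially localized at the chain centre, and fits the late-time variance of P_i(t) to t^α. System size, time window and the quoted expected exponents are illustrative choices, and numpy is assumed.

import numpy as np

def aubry_andre_hamiltonian(L, lam, J=1.0):
    """Single-particle AAM Hamiltonian, open boundary conditions."""
    tau = (np.sqrt(5.0) + 1.0) / 2.0
    eps = J * lam * np.cos(2.0 * np.pi * tau * np.arange(L))
    hop = -J * np.ones(L - 1)
    return np.diag(eps) + np.diag(hop, 1) + np.diag(hop, -1)

def spreading_exponent(lam, L=300, t_max=30.0, n_t=60, J=1.0):
    """Variance of P_i(t) = |<i|psi(t)>|^2 for a state started at the centre,
    fitted to sigma(t) ~ t**alpha on a log-log scale."""
    energies, modes = np.linalg.eigh(aubry_andre_hamiltonian(L, lam, J))
    c0 = modes[L // 2, :]                 # overlaps of the centre site with the eigenmodes
    sites = np.arange(L)
    times = np.linspace(1.0, t_max, n_t)
    sigma = np.empty(n_t)
    for k, t in enumerate(times):
        psi = modes @ (np.exp(-1j * energies * t) * c0)
        P = np.abs(psi) ** 2
        mean = np.sum(sites * P)
        sigma[k] = np.sqrt(np.sum((sites - mean) ** 2 * P))
    alpha, _ = np.polyfit(np.log(times), np.log(sigma), 1)
    return alpha

print(spreading_exponent(0.8))   # expected close to 1 (ballistic)
print(spreading_exponent(1.0))   # expected close to 1/2 (diffusive)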
Interplay between interaction and geometry -The presence of interaction can alter the transport properties in a substantial way; for example, when the non-interacting single particle eigenfunctions are extended states, the spreading turns from ballistic to diffusive as the interaction strength is increased [22,25]. On the other hand, we have just shown that an anomalous diffusion can arise in a non-interacting system solely due to the properties of the underlying geometry of the potential energy landscape. A natural question to ask is then how the dynamical properties due to a non-trivial underlying geometry are affected by the interactions. To answer this question, we look at the dynamics of a many-body interacting system described by the Hamiltonian in Eq.( 1) both for the AAM and the OFM. In order to investigate the properties of such a system, we resort to experimentally accessible physical quantities [14,15,22] so that our results can be benckmarked with experimental results. Let us introduce the particle imbalance, defined as: is the number of particles at the even (odd) sites at time t and N tot is the total number of particles in the system. This quantity is a good witness of the diffusion properties: for a system in a delocalized (ergodic) phase, we expect that, regardless of the initial state, ∆N(t) → 0 at long enough times (Jt 1) as all particles will be equally redistributed among different sites. On the other hand, in a localized phase ∆N(t) → C(λ, U) 0 at long times. Refs. [14,15] have reported that, in the AAM, this is indeed true away from the non-interacting critical point. Close to λ = 1, instead, ∆N → 0 with a power law behavior. The latter is a signature of a non-trivial interplay between the effect of interaction and geometry that we want to investigate here in more detail. Fig. 3 reports the imbalance ∆N(t) for the AAM and for two different interaction strengths a) U = 0.6J and b) U = 1.4J. The top (bottom) row shows ∆N(t) for values of λ < 1 (λ ≥ 1). We see that, for λ < 1, ∆N(t) → 0, whereas, for λ ≥ 1, there are two appreciably different behaviors. There exist a critical value of the interaction U c (λ) such that for U < U c (λ) ∆N(t) → C 0 whereas for U ≥ U c (λ) ∆N(t) → 0 with a power-law behavior. In order to characterize such a behavior we fitted the imbalance with a power-law of the form ∆N(t) = at −β . In Fig.4 we show two density plots reporting the exponent β and the constant a, respectively. We see that, for λ < 1, we have β ≈ 0 and a ≈ 0, thus signaling that there is no power law and that ∆N(t) vanishes, as expected in the delocalized phase. For λ > 1 and for small enough interactions U, we have β ≈ 0 and a 0 thus signaling localization at long times. A non trivial power law is found for λ 1 and U > U c (λ), witnessed by the exponent β being 0. It is also interesting to note that as U increases, a power law can be found also for values of λ 1; this is easily understood in a mean field picture: the many-body interaction is responsible for an effective increase of the local energy at each site due to the presence of a particle with opposite spins. The key point is now to understand whether or not the power-law behavior is a legacy of the transition at U = 0, or it has a deeper origin. In order to shed light onto this puzzle, we looked for comparison at the interacting OFM. In Fig. 5, we show the imbalance ∆N(t) for two values of the interaction U and for each of them we considered several values of the parameter λ. 
We see that for small values of λ and at long times ∆N(t) = 0, whereas for larger λ a power-law behavior emerges once again, similarly to what we have seen for the AAM, and sharing similar features; namely that the power-law exponent (not shown) increases with increasing U at fixed λ. Another interesting aspect emerges as we look at the spectral function A(T, k, ω) for Jt 1 shown in the panels c) and d) of Figs. 3 and 5. We see that the role of the potential λ is to open gaps in the spectrum, whereas interaction tends to both broaden the peaks in momentum and to close gaps in the energy spectrum. Therefore, in the AAM for λ > 1, the two parameters λ and U compete in broadening the spectral function in momentum, but have contrasting effects on the energy gaps. Notice that, in the case of the OFM, we would have expected peaks [34,35] located at specific values of the momentum related to the strength of λ; these are not visible due to the small number of sites considered here. From Fig. 5 panel c) and d), we can see that the appearence of the power-law beavior in ∆N(t) is accompanied by the start of a broadening in momentum, which, in that case, is not due to localization, but most likely to the merge of the broadened peaks mentioned above. We therefore conjecture that the power-law behavior arises as a non-trivial competition between the geometry of the underlying energy landscape and the two-body interaction. In the reminder of this letter, we will search for further evidences of a link of this behavior to the nature of the single particle energy spectrum. SC spectrum in interacting systems -By looking at the spreading of particles, we have observed three markedly different behaviors for the AAM: i) a complete delocalization over the time scale of the single particle hopping, which occurs for small interaction and for λ < λ c ; ii) a localized region, where spreading is suppressed even in the presence of interactions in the parameter region λ > λ c and U > U c (λ) > 0; iii) a power law decay of ∆N(t) at long times in a region λ ≥ λ c and U ≤ U c (λ) > 0. A similar behavior has been observed in the OFM, where depending on the strength of the on-site potential (and, therefore, of the singularity nature of the spectrum), there exists a region in the (U, λ) plane where ∆N(t) displays a power-law decay, and which separates two regions where the imbalance either shows long-living oscillations, or it decays to zero. We now want to link this behavior to the nature of the single particle spectrum by introducing two quantities: the time averaged imbalance correlation, C(τ) = ∆N 2 (t) −1 ∆N(t)∆N(t + τ) , (where the brackets denote a time average over t) and its Cesáro (time) average C 2 (τ) T = T −1 T 0 dτ C 2 (τ). By following the discussions in Refs. [9,25,32,37], we can use them to investigate the nature of the spectrum of the signal in ∆N(t), using the Ruelle-Amrein-Georgescu-Enss (RAGE) theorem and Wiener lemma which state that if lim τ→∞ C(τ) 0, then there can be no AC component in the spectrum; if lim T →∞ C 2 (τ) T = 0, then there can be no PP component in the spectrum [32]. If both conditions hold, then the spectrum must be purely SC in nature due to Lesbegue classification theorem of positive measures. From Fig. 6 we can see that for λ < 1 C(τ) → 0 at long times, and at the same time C 2 (τ) T → 0 for large T , thus signaling the presence of an AC component (due to the extended eigenstates) but a lack of a PP one in the spectrum of the signal ∆N(t), as expected. 
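In practice, the imbalance, its power-law fit and the two spectral diagnostics above are all evaluated from the same time series; a discrete-time sketch is given below (numpy and SciPy assumed, with the density array n[t, site] taken from the Green's-function calculation, the fitting window chosen for illustration, and the τ-integral of the Cesàro average replaced by a sum over lags).

import numpy as np
from scipy.optimize import curve_fit

def imbalance(densities):
    """Delta N(t) = (N_even - N_odd) / N_tot from densities n[t, site]."""
    n_even = densities[:, 0::2].sum(axis=1)
    n_odd = densities[:, 1::2].sum(axis=1)
    return (n_even - n_odd) / (n_even + n_odd)

def fit_power_law(times, dN, t_min=5.0):
    """Fit Delta N(t) = a * t**(-beta) for t > t_min."""
    mask = times > t_min
    model = lambda t, a, beta: a * t ** (-beta)
    (a, beta), _ = curve_fit(model, times[mask], dN[mask], p0=(1.0, 0.5))
    return a, beta

def imbalance_correlation(dN, tau):
    """C(tau) = <dN(t) dN(t+tau)>_t / <dN(t)^2>_t for an integer lag tau."""
    if tau == 0:
        return 1.0
    return np.mean(dN[:-tau] * dN[tau:]) / np.mean(dN ** 2)

def cesaro_average(dN, T):
    """Discrete analogue of <C^2(tau)>_T = (1/T) * integral_0^T C(tau)^2 dtau."""
    return np.mean([imbalance_correlation(dN, tau) ** 2 for tau in range(T)])

# RAGE/Wiener-type diagnostics on the signal dN(t):
#  - C(tau) staying finite at large tau rules out an absolutely continuous part;
#  - the Cesaro average tending to zero with growing T rules out a pure-point part;
#  - if both happen, only a singular continuous component can remain.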
On the other hand, for large enough λ, we have the opposite behavior; namely, C(τ) 0 and C 2 (τ) T 0 for large T , witnessing the presence of a PP component in the spectrum, reminiscent of the discrete spectrum for λ > 1 in the non-interacting case. For intermediate values of λ and U, we observe that there are pairs of values (U, λ) for which C(τ) → 0 as a power law and at the same time C 2 (τ) T → 0, therefore signaling the absence of both AC and PP components in the spectrum; due to the Lesbegue decomposition of positive measures, we therefore conclude that in that region the spectrum is SC in nature. These regions correspond to the ones were we observed a power-law behavior in the decay of ∆N(t) in the previous discussion. To rule out the role of the transition at λ = 1 and U = 0, we refer again to the OFM which has no transition at all in the non-interacting case. In Fig.7, we recover the same qualitative behavior we just discussed for the AAM; namely, that there are regions where both C(τ) → 0 and C 2 (τ) T → 0, signaling the presence of a SC spectrum. This is particularly interesting because, as a side result, we obtained that SC spectra are robust when many-body interactions are added, thus leaving hope of observing the unusual properties of quasicrystalline materials also in moderately interacting systems. As a side but important remark, we want to stress here that, whereas in the AAM in order to observe these features in a clear way one has to resort to large enough system sizes, due to the reduced region of existence of the SC spectrum in the non-interacting case (which is only one point in the whole parameter space), in other quasicristals, such as the Fibonaccilike ones, this region is much wider and more robust against finite size effects. Indeed, whereas for the AAM we had to consider system of L = 40 sites, for the OFM we were able to observe them already for L = 20 sites. Conclusions -To summarize, we have investigated the dy-namics of the interacting Aubry-André model, focusing on the diffusion of particles initially held in an uncorrelated state with an inhomogeneous spatial distribution. We have found that, in the AAM, the anomalous behavior, which occurs only at the transition point in the non-interacting case, is found to occur in a finite region of the plane (U, λ) in the presence of interaction. Aiming at showing that this behavior is not a legacy of the non-interacting transition point, we looked for a comparison at the interacting on-site Fibonacci model, which lacks transitions of any type in the non-interacting case. We have found that an anomalous redistribution of particles occurs for this model as well. In order to explain this behavior, we resorted to the Lesbegue decomposition of positive measures to show that the power-law behavior is a consequence of the singular continuous nature of the single particle spectrum of the system. We conjecture that, in the interacting case, the power-law behavior is linked to anomalous diffusion similarly to what happens in non-interacting systems with quasicrystalline geometries [4]. In analogy to the non-interacting case, we expect that the microscopic mechanism behind such a behavior has to be sough in the critical nature of the single particle natural orbitals of the reduced density matrix of the system at stationarity. Acnowledgements -NLG acknowledges financial support from the Turku Collegium for Science and Medicine (TCSM). Numerical simulation were performed exploting the Finnish CSC facilties under the Project no. 
2001004 ("Quenches in weakly interacting ultracold atomic gases with non-trivial geometries").
2018-09-27T14:07:06.000Z
2018-09-27T00:00:00.000
{ "year": 2018, "sha1": "bf9ffe50c77eb2146a6caaec8fa12be1e303452d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "bf9ffe50c77eb2146a6caaec8fa12be1e303452d", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
139538058
pes2o/s2orc
v3-fos-license
Effect of Flue Gas Recirculation on Reheated Steam Temperature of a 1000MW Ultra-supercritical Double Reheat Boiler In view of the problem that the reheat steam temperature is lower than the design value in the operation of a 1000MW ultra-supercritical double reheat boiler in a power plant, this paper puts forward the method of flue gas recirculation (FGR) to improve it. Two schemes are put forward: the extraction of flue gas from the economizer or from the induced draft fan to the bottom burner of the furnace. For different load conditions, using the standard method of boiler thermodynamic calculation, the influence of the different FGR schemes on the boiler operation parameters is calculated. The results show that the above FGR schemes can clearly improve the reheat steam temperature; with the increase of the amount of recirculating gas, the rise of the reheat steam temperature increases; with the decrease of the load, the influence of FGR on the steam temperature increases; and the scheme of extracting recirculating flue gas from the economizer outlet has little effect on the boiler efficiency and is therefore more suitable for this boiler, with a reasonable FGR rate of about 10%. Introduction In response to the requirements of the current national exploration and promotion of efficient clean coal and electricity technology, ultra-supercritical double reheat power generation technology is receiving more and more attention [1][2][3]. Units with the ultra-supercritical double reheat technology can increase the thermal efficiency by about 2% compared with single reheat units [4]. But the number of reheater stages and the reheat steam heat absorption of the double reheat boiler have both increased. Yan et al. [5] pointed out that, because of the double reheat technology, the heat absorption ratio of the superheated steam of a 1000MW capacity unit decreases while the heat absorption proportion of the reheat steam increases. That makes the structure of the thermal system of the double reheat unit more complex, and the coordinated control of the reheat steam becomes more difficult. It may lead to the phenomenon that the reheat steam temperature is lower than the design value when the unit is running, which reduces the operating economy of the unit. In view of the difficulty of the problem that the reheat steam temperature of the double reheat unit falls below the design value, many scholars have carried out related research. Dang et al. [6] studied the problem of low reheat steam temperature of a 660MW ultra-supercritical boiler by remoulding the heating area of the reheater. Zhang et al. [7] studied the effect of FGR on reheat steam temperature, taking a 600MW unit as an example for the calculation. In summary, the transformation of the reheater heating area and the use of FGR can effectively solve the problem of low reheat steam temperature, but the method of remoulding the heating area of the reheater is constrained by the structure and layout of the boiler. Therefore, based on the original temperature regulation methods of the reheat steam, flue gas recirculation is proposed in this paper to increase the reheat steam temperature. Two FGR schemes are developed for the different extraction points of the recirculating flue gas and are adopted to calculate the thermodynamic parameters of the boiler, and then the effect of different recirculation rates is analyzed to obtain a reasonable scheme. Overview of the boiler and FGR schemes This is a 1000MW ultra-supercritical double reheat tower boiler.
The temperature regulation methods of reheat steam are swing nozzle and flue gas baffle. The main design parameters of the boiler are shown in table 1. Problems in operation The temperature of single-reheated and secondary reheated steam are lower than the design value under the actual working conditions. The specific parameters are shown in table 2. The principle of FGR The principle of the FGR technology of steam temperature regulation: The gas recirculation fan extracts a part of low temperature flue gas from the outlet of the economizer or induced draft fan and sends into the furnace. Thereby, it reduces the level of flue gas temperature inside the furnace and increases the amount of the flue gas in the furnace. Finally, the reheater and other heat exchangers can be enhanced and the reheat steam temperature is improved. 2.3.Concrete scheme of flue gas recirculation According to the structure of the boiler and the arrangement of the heating surface, the following FGR schemesare formulated: Scheme 1: the recirculating flue gas is extracted from the exit of coal economizer and sent to the bottom of the burner at the bottom of the hearth. Scheme 2: the recirculating flue gas is extracted from the outlet of the induced draft fan and sent to the bottom of the burner at the bottom of the hearth. The different FGR extraction locations correspond to different gas temperatures. The gas temperature from the economizer outlet is about 379 degrees, and thegas temperature from the outlet of the draught fan is about 117 degrees. TheFGRschemes are shown in Figure 1. In view of above FGR schemes, under the BMCR and 75%THA load conditions, this paper calculated and analyzed the influence of different schemes on boiler parameters when there was no recirculation, 5%, 10% and 15% FGR rate. The thermodynamic calculation method for boiler with FGR According to the standard method of thermal calculation of boiler unit, the thermal calculation of boiler unit is realized through VB software programming. Because of the use of FGR, the flue gas volume, composition and gas enthalpy from the return point to the extraction place have changed. When the boiler thermodynamic calculation is carried out, it is necessary to recalculate the gas characteristic parameters and the enthalpy of the flue gas. The Where, V is the flue gas volume (m 3 kg -1 ); r is the FGR rate (%) and I is the flue gas enthalpy (kJkg -1 ). Then, the flue gas temperature after mixing is solved: Where, r θ is the mixed flue gas temperature (℃); ( ) r VC is the average thermal capacity of the mixed flue gas (kJ (kg℃) -1 ). It is necessary to pay attention to the amount, the flow rate and the characteristics of flue gas and the parameters of the enthalpy meter should be based on the actual flow of flue gas after mixing the heated surface. Comparison of the results of flue gas recirculation schemes under the BMCR load condition The influence of the ratio of different recirculation flue gas on the theoretical combustion temperature, the flue gas temperature of the furnace outlet, the temperature of the single-reheated and secondary reheated steam are shown in the following figures. As shown in the figures, the two schemes put the low temperature flue gas into the bottom area of the furnace, which will have an impact on the combustion and heat transfer in the boiler furnace. And the final result is that the furnace outlet gas temperature will decrease. 
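A minimal sketch of the recalculation step described above (mixing the recirculated stream with the base flue gas and solving the mixed temperature from the enthalpy balance) is given below. The linear mixing relations used here are generic assumptions for illustration rather than the paper's own equations, and the mean heat capacity of the mixed gas is supplied externally.

def mixed_flue_gas(V, I, r, I_recirc):
    """Flue gas volume (m3/kg fuel) and enthalpy (kJ/kg fuel) after mixing.
    V, I      : base flue gas volume and enthalpy before mixing
    r         : flue gas recirculation rate (fraction of the base gas flow)
    I_recirc  : enthalpy carried by the recirculated gas at its extraction point
    Assumed relations: the recirculated volume and enthalpy add linearly."""
    return V * (1.0 + r), I + r * I_recirc

def mixed_temperature(I_mixed, VC_mean):
    """Mixed gas temperature (deg C) from I = (VC) * theta, where (VC) is the
    mean heat capacity of the mixed flue gas, kJ/(kg fuel * deg C)."""
    return I_mixed / VC_mean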
Comparing the two schemes, the flue gas extraction point of scheme 2 is the induced draft fan, which results in a lower recirculated gas temperature. For every 1% increase in the recirculation rate, the furnace outlet gas temperature for scheme 1 is reduced by 0.4 ℃ and the theoretical combustion temperature by 25 ℃, while for scheme 2 the corresponding reductions are 0.6 ℃ and 28 ℃. Owing to the increase of the flue gas volume and of the gas velocity over the convection heating surfaces, the flue gas heat release coefficient and the convective heat transfer increase, which eventually raises the temperature of the working medium at the outlet of those surfaces.

Comparison of the flue gas recirculation schemes under the 75%THA load condition

The influence of different recirculation ratios on the above parameters under the 75%THA load condition is shown in the following figures. They show that, under 75%THA load, for every 1% increase in the recirculation rate, the single-reheat steam temperature of scheme 1 increases by 2 ℃ and the secondary reheated steam temperature by 1.8 ℃, while for scheme 2 the increases are 2.2 ℃ and 2 ℃.

The above results show that, under both the BMCR and 75%THA load conditions, schemes 1 and 2 can bring the single-reheated and secondary reheated steam temperatures up to the design values when the FGR rate is about 10%. With an increase of the recirculation ratio, the drop of the furnace outlet gas temperature and the rise of the reheated steam temperature both increase, and at low load the influence of the recirculated flue gas on the reheated steam temperature is larger. Compared with scheme 1, the exhaust gas temperature of scheme 2 is higher and the boiler efficiency lower. Therefore, scheme 1 is the more suitable FGR scheme for this boiler.

Conclusion

Through the above calculation and analysis, this paper draws the following conclusions. The reheat steam temperature of the double reheat boiler can be significantly improved by the FGR scheme of sending recirculating flue gas into the bottom of the furnace. With an increase in the amount of recirculating flue gas, the rise of the reheated steam temperature increases, and with a decrease in load, the influence of the recirculating flue gas on the reheated steam temperature increases. The scheme of extracting flue gas from the economizer outlet and sending it into the bottom of the furnace is the suitable FGR scheme for this boiler, and the reasonable FGR ratio is about 10%. These results can provide a reference for solving the problem of low reheat steam temperature and for the operation optimization of the unit.
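To make the calculation steps above concrete, the following minimal Python sketch (not the authors' VB program) implements the flue-gas mixing relations as reconstructed above, together with the kind of sensitivity-based estimate that leads to the roughly 10% FGR rate. The per-1% temperature rises come from the 75%THA results for scheme 1; the 18 ℃ reheat temperature deficits are hypothetical placeholders, since the actual deficits appear only in Table 2.

```python
# Minimal sketch: flue-gas mixing with recirculation and a rough estimate of the
# FGR rate needed to close a reheat-temperature deficit. Assumed, not the paper's code.

def mix_flue_gas(V, I, I_rec, r, VC_r):
    """Recompute flue-gas volume, enthalpy and temperature after the mixing point.

    V      flue-gas volume without recirculation, m3 per kg fuel
    I      flue-gas enthalpy without recirculation, kJ per kg fuel
    I_rec  enthalpy of the recirculated gas at the extraction point, kJ per kg fuel
    r      recirculation rate as a fraction (0.10 for 10 %)
    VC_r   average heat capacity of the mixed gas, kJ/(kg*degC)
    """
    V_r = V * (1.0 + r)        # mixed flue-gas volume
    I_r = I + r * I_rec        # mixed flue-gas enthalpy
    theta_r = I_r / VC_r       # mixed flue-gas temperature, degC
    return V_r, I_r, theta_r


def required_fgr_rate(deficit_degC, rise_per_percent):
    """FGR rate (%) needed to lift a reheat steam temperature by deficit_degC,
    given the calculated temperature rise per 1 % of recirculation."""
    return deficit_degC / rise_per_percent


if __name__ == "__main__":
    # Scheme 1 at 75%THA: +2.0 degC (single reheat) and +1.8 degC (secondary reheat)
    # per 1 % FGR, as reported above. The 18 degC deficits are placeholders.
    print(required_fgr_rate(18.0, 2.0))   # about 9 % for the single-reheat stage
    print(required_fgr_rate(18.0, 1.8))   # about 10 % for the secondary-reheat stage
```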
Content Analysis of the Discussion of the Atom in General Chemistry Textbooks Using Evaluation Criteria Based on the Nature of Science and Philosophy of Chemistry Evaluation criteria are adapted from previous textbook analyses on the nature of science (NOS) in general chemistry textbooks. These criteria are used to determine how certain NOS dimensions are mentioned and elaborated in those textbooks. Such dimensions emphasize that chemistry is (1) tentative, (2) empirical, (3) model-based, (4) inferential, (5) has technological products, (6) employs instrumentation, and (7) possesses social and societal dimensions. Three book chapters were read and evaluated: the first (on chemistry in general); the second (on atomic structure); and the sixth or seventh chapters (on the electronic structure of atoms). The relevant content in each textbook were rated using the following rubric: Satisfactory and Explicit (S, 2 points); Mention and Implicit (M, 1 point); and No Mention (N, 0 point). Silberberg (2009) has the highest score among the six textbooks with 12 points out of the maximum of 14. It was rated S for five criteria, the most among the six textbooks. Despite the presence of some N evaluations, all textbooks have mentioned some or all of the NOS dimensions formulated, resulting to M and S ratings. This study concludes that NOS dimensions are already present in various ways and varying degrees in each textbook. thinking skills; contribute to the fuller understanding of scientific subject matter; improve teacher education by assisting teachers to develop a richer and more authentic understanding of science; contribute to the clearer appraisal of many contemporary educational debates that engage science teachers and curriculum planners" (p. 11). At its core however, HPS asks a basic question: "What is this thing we call science?" This simple but central question leads to other hosts of questions, such as: "How is science a human and social endeavor?" What does it mean to 'do' science?" and "How does scientific knowledge differ from other kinds of knowledge?" As such, HPS moves beyond the laboratory setting, as well as scientists' own views of their field, to consider other ways of thinking and knowing that might inconspicuously impinge upon the scientific endeavor. Under HPS-based research is discussion on the nature of science (NOS). NOS research involves questions such as "what science is, how it works, how scientists operate as a social group and how society itself both directs and reacts to scientific endeavors" (McComas et al., 1998). As such, NOS challenges misconceptions and myths, people, including science educators and students, might have about science. This study also considers the emerging field of philosophy of chemistry, the academic intersection between standard philosophy of science and the scientific discipline of chemistry. The Stanford Encyclopedia of Philosophy defines two sets of issues and questions that philosophy of chemistry engages in: (a) conceptual issues unique in chemistry, in which they are clarified, articulated and analyzed (e.g. the nature of substance, atomism, the chemical bond, and synthesis). Such issues are subjected to philosophical rigor and perspectives; and (b) re-exploration of traditional topics in the philosophy of science specifically within the context of chemistry (e.g. realism, reduction, explanation, modeling, confirmation). These standard topics are discussed in view of specific chemistry examples and applications. (cf. 
Weisberg et al., 2011) Philosophy of chemistry could help students and teachers gain a deeper understanding of the nature of chemistry. The article "Chemistry Education: Ten Facets to Shape Us" by Vicente Talanquer (2013) in the Journal of Chemical Education mentions ten recent reconceptualizations and new perspectives (which he calls facets) on how chemistry teachers and students could better synthesize and make sense of chemical knowledge taught in the classroom. He calls his ninth facet as Philosophical Considerations, in which philosophy of chemistry is mentioned. Talanquer (2013) argues that issues and debates in philosophy of chemistry could help students and teachers be aware of the power, scope, as well as the limitations of concepts, laws and models we use in chemistry; utilize philosophical arguments as pedagogical tools; gain a much deeper understanding of the nature of chemistry; and be critically reflective of chemistry itself. A further line of study in science education consists of content analyses of textbooks based on their degree and quality of presentation of certain NOS dimensions. Many studies on the HPS and NOS are cognizant of "the role played by textbooks in developing students' informed NOS conceptions….Recent HPS-based research has shown increasing interest in analyzing textbooks and thus providing guidelines for future textbooks" (Niaz and Maza, 2011, p. 2). Chapter 44 of the International Handbook of Research in History, Philosophy and Science Teaching, authored by Mansoor Niaz (2014), reviews the current literature on evaluating and consequently suggesting the inclusion of HPS perspectives in science textbooks. These studies in promoting HPS perspectives in science education argue that HPS and NOS should not be an extra, but instead infused in various modes of learning in science education, including textbooks. Another study concurs, saying that "[t]extbooks, as one of the most important science teaching resources, should provide teachers with a sufficiently wide variety of examples to discuss the different dimensions of NOS" (Vesterinen et al., 2013(Vesterinen et al., , p. 1851. This area of science education research has the same motivation as that of Talanquer (2013) mentioned above -a moving away from conventional, even outmoded, ways of presenting and explaining scientific knowledge. Niaz (2014) further notes that under textbook analysis research, there are two types of studies presently done that entail evaluation of textbooks (p. 1413): (1) domain specific ["based on a historical reconstruction of a given topic of the science curriculum"]; and (2) domain general ["based on a series of nature of science (NOS) dimensions, which are in turn derived from the history and philosophy of science"]. This present study is of the second type -an evaluation of textbooks based on certain NOS dimensions. A textbook analysis of Mansoor Niaz and Arelys Maza in 2011 evaluated introductory chapters or prefaces of general chemistry textbooks. They devised nine criteria that elucidated certain elements or dimensions of NOS, some of which include "the tentative nature of scientific theories," that "observations are theory-laden," and that "scientific ideas are affected by their social and historic milieu." 
The following are the specific guidelines of Niaz and Maza (2011) for their ratings of S, M, or N, which this present study adopts:  Satisfactory (S): "Treatment of the subject in the textbook is considered to be satisfactory, if the criterion is described and examples provided to illustrate the different aspects." The said study also awarded numerical weights to each rating: S = 2, M = 1, N = 0. However, while Niaz and Maza's samples are general chemistry textbooks, their NOS criteria are still not specific to chemistry. The chapters that they analyzed (the Introduction, Preface, or first chapters) are also not yet explicit in terms of chemistry concepts. Thus, another relevant textbook analysis for this present study is that of Vesterinen et al. (2013). This study is particularly significant because it incorporates literature from philosophy of chemistry in its criteria for evaluation of NOS dimensions in chemistry textbooks. Their analysis lies in two successive rounds, each with its own criteria: (1) the four themes of scientific literacy (knowledge of science; investigative nature of science; science as a way of thinking; and interaction of science, technology and society). Focusing on the third theme ("science as a way of thinking"), (2) seven NOS dimensions were developed (tentative; empirical; model-based; inferential; technological products; instrumentation; and social and societal dimensions). Unlike the criteria of Niaz and Maza (2011), the criteria of Vesterinen et al. (2013) are more explicit in terms of chemistry concepts. These two studies just mentioned, Niaz and Maza (2011) and Vesterinen et al. (2013), would form the backbone for the methodology of the present work. As such, this study aims to specify that link between philosophy of chemistry and chemistry education through textbook analysis. The general objective of this project is to evaluate and analyze select general chemistry textbooks based on their presentation and discussion of the atom using criteria and perspectives from the nature of science and philosophy of chemistry, with the following specific objectives in mind: (a) to formulate criteria for evaluation, adapted from the textbook analyses of Niaz and Maza (2011) and Vesterinen et al. (2013); and (b) to evaluate select (six) college general chemistry textbooks using the above criteria, focusing on how the atom is presented and discussed. METHODS The main aim of this study is to evaluate and analyze select general chemistry textbooks KIMIKA • Volume 27, Number 2, July 2016 based on their discussion of the atom (in particular the discovery and development of theories concerning atomic and electronic structures), using criteria and perspectives from NOS and philosophy of chemistry. As already stated, the textbook analysis for this study appropriates the previous work done by Niaz and Maza (2011) and Vesterinen et al. (2013). In terms of scope, this study also evaluated the first chapters of the textbooks, thus making it similar to Niaz and Maza (2011). However, two additional chapters aside from the preface or introductory chapter were also read and evaluated -those pertaining to the historical development and application of the atomic and quantum theories. Such latter chapters discuss the historical and theoretical development of the concept of the atom -a topic that this study perceives could bring about possible philosophical considerations, as well as corrections to misconceptions that abound in teaching and learning about it. 
Atoms, as the fundamental unit of matter, can elicit philosophical and critical thinking questions (for instance, the real nature of orbitals, is a key concern in philosophy of chemistry). A more practical reason would be that, due to time constraints and given the focus of this study, this work cannot possibly attempt to evaluate all chapters of each textbook, as done by Vesterinen et al. (2013). While NOS criteria might be expected to be mostly present in the first chapter (due to its more general nature, it focuses more on "science" in general instead of a specific scientific field such as chemistry), this study deems it worthwhile to look at other chapters in the textbook and see how those chapters still have some vestiges of this more general discussion and how they can still carry and discuss the relevant NOS dimensions in specific chemistry topics. Six textbooks in general chemistry are chosen, all published in the United States, with copies present in the Ateneo de Manila University Department of Chemistry, and used by the department faculty in its undergraduate chemistry courses. The editions under consideration are the most recent ones that are presently available and accessible to the present study. Supplementary Table 1 lists the editions of these general chemistry textbooks, as well as the specific chapters to be analyzed. These textbooks are also widely-known and widely-used titles in university-level general chemistry courses in the Philippines and abroad. There are only three chapters considered and evaluated for this study: the first (which introduces science and chemistry in general); the second (which discusses atoms, molecules and ions, the atomic structure, as well as the development of the atomic theory); and the sixth or seventh chapters (chapters on quantum theory and the electronic structure of atoms, depending on the textbook). Additionally, the Preface is also read to elucidate each author's philosophy on the content and organization of their textbook. Usually, the chapter on the periodic table succeeds the chapter on quantum theory. While these topics are closely related, only the quantum origins of some periodic table properties are considered in this study. The methodology of this present study closely follows the presentations of Niaz and Maza (2011) and Vesterinen et al. (2013) in their textbook analyses. Since it already involves chemistry and philosophy of chemistry explicitly, the seven-point criteria suggested by Vesterinen et al. (2013) is adapted in this study. So far, it is the only NOS study that explicitly points to literature on the philosophy of chemistry as a source and justification for its criteria. These criteria are as follows: that chemistry is (1) tentative, (2) empirical, (3) model-based, (4) inferential, (5) has technological products, (6) employs instrumentation, and (7) possesses social and societal dimensions. However, there are many overlaps of these present criteria with the previous study of Niaz and Maza (2011), insofar as both studies created evaluation criteria on the nature of science as applied to general chemistry textbooks. The criteria for evaluating general chemistry textbooks were adapted from Vesterinen et al. (2013), but the use of numerical ratings equivalent to No Mention, Mention, or Satisfactory regarding relevant passages were taken from Niaz and Maza's study (as mentioned above in the Introduction). 
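As a concrete illustration of the hybrid rubric just described, the short Python sketch below tallies a textbook's score out of the maximum of 14 from its seven S/M/N ratings. The ratings in the example are invented placeholders, not data from this study.

```python
# Minimal sketch of the hybrid rubric: seven NOS criteria rated Satisfactory/Explicit (S),
# Mention/Implicit (M) or No Mention (N), converted to 2/1/0 points and summed.

POINTS = {"S": 2, "M": 1, "N": 0}
CRITERIA = ["tentative", "empirical", "model-based", "inferential",
            "technological products", "instrumentation", "social and societal dimensions"]

def total_score(ratings):
    """ratings: dict mapping each criterion to 'S', 'M' or 'N'."""
    return sum(POINTS[ratings[c]] for c in CRITERIA)

# Illustrative example: a textbook rated M on every criterion except one S.
example = {c: "M" for c in CRITERIA}
example["empirical"] = "S"
print(total_score(example), "out of", 2 * len(CRITERIA))   # prints: 8 out of 14
```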
While literature points to textbook analyses already done on the discussion of the atom in textbooks, the novelty of the present work is using the seven-point criteria from Vesterinen et al. (2013) with relevant supplementing information from Niaz and Maza (2011). Vesterinen et al. (2013) did not use a grading scheme. However, one of their tables attributes an Explicit or Implicit label to certain passages. This present study sees the similarities in Niaz and Maza's use of Satisfactory and Vesterinen, et al.'s use of Explicit, as well as No Mention and Implicit, respectively. Hence, these two sets of rubrics are integrated in this study for a set of "hybrid" evaluation criteria. Depending on the quality of exposition and discussion of each NOS dimension, the following points are awarded by this present study to the textbooks being evaluated: Satisfactory and Explicit (S) = 2 points; Mention and Implicit (M) = 1 point; No Mention (N) = 0 point. A relevant excerpt from the textbook merits a Satisfactory and Explicit grade if it could move beyond mere one-sentence and/or the traditional and usual discussion of the topic at hand, even if the NOS dimension under question is stated. The relevant text should have explicitly included more explanations, illustrations, examples, nuances, and questions that elicit thinking for the students, and it should have informed them of alternative perspectives of looking at the topic at hand. For instance, the usual chemistry major might know and agree that her field is "empirical" and "model-based." However, this study hopes that textbooks (and the chemistry major) move beyond the standard notions of chemistry as "empirical" and "model-based," and instead nuance those terms to accommodate the scope, limitations and other ways of thinking about chemistry concepts. RESULTS AND DISCUSSION Despite the presence of some No Mention (N) evaluations, all textbooks have mentioned some or all of the NOS dimensions formulated, resulting to M and S ratings. Furthermore, it can be seen that the topic of the atom can elicit mention of all of the seven NOS criteria (at least for an M rating). Three textbooks (Brown et al., 2015;Chang, 2010;and Silberberg, 2009) received a mixture of M and S ratings, with no N. The other three have received an N rating in some criteria. Of the six textbooks evaluated in this study, Silberberg (2009) received the highest rating (12 points out of a perfect score of 14). Brown et al. (2015) and Hill et al. (2013) closely follow, with 11 and 10 points, respectively. The results of this study show that NOS dimensions are already present in various ways and varying degrees in each textbook. All textbooks in this study have manifested, in different degrees and combinations, the seven-point criteria used in this study. This confirms an observation made by Niaz (2014): "it is important to note that a small number of textbooks did provide material based on HPS that can further students' understanding of science. This shows that HPS is already 'inside' the science curriculum" (p.1435). The textbooks that merited S ratings are those that gave explicit discussions and/or provided additional text boxes on the NOS dimensions in question. Silberberg (2009), having the highest number of S ratings (5), possesses most of the content (as stated in the sevenpoint criteria) desired by this study. We know that textbooks form the background of any formal type of education, especially in educational institutions. 
Hence, there is hope that such dimensions and elements of the "nature of science" could be part of the education of both students and teachers, and more so be discussed in the classroom setting. Agreeing with Niaz and Maza (2011), the relatively high scores of Brown et al. (2015), Hill et al. (2013), and Silberberg (2009) say that while NOS is not an explicit and major objective in chemistry textbooks (so far, no textbook has included the elucidation of NOS as part of its Preface), certain passages inside those textbooks align with NOS dimensions. Zumdahl and Zumdahl (2014) Chemistry is Tentative. Only one textbook received an S rating -Chemistry for Changing Times by Hill et al. (2013), primarily because of the specific and explicit section in its Chapter 1 devoted to the scientific method (Science: Reproducible, Testable, Tentative, Predictive, and Explanatory). Under this section is a subsection entitled "Scientific Theories Are Tentative and Predictive." Excerpt 1.1 in Supplementary Table 2 is from that subsection. The other textbooks have some discussion of the scientific method. However, it is only in Hill et al. (2013) that the word "tentative" is explicitly stated in the context of scientific method. Three textbooks received M - Brown et al. (2015), Chang (2010), and Silberberg (2009). The relevant passage from Brown et al. (2015) is in Supplementary Table 2 (excerpt 1.2), while those from the other two textbooks are shown below. They all hint towards the tentative nature of theories and hypothesis (for instance, that they are not absolutely true and certain), but without making these more explicit. To be sure, Bohr made a significant contribution to our understanding of atoms, and his suggestion that the energy of an electron in an atom is quantized remains unchallenged. But his theory did not provide a complete description of electronic behavior in atoms. In 1926 the Austrian physicist Erwin Schrödinger, using a complicated mathematical technique, formulated an equation that describes the behavior and energies of submicroscopic particles in general, an equation analogous to Newton's laws of motion for macroscopic objects. The Schrödinger equation requires advanced calculus to solve, and we will not discuss it here. It is important to know, however, that the equation incorporates both particle behavior, in terms of mass m, and wave behavior, in terms of a wave function ψ (psi), which depends on the location in space of the system (such as an electron in an atom). [Chang (2010), p. 293, italics in the original] Whether derived from actual observation or from a "spark of intuition," a hypothesis is a proposal made to explain an observation. A sound hypothesis need not be correct, but it must be testable. Thus, a hypothesis is often the reason for performing an experiment. If the hypothesis is inconsistent with the experimental results, it must be revised or discarded. [Silberberg (2009), p. 13, italics in the original] Masterton et al. (2012) and Zumdahl and Zumdahl (2014) both received N because they did not have any discussion pertaining to the tentative nature of theories, especially in relation to the scientific method. This study rates the relevant excerpts from Zumdahl and Zumdahl (2014) on the scientific method under a different criterion (Criterion 7). They emphasized more the social dimension of the scientific method than its tentative nature. Masterton et al. (2012) did not discuss the scientific method altogether in its Chapter 1. Chemistry is Empirical. 
All textbooks have some discussion of the "empirical" criterion, with three textbooks each for the M and S ratings. The three textbooks receiving S do not explicitly state that "chemistry is an empirical science." However, this present study has considered the number of additional content that each textbook gives to experimentation. Since it is standard to teach the development of the atomic and quantum theories, it is expected that all textbooks have some discussion of historical experiments accompanying the various stages of those theories. Vesterinen et al. (2013) Three textbooks received S ratings, with excerpts shown below: Scientists do not merely state what they feel may be true. They develop testable hypothesis (educated guesses) as tentative explanations of observed data. They test these hypotheses by designing and performing experiments. Experimentation distinguishes science from the arts and the humanities. In the humanities, people still argue about some of the same questions that were being debated thousands of years ago: What is truth? What is beauty? These arguments persist because the proposed answers cannot be tested and confirmed objectively. [Hill et al. (2013), p. 5, italics and emphasis in the original] Hill et al.'s discussion is noteworthy, first because it is under the subsection entitled "Scientific Hypotheses are Testable." Second, in stating that experimentation is what separates the sciences from other fields, it comes very close to the intent of Criterion 2, even without mentioning the word "empirical" explicitly. This is reinforced in their text box What Science is Not, which emphasizes the distinguishing role of experiments in science. Chemical changes can be dramatic. In the account that follows, Ira Remsen, author of a popular chemistry text published in 1901, describes his first experiences with chemical reactions. The chemical reaction that he observed is shown in Figure 1.11. (Figure 1.11. The chemical reaction between a copper penny and nitric acid. The dissolved copper produces the blue-green solution; the reddish brown gas produced is nitrogen dioxide. While reading a textbook of chemistry, I came upon the statement "nitric acid acts upon copper," and I determined to see what this meant. Having located some nitric acid, I had only to learn what the words "act upon" meant. In the interest of knowledge I was even willing to sacrifice one of the few copper cents then in my possession. I put one of them on the table, opened a bottle labeled "nitric acid," poured some of the liquid on the copper, and prepared to make an observation. But what was this wonderful thing which I beheld? The cent was already changed, and it was no small change either. A greenish-blue liquid foamed and fumed over the cent and over the table. The air became colored dark red. How could I stop this? I tried by picking the cent up and throwing it out the window. I learned another fact: nitric acid acts upon fingers. The pain led to another unpremeditated experiment. I drew my fingers across my trousers and discovered nitric acid acts upon trousers. That was the most impressive experiment I have ever performed. I tell of it even now with interest. It was a revelation to me. Plainly the only way to learn about such remarkable kinds of action is to see the results, to experiment, to work in the laboratory.) [Brown et al. (2015), pp. 12-13] Brown et al. (2015) cited that interesting anecdote to show that certain chemical properties could only be observed through experiment. 
Aside from that excerpt, the authors also have two text boxes relevant for Criterion 2 -Measurement and the Uncertainty Principle, and Design an Experiment on the photoelectric effect. Aside from an explicit discussion of what a scientific experiment is (excerpt 2.1 in Supplementary Table 2), Silberberg (2009) has another relevant passage that explains the important role of quantitative and reproducible measurements in science. The following is an excerpt from Silberberg's discussion of Lavoisier and how careful measurements led this scientist to develop his own theory of combustion. Lavoisier's new theory of combustion made sense of the earlier confusion. A combustible substance such as charcoal stops burning in a closed vessel once it combines with all the available oxygen, and a metal oxide weighs more than the metal because it contains the added mass of oxygen. This theory triumphed because it relied on quantitative, reproducible measurements, not on the strange properties of undetectable substances. Because this approach is at the heart of science, many propose that the science of chemistry began with Lavoisier. [Silberberg (2009), p. 12, italics in the original] As mentioned, the remaining three textbooks all received M ratings. They have discussed the connection between theory and experiment in some way, however lacking the elaboration and creativity of the discussions above. Excerpt 2.2 in Supplementary Table 2 is from Zumdahl and Zumdahl (2014). Passages from the other two textbooks are cited below: Hypotheses that survive many experimental tests of their validity may evolve into theories. A theory is a unifying principle that explains a body of facts and/or those laws that are based on them. Theories, too, are constantly being tested. If a theory is disproved by experiment, then it must be discarded or modified so that it becomes consistent with experimental observations. Proving or disproving a theory can take years, even centuries, in part because the necessary technology may not be available. [Chang (2010), p. 9, italics and emphasis in the original] Like any useful scientific theory, the atomic theory [of Dalton] raised more questions than it answered. Scientists wondered whether atoms, tiny as they are, could be broken down into still smaller particles. Nearly 100 years passed before the existence of subatomic particles was confirmed by experiment. Two future Nobel laureates did pioneer work in this area. J. J. Thomson was an English physicist working at the Cavendish Laboratory at Cambridge. Ernest Rutherford, at one time a student of Thomson's, was a native of New Zealand. Rutherford carried out his research at McGill University in Montreal and at Manchester and Cambridge in England. He was clearly the greatest experimental physicist of his time, and one of the greatest of all time. [Masterton et al. (2012), p. 28] Chemistry is Model-Based. As with Criterion 2, all textbooks have some discussion of the "model-based" criterion, thus no textbook received an N rating; four received M ratings, and only Hill et al. (2013) and Silberberg (2009) receiving S. All textbooks have some discussion of models and specific models accompanying specific areas and historical periods in chemistry. To qualify for the S rating however, this study looked at how models as such are explicitly discussed in each textbook's discussion of the scientific method. 
Those who received S ratings either have explicit subsections discussing models and/or have devoted several paragraphs explaining what models do for science. Silberberg's excerpt is 3.1 in Supplementary Table 2, stating that the creation of models is an important aim for the scientific method. Silberberg's introduction to his chapter on quantum theory is also noteworthy in its summary of several competing models: [R]evolutions in science are not the violent upheavals of political overthrow. Rather, flaws appear in an established model as conflicting evidence mounts, a startling discovery or two widens the flaws into cracks, and the conceptual structure crumbles gradually from its inconsistencies. New insight, verified by experiment, then guides the building of a model more consistent with reality. So it was when Lavoisier's theory of combustion superseded the phlogiston model, when Dalton's atomic theory established the idea of individual units of matter, and when Rutherford's nuclear model substituted atoms with rich internal structure for "billiard balls" or "plum puddings." In this chapter, you will see this process unfold again with the development of modern atomic theory. [Silberberg (2009), p. 269] The following excerpt is from Hill et al. (2013): Scientists use models to help explain complicated phenomena. A scientific model uses tangible items or pictures to represent invisible processes. For example, the invisible particles of a gas can be visualized as billiard balls, as marbles, or as dots or circles on paper. We know that when a glass of water is left standing for a period of time, the water disappears through the process of evaporation. Scientists explain evaporation with a theory, the kinetic-molecular theory, which proposes that a liquid composed of tiny particles called molecules that are in constant motion….In the bulk of the liquid, these molecules are held together by forces of attraction. The molecules collide with one another like billiard balls on a playing table. Sometimes, a "hard break" of billiard balls causes one ball to fly off the table. Likewise, some of the molecules of a liquid gain enough energy through collisions to break the attraction to their neighbors, escape from the liquid, and disperse among the widely spaced molecules in air. The water in the glass gradually disappears. This model gives us more than a name for evaporation; it gives us an understanding of the phenomenon. [Hill et al. (2013), p. 6, italics in the original] Aside from the above passage is taken from the subsection "Scientific Models are Explanatory." Another noteworthy passage is its tabulated version of the postulates under Dalton's atomic theory vis-à-vis modern modifications of it, the only textbook to have done so. In turn, the other four textbooks receiving M ratings only described particular models, however still carrying the notion that models replace older models depending on the available experimental evidence. Excerpt 3.2 in Supplementary Table 2 is from Masterton et al. (2012), while the passage below is from Chang (2010): This was a most surprising finding [Rutherford's alpha particle experiments] for, in Thomson's model, the positive charge of the atom was so diffused that the alpha particles were expected to pass through with very little deflection… Rutherford was later able to explain the results of the scattering experiment, but he had to abandon Thomson's idea and propose a new model for the atom. [Chang (2010), p. 
47] While Zumdahl and Zumdahl (2014) has an explicit subsection entitled "Scientific Models" (in its Chapter 1), the pertinent paragraph discussing the model is not that explicit, compared with Silberberg's and Hill et al.'s, as shown above. It only focused on the notion of models as human constructs. Chemistry is Inferential. For this criterion, attention was focused on two sets of discussions: (1) on chemistry as the science that bridges (through inference) the submicroscopic and macroscopic realms, and (2) how scientists actually use inference when they think and work. If the textbooks have at least excerpts pertaining to the first, then they are graded as M. Chang (2010), excerpt 4.2 in Supplementary Table 2, is rated in this way because it only has the first set of relevant points. Three textbooks rated as S (Brown et al., 2015;Silberberg, 2009;and Zumdahl and Zumdahl, 2014) all have discussions of the microscopic and macroscopic realms in chemistry, but they also provided additional relevant excerpts pertaining to the second set of expected discussion mentioned above. They have passages on the scientific method as not fixed and requiring much inference and creativity. Zumdahl and Zumdahl (2014) is cited as excerpt 4.1 of Supplementary Table 2. Lastly, there are two textbooks that do not have the first set of expected content - Masterton et al. (2012), and Hill et al. (2013). The former is rated as N for Criterion 4. As for Hill et al. (2013), this study decided to rate it as M because it has many passages that pertain to the second set of expected content. One such passage is as follows: Atoms are exceedingly tiny particles, much too small to see even with an optical microscope. It is true that scientists can obtain images of individual atoms, but they use special instruments such as the scanning tunneling microscope. Even so, we can see only outlines of atoms and their arrangements in a substance. If atoms are small, how can we possibly know anything about their inner structures? Although scientists have never examined the interior of an atom directly, they have been able to obtain a great deal of indirect information. By designing clever experiments and exercises their powers of deduction, scientists have constructed an amazingly detailed model of what an atom's interior must be like. [Hill et al. (2013), p. 61, italics in the original] Aside from that passage, it has sections on critical thinking and serendipity, respectively. Unfortunately, it has no relevant passage explaining the role of chemistry as bridging the submicroscopic and macroscopic realms. Chemistry has Technological Products. The evaluation of Criterion 5 poses a problem for this study because the textbook chapters under consideration are not explicitly on chemical reactions, synthesis, or organic chemistry. Those chapters will naturally have mentioned newly discovered or synthesized compounds. In contrast, the chapters evaluated here deal mostly with the general nature of science and chemistry, as well the historical development of atomic and quantum theories. Nevertheless, Criterion 5 is retained in this study to be complete and consistent with the application of the criteria from Vesterinen et al. (2013). For the purposes of this study, technological products pertain not only to the synthesis of compounds, but also to any discussion (in the chapters under evaluation) of any products or materials whose properties could be explained by understanding the concepts in those chapters. 
Only Zumdahl and Zumdahl (2014) received an S rating; the rest received M. These five textbooks all mentioned the neon light as an everyday object that illustrates the concepts of line spectra and atomic emission characteristics of certain gases. Brown This study rated Zumdahl and Zumdahl (2014) as S because it has long text boxes explaining the origin of certain products, such as on Post-It Notes and fireworks. While some textbooks mentioned fireworks using the same principle as neon lights (namely, that different colors of light result from unique emissions of ions), only Zumdahl and Zumdahl (2014) discussed the mechanism behind fireworks at length. Excerpt 5.1 in Supplementary Table 2 contains a part of their text box on fireworks. Chemistry Employs Instrumentation. Relevant passages under this criterion are the discussion of specific instruments used in chemistry. Those that merited an S rating are those that have extended explanations of the principles and the use of such instruments. There are three textbooks that received S ratings, all of them having such rating because of the relevant text boxes. Instruments are also mentioned in the main text, but these textbooks provided additional space for explanations of certain instruments. For instance, Brown et al. (2015) has text boxes for mass spectrometry and magnetic resonance imaging; Chang (2010) has additional content on lasers and electron microscopes; and Silberberg (2009) has text boxes on mass spectrometry, basic separation techniques, and spectrophotometry. Silberberg aptly titled these text boxes as Tools of the Laboratory. A part of Silberberg's text box on mass spectrometry is cited as excerpt 6.1 in Supplementary Table 2. Masterton et al. (2012) received M because it does not have any additional text boxes, although mass spectrometry is mentioned in one passage (see Supplementary Table 2, excerpt 6.2). Hill et al. (2013), and Zumdahl and Zumdahl (2014) received N ratings because discussions on specific instruments could not be found. Chemistry Possesses Social and Societal Dimensions. Criterion 7 focuses on how science is actually practiced, how the scientific enterprise has a human and social side. The production and transmission of scientific knowledge are oftentimes not clear-cut and absolutely objective, but resulted from many controversies and debates. All textbooks analyzed have relevant passages pointing to some social relevance of chemistry. The excerpts under this criterion point to the human and social dimension of chemistry topics such as the relevance of studying chemistry, the scientific method, as well as short biographical notes of certain scientists. Since a short history of the atomic and quantum theories is included in all textbooks, all of them were able to discuss in various ways particular scientists in the history of chemistry. Agreeing with Vesterinen et al. (2013), anecdotal passages were given an M rating. Their own results fail to see a Satisfactory and Explicit passage, saying that "portrayals of historical scientists and their work in the analyzed textbooks are mostly anecdotal and hardly provide reader with adequate descriptions of the larger cultural milieu in which scientific discoveries and innovations were made" (p. 1850). To reach the level of an S rating, this study looked for text boxes that elaborated certain social dimensions. For instance, Brown et al. (2015) has text boxes entitled Chemistry Put To Work. 
One such box refers to the relation of chemistry with the chemical industry, one of the desired content of Vesterinen et al. (2013) for Criterion 7. A segment of that text box is in excerpt 7.1 of Supplementary Table 2. Masterton et al. (2012), even if stating outright in their Preface that they tried to make their textbook as concise as possible, still provided text boxes that were rated satisfactory, such as Chemistry Beyond the Classroom (one on ethyl alcohol and the law, another on the changing color of lobsters when cooked) and Chemistry the Human Side (on Glenn Seaborg). Silberberg (2009), aside from text boxes (relevant ones for this criterion are titled Chemical Connections), also provided long biographical notes of certain chemists. Hill et al. (2013) has the most unique contributions with regards to Criterion 7, providing additional topics not usually discussed in standard chemistry textbooks such as risk-benefit analysis and green chemistry. They wanted their textbook to have an explicit green chemistry content and approach. All chapters in that textbook have page-long text boxes on specific aspects of green chemistry. CONCLUSIONS AND RECOMMENDATIONS This study aimed to formulate criteria for content analysis of general chemistry textbooks based on certain dimensions of the nature of science (NOS), informed by relevant research on NOS and history and philosophy of science (in particular, philosophy of chemistry). These criteria pertain to chemistry as being (1) tentative, (2) empirical, (3) modelbased, (4) inferential, (5) has technological products, (6) employs instrumentation, and (7) possesses social and societal dimensions. The second part of the study consisted of the application of these criteria to ascertain how and to what extent such criteria are mentioned, emphasized and elaborated in these textbooks. Despite the presence of some No Mention (N) evaluations, all textbooks have mentioned some or all of the NOS dimensions formulated, resulting to M and S ratings. Silberberg (2009) has the highest score among the six textbooks with 12 points out of the maximum of 14. Silberberg (2009) was rated S for five criteria, the most number among the six textbooks, namely: (2) empirical, (3) model-based, (4) inferential, (6) instrumentation, and (7) social and societal dimensions. Two textbooks follow closely: Brown et al. (2015) with 11 points, and Hill et al. (2013) with 10. Originally, this study aimed at examining whether there is explicit philosophical content in general chemistry textbooks, as established by certain NOS dimensions. As the research progressed, this study faced the reality that such textbooks are not intended to be texts for philosophy nor philosophy of science/chemistry, and the main audience remain to be chemistry if not other science majors. (Hill et al., 2013 is an exception because it was written for non-science majors.) Thus, this study granted certain textbooks with the rating of Satisfactory and Explicit not due to explicit philosophical content, but due to additional effort on the part of the authors to move beyond the standard discussion of textbook material. These "extras" are immediately and visually seen in the form of text boxes that focus on specific chemical concepts and applications, as well as other ways of thinking about chemistry. These text boxes are considered in this study aside from the actual text. The corresponding author worked alone in this project, a key limitation of this study. 
Most textbook analyses are done by more than one researcher. This is to ascertain some form of reliability in the evaluations. Published works on textbook analysis involve teams of evaluators and entailed computations of inter-rater agreements. There are deliberations as well as the quantitative measure of the inter-rater agreement between evaluators (Cohen's kappa statistic is calculated in many studies). If more than one researcher continues and improves this current study, then the inter-rater agreement could be computed. Such research would thus be more quantitative and reliable, given the increased number of evaluators. Another recommendation is that local chemistry educators (especially those involved in chemistry education research) should look into the line of research undertaken by this thesis and examine possible applications of studies advocating for an inclusion of HPS into various forms of chemistry teaching and learning. As this is a study that promotes interdisciplinary learning between chemistry and philosophy, possible implications and applications to our K-12 program could be assessed. NOTE This article is a condensed version of an undergraduate chemistry thesis, bearing the same title, completed and defended by the corresponding author at the Ateneo de Manila University during the first semester of 2015, under the guidance of the three co-authors. It was then presented as a poster during the 31 st Philippine Chemistry Congress last April 13-15, 2016 at Iloilo City with the theme "Chemistry Beyond Borders: Blurring Traditional Boundaries." The author, presently a senior high school chemistry teacher at Xavier School in San Juan City, has also discovered that the said institution already uses a textbook (Pearson Baccalaureate Higher Level Chemistry, 2nd edition, by Catrin Brown and Mike Ford, ISBN 9781447959755) where text boxes on the Nature of Science (NOS) and the Theory of Knowledge (TOK) are already interspersed throughout the text. Exemplar content desired by this study is explicitly found in those text boxes. This particular textbook is published under the auspices of the International Baccalaureate Diploma Program (IBDP). Xavier School, as an IB World School, is accredited to implement the IBDP in its senior high school. Interestingly, TOK is a required separate "core" course in the IBDP, however its key concepts (as well as that of NOS) are already applied and integrated in IBDP textbooks.
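Because the recommendations above point to Cohen's kappa as the usual quantitative measure of inter-rater agreement, a minimal sketch of that computation for two raters assigning S/M/N ratings is given below; the two rating lists are invented for illustration only.

```python
# Minimal sketch of Cohen's kappa for two raters assigning S/M/N ratings.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / n ** 2    # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of seven passages by two evaluators.
rater_1 = ["S", "M", "M", "N", "S", "M", "S"]
rater_2 = ["S", "M", "N", "N", "S", "S", "S"]
print(round(cohens_kappa(rater_1, rater_2), 2))
```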
Descriptions of acute transfusion reactions in a Brazilian Transfusion Service

Fernando Callera, Anita C. O. Silva, Alessandro F. Moura, Djanete B. Melo, Claudio M. T. P. Melo. Correspondence to: Fernando Callera, Serviço de Hematologia e Hemoterapia de São José dos Campos, Rua Antonio Sais 425 – Centro, 12210-040 – São José dos Campos – São Paulo. Tel/Fax: (0xx12) 39213766 – e-mail: fcallera@shhsjc.com.br

Acute transfusion reactions have been found to occur during or within 24 hours of transfusion. The aim of this work is to describe the main characteristics of acute reactions reported in a Brazilian transfusion service. A preprinted report form was used to record the age and sex of the transfusion recipients, the blood component requested, the medical specialty involved and transfusion-related signs and symptoms; transfusionists performed a direct observation during the transfusion and during a period of four hours following the transfusion. Data were prospectively collected for 90 days from 30 hospitals and health facilities supplied by the Service of Hematology and Hemotherapy of São José dos Campos. Acute reactions were recognized as febrile nonhemolytic, allergic, fluid overload, transfusion-related acute lung injury (TRALI), anaphylactic and metabolic reactions. In a total of 8,378 transfusions, 46 acute reactions were recorded (5.5 per 1000 units transfused: 28 febrile nonhemolytic, 12 allergic, 5 anaphylactic and 1 fluid overload). TRALI and metabolic reactions were not detected. The majority (27) were associated with RBCs, followed by PLTs (11), FFP (6) and partial units (2). The median age of the recipients was 43 years (3 months to 83 years; 23 males and 23 females). Overall, 12 (26.1%) events were recorded in the oncology, 12 (26.1%) in the medicine and 7 in the intensive care unit departments. This study provides baseline acute transfusion reaction information for a specific period of time in a Brazilian transfusion service. Rev. bras. hematol. hemoter. 2004; 26(2):78-83.

Introduction

The transfusion of blood components is usually a temporarily effective means of correcting red cell, platelet and coagulation factor deficits. Unfortunately, blood components are occasionally unsafe, which results in a spectrum of adverse reactions following transfusion. Acute noninfectious transfusion reactions represent one kind of untoward effect of blood transfusion. Acute reactions have been found to occur during or within 24 hours of transfusion and include acute hemolytic, allergic, febrile nonhemolytic, fluid overload, transfusion-related acute lung injury (TRALI), anaphylactic and metabolic reactions [1,2]. These types of reactions may vary in severity from mild to fatal, justifying the creation of systems of surveillance and alarm extending from blood collection to follow-up of the recipients. Efforts have been made by different transfusion services to analyze the distribution of acute transfusion reactions, their frequency and the types of blood products involved [3].
The Service of Hematology and Hemotherapy of São José dos Campos (SHHSJC), São Paulo, is a blood center that issues over 35,000 units of blood components annually and supplies 30 hospitals and health facilities. An estimated population of 1,000,000 people is served by the SHHSJC. In this region, the hospitals are non-teaching and deliver a full range of medical and surgical services, including emergency care, autologous bone marrow transplantation, hematological malignancy treatment and cardiothoracic surgery. In general, over 2,900 units of blood components are transfused per month by the SHHSJC. The aim of the present study was to describe the main characteristics of acute transfusion reactions reported in the SHHSJC.

Materials and Methods

Data were obtained from all hospital blood banks served by the SHHSJC. Acute transfusion reactions were defined as those occurring at any time up to 24 hours following a transfusion of blood components, excluding cases of acute reactions due to incorrect blood component transfusion. A preprinted report form was designed to collect the following information: age and sex of the transfusion recipient, blood component requested, medical specialty involved, and transfusion-related signs and symptoms. According to these signs and symptoms, transfusion reactions were recognized as febrile nonhemolytic, allergic, fluid overload, transfusion-related acute lung injury (TRALI), anaphylactic and metabolic reactions. Allergic reactions were defined as rashes, dyspnea or angioedema without hypotension, and anaphylactic reactions were defined as hypotension with one or more of the following: rash, dyspnea or angioedema [1]. Data such as the date and time of the transfusion, the date and time of the recipient monitoring and the name of the blood bank personnel were collected to control deviations related to record documentation. The SHHSJC transfusionists were instructed to perform a direct observation during the transfusion and for a period of four hours from the end of the transfusion, and to fill in the preprinted report form regardless of the presence or absence of any adverse event. Data were prospectively collected from August 1st to October 30th, 2003. All adverse signs and symptoms were promptly investigated by a hematologist of the SHHSJC. We performed a descriptive statistical analysis in order to obtain values of mean, median and standard deviation.

Results

Over the study period the SHHSJC issued 8,528 units. A total of 3,713 blood component requests were analysed and 8,378 units were recorded as transfused (2.25 units per request, 98.2% of the units issued). According to specialties, the greatest use of blood components was observed in the adult intensive care unit (2,663 units), surgery (1,229 units), emergency (972 units), oncology (833 units) and medicine (831 units) departments. These five specialties accounted for 77.8% of all the units of blood components transfused. In addition, the adult intensive care unit, surgery, emergency and oncology departments showed a high number of units transfused per request: 2.73, 3.11, 2.80 and 2.78, respectively (Table 1). Overall, 56.7% of the recipients were male and 43.3% female. The number of recipients aged from 50 to 60 years was the highest among both female (13.7%) and male (15.9%) recipients (Table 2).
A total of 46 transfusion reactions were recorded. Febrile nonhemolytic reactions were the most frequent (28), followed by allergic reactions (12). TRALI and metabolic reactions were not detected during the study period. Red blood cells (RBCs) accounted for 48.3% (4,048) of all transfused units, fresh frozen plasma (FFP) for 24.2% and platelets (PLTs) for 19%. The majority of the transfusion reactions were associated with RBC transfusion (27), followed by PLT transfusion (11). The rate of transfusion reactions per unit transfused was 0.0055 (5.5 reactions per 1000 units transfused). Table 3 shows the different kinds of transfusion reactions according to blood components. The characteristics of the recipients with recognized transfusion reactions are shown in Table 4. The median age of these recipients was 43 years, ranging from 3 months to 83 years; 50% of the transfusion reactions were observed in male and 50% in female recipients. According to specialties, 12 (26.1%) events were recorded in the oncology and 12 (26.1%) in the medicine departments. The intensive care unit accounted for 7 (15.2%) of the 46 transfusion reactions.

Discussion

According to specialties (Table 1), 77.8 percent of the blood components were transfused in patients from the adult ICU, surgery, emergency, oncology and medicine departments. The use of blood components in critically ill patients varies and this has been an object of discussion. Vincent JL and coworkers [4] demonstrated, in a prospective multicenter observational study which included 3,534 patients from 146 western European ICUs, the common occurrence of anemia and the great use of blood transfusions (rate of transfusion during the ICU period of 37%). On the other hand, Rao MP et al. [5] conducted a prospective observational study to assess transfusion practice in 1,247 critically ill patients and showed that 666 (53%) were administered red cells, 202 (16%) platelets and 281 (22%) fresh frozen plasma. The authors considered the use of blood components in these patients appropriate and concluded both that transfusion practice was consistent and that, in general, there was not an excessive use of blood components. We demonstrated that the adult ICU accounted for 31.7% of the units transfused. Our results were similar, but it is important to consider the methods we used to collect such data. In our region, emergency, neurologic, cardiothoracic and oncology surgical patients are placed and evaluated in the ICU during the postoperative period, increasing the ICU's proportion of blood components transfused when reported by specialties. Transfusion use in operations and procedures of the digestive and cardiovascular systems has also been described. Among the hospitals supplied by the SHHSJC, three provide a range of surgical services including emergency, neurologic, oncology and cardiothoracic surgery. Additionally, two hospitals have oncology services including chemotherapy for hematological diseases and autologous bone marrow transplantation. These local characteristics may explain our results regarding the use of blood components in these specialties. Moreover, the use of blood components in the Medicine department has been described. Marti-Carvajal and coworkers [7] designed a cross-sectional study to audit appropriate use of blood products in the main public tertiary-care hospital in Valencia, Venezuela. The authors demonstrated that the average number of transfusions per subject was 3.41 for medicine, 2.81 for emergency, 2.09 for obstetrics and 1.75 for surgery. We also demonstrated a frequent use of blood components in the
Medicine department, although, as described above, the highest use of transfusions was observed in the adult ICU and surgery departments. It is logical to assume that these differences depend on the kind of services provided by each hospital.

According to age and sex, the number of requests was similar in male and female recipients, with ages ranging from 50 to 60 years old. Overall, there was a prevalence of requests of blood components for males (Table 2). Among the recipients with recognized acute transfusion reactions, the median age was 43 years (3 months to 83 years) and we did not observe any prevalence between male or female recipients (Table 4). In comparison, the Serious Hazards of Transfusion (SHOT) Report 1 described 42 reports of acute transfusion reactions from 1998 to 1999 (24 males and 18 females); 38 reports described the age of the recipients (median 56 years, range 17 months to 92 years). SHOT data from 1999 to 2000 2 showed 32 reports of acute transfusion reactions as follows: 19 males, 13 females, median age of 52 years and range 1 month to 88 years. In this regard, it is important to consider that age and sex depend on the local characteristics of the population.

As demonstrated in Table 4, the majority of the acute transfusion reactions were recorded in patients from the oncology and medicine departments. Despite the number of transfusions of blood components in the ICU and surgery, the prevalence of acute transfusion reactions did not follow this trend. Since the patients with transfusion reactions were evaluated by a hematologist of the SHHSC, it is reasonable to assume that our results are consistent. On the other hand, it is possible that in the operating room or in the ICU some acute reactions were not recognized as such, perhaps because signs and symptoms mimic other clinical conditions. Oncology recipients showed a high proportion of acute transfusion reactions. A possible explanation is based on the inclusion of the hematological malignancies in the group of oncology patients. These patients (acute leukemia and autologous bone marrow transplantation, for example) undergo a temporary inability to produce blood cells and may use considerable amounts of blood components, increasing their susceptibility to transfusion reactions. The association between the use of blood components in medicine and the proportion of acute transfusion reactions in this department seems to be consistent. In our region, the admission of patients to the medicine department is highly variable, including cases in which the patients are severely ill and patients who suffer minor morbidities. We believe that this wide range of diagnoses and treatments, in association with the lack of appropriate guidelines regarding blood component transfusions, may increase the probability of using blood components in this specialty.

In our study, RBCs were linked to the majority of the acute transfusion reactions, followed by PLTs. The most common reactions were febrile nonhemolytic reactions, and the observed rate of acute transfusion reactions was 5.5 per 1,000 units transfused. In the present study, we did not register any case of TRALI. Jonathan P.
Wallis et al 10 carried out an observational study from 1991 to 2002 in the Freeman Hospital, UK. This facility has 787 beds and includes a regional cardiothoracic surgical unit and the regional liver and liver transplant units. Over 12 years, eleven cases of TRALI were recognized. On the other hand, Silliman CC et al 11 reported a series of 90 TRALI reactions in 81 patients and, in order to examine the epidemiology of TRALI, a nested case-control study was performed of the first 46 patients with TRALI compared with 226 controls who had received transfusion. The authors suggested that TRALI may be more frequent than previously recognized and demonstrated an overall prevalence of 1 case in 1120 cellular components transfused. Data from Sunita Saxena & Ira Shulman 8 did not demonstrate any case of TRALI either. Based on these studies, our findings seem to be consistent, but it is important to consider that many clinicians and transfusionists remain unaware of TRALI reactions.

In summary, the present study provides baseline acute transfusion reaction information for a specific period of time in a Brazilian transfusion service. To the best of our knowledge, there are few studies describing the main characteristics of acute transfusion reactions in our country. We therefore believe that reports from blood transfusion centers and health facilities regarding transfusion reactions should be stimulated in order to create rules and regulations pertaining to the practice of blood transfusion. Collection of such data is relevant and will assist in blood transfusion program planning based on the implementation of corrective and preventive measures in accordance with accepted international standards and local guidelines.

Table 1 Use of blood components according to specialties.

Table 3 Acute transfusion reactions according to blood components.

RBCs and 7 to PLTs) and in 33 reports from 1999 to 2000 2 (11 linked to RBCs and 13 to PLTs); over more than 2,500,000 units were transfused per year. The differences we have found in our study may be explained by the shorter period of investigation. Another point is that the reactions were recorded using a method based on the direct observation of the recipients during a period of time, instead of the traditional incident report system. It is possible that, using direct observation, more adverse events were recorded, increasing the rate of reactions per unit transfused. Moreover, excluding the SHOT reports, Sunita Saxena & Ira Shulman and Waller C et al. also analysed other types of transfusion reactions, whereas we focused only on acute transfusion reactions.
2018-12-27T11:30:42.855Z
2004-01-01T00:00:00.000
{ "year": 2004, "sha1": "72a87eca9b7b38c23c61aeb6b89c0f04c141e384", "oa_license": "CCBYNC", "oa_url": "https://www.scielo.br/j/rbhh/a/vCJDRGrhVTCNByfcgQRvkXr/?format=pdf&lang=en", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "72a87eca9b7b38c23c61aeb6b89c0f04c141e384", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Medicine" ] }
255951689
pes2o/s2orc
v3-fos-license
Viral metagenomic analysis of chickens with runting-stunting syndrome in the Republic of Korea Runting-stunting syndrome (RSS) in chickens, also known as malabsorption syndrome, which is characterized by mild to severe enteritis and diagnosed through typical histopathologic examination as well as clinical signs, results in considerable economic losses. Despite the many studies carried out over decades to determine the etiologic agents of RSS involved in the disease, several outbreaks remained without the elucidation of, potentially multiple, etiologies involved. We performed comparative analysis of viral metagenomes from four chicken flocks affected with RSS using next-generation sequencing. Primers for the detection of chicken enteric viruses were designed from the sequencing data obtained with metagenomics. Multiplex reverse transcription–polymerase chain reaction (PCR) and PCR were performed to detect a variety of etiological agents previously described in natural cases of RSS. The most abundant viral families identified in this study were Astroviridae, Picornaviridae, Parvoviridae, Caliciviridae, Reoviridae and Picobirnaviridae. Chicken astrovirus sequences were present in all four samples, suggesting an association between chicken astrovirus and RSS and chicken astrovirus as a candidate pathogen responsible for RSS. Picobirnavirus and the newly identified chapparvovirus were found in chickens in the Republic of Korea for the first time, and the genetic diversity of enteric viruses and viral communities was showed. Chicken astrovirus was consistently detected in broilers affected with RSS and the result of this study may contribute to knowledge of enteric diseases and viruses in chickens. Introduction Infectious intestinal diseases affecting young chickens and turkeys are characterized by mild to severe enteritis and result in considerable economic losses. Viral enteritis increases susceptibility to other diseases, decreases feed conversion efficiency, and prolongs the time to market [1,2]. Runting-stunting syndrome (RSS), also known as malabsorption syndrome, is diagnosed based on clinical signs (diarrhea, dehydration, growth depression, unevenness in size) and histopathological lesions (distension of the crypts of the small intestine, inflammatory cell infiltration in the lamina propria adjacent to affected crypts and necrotic cellular debris in crypts) [3,4]. Metagenomics provides the opportunity to simultaneously detect a large number of sequences from different microbes, including viruses [17]. Metagenomics has enabled a recent increase in understanding of viral diversity, and environmental and clinical metagenomics have contributed to the discovery of novel viruses [18]. In addition, studies are necessary to determine sequence differences by comparison with high-throughput data obtained from a variety of genomic regions, and samples through various protocols that differ in terms of sample preparation, cDNA synthesis, library preparation, sequencing and data analysis [19]. Here, Illumina sequencing was applied to explore viral communities in the small intestines and intestinal contents of four chicken flocks with RSS and one specific pathogen-free (SPF) chicken flock in the Republic of Korea. Samples and purification The small intestines and intestinal contents of broiler chickens from four flocks diagnosed with RSS (05D72, 07D11, 13D62 and 13Q45) were inspected in this study. 
The chickens were submitted to the Avian Disease Division (ADD) of the Animal and Plant Quarantine Agency (APQA) for disease diagnosis between 2005 and 2013 and diagnosed by an APQA diagnostic protocol on the basis of clinical manifestation and the presence of gross lesions. The small intestines (duodenum, jejunum and ileum) of broiler chickens in flocks 05D72 and 07D11 were collected after necropsy, processed promptly via blending into a 10% homogenate in sterile phosphate-buffered saline (PBS) containing 0.4 mg gentamicin per ml, and stored at − 80°C until analysis. The intestinal contents of broiler chickens in flocks 13D62 and 13Q45 and control (SPF) chickens were collected from the small intestine after necropsy, mixed with an equal volume of sterile PBS containing 0.4 mg gentamicin per ml, and stored at 2~5°C until analysis. RSS-positive and control samples were centrifuged at 3,500 r.p.m. and 13,000 r.p.m. for 10 min each. To remove large particles and bacteria, the supernatants of intestine homogenates were filtered with 0.8, 0.45, and 0.22 μm syringe filters, and feces filtrates were concentrated using an Amicon ultrafiltration apparatus (Amicon chamber 8400 with a 30 kDa MWCO UF membrane, Millipore, USA). Viral particles were pelleted by ultracentrifugation (30,000 r.p.m., 5 h, 4°C), resuspended in 500 μL of 1 M Tris-Cl (pH 7.4) and treated with 2.5 units of DNase I (AMPD1, Sigma-Aldrich, USA) for 3 h at 37°C to eliminate free DNA. The samples were concentrated and washed twice using a Microcon 30 column (Millipore, USA). DNase activity was inhibited by adding 0.5 M EDTA to a final concentration of 20 mM.

High-throughput sequencing and analysis

Total RNA was extracted from the purified samples using a Viral Gene-spin viral DNA/RNA extraction kit (iNtRON Biotechnology, Republic of Korea) according to the manufacturer's instructions. cDNA synthesis and PCR amplification of nucleic acid were carried out in a 50 μL mixture containing 5 μg of RNA, 0.5 μM random primer K (GAC CAT CTA GCG ACC TCC AC) and 0.5 μM primer KN (GAC CAT CTA GCG ACC TCC CAN NNN NNN N), as described previously [20], using the Access RT-PCR system (Promega, USA). The products were purified using an UltraClean PCR Clean-Up Kit (MO BIO, USA) and sequenced at Theragen Etex (Suwon, Republic of Korea). Sample libraries were prepared using an Illumina TruSeq DNA sample preparation kit (Illumina, USA), and DNA was fragmented using a Covaris adaptive focused acoustics device to generate double-stranded DNA fragments 300-400 bp in size. The ends were repaired, phosphorylated, and 3′-end adenylated. Paired-end DNA adaptors (Illumina) were ligated, and the resulting constructs of 500 bp in size were selected. Libraries were loaded onto a paired-end flow cell and sequenced as 101 bp paired-end, indexed reads on an Illumina HiSeq 2000 instrument. The raw read sequences were filtered using the following criteria: 1) the presence of ambiguous bases (letter N) in excess of 10%, 2) an average quality below 20, 3) more than 5% of nucleotides with quality below 20, and 4) the presence of an adapter sequence (Supplementary data 1). Host genome (galgal4)-filtered reads were aligned against known viral, bacterial, and fungal genome databases from NCBI using Burrows-Wheeler Aligner software. Data from each sample were assembled using MetaVelvet (k-mer size 51) and Bambus2. Homology-based (BLAST) classification based on the nucleotide sequence was performed using the 'nucleotide' database from NCBI to annotate the scaffolds.
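The read-filtering criteria listed above lend themselves to a compact implementation. The following is a minimal sketch, in Python, of the four stated filters (more than 10% ambiguous bases, mean quality below 20, more than 5% of bases below Q20, adapter contamination); the adapter sequence, file name and function names are placeholders rather than details taken from the actual analysis pipeline.

```python
# Minimal sketch of the read-filtering criteria described above.
# Assumptions: Phred+33 encoded FASTQ; ADAPTER is a placeholder sequence,
# not the actual Illumina adapter used in the study.

ADAPTER = "AGATCGGAAGAGC"  # placeholder adapter fragment

def parse_fastq(path):
    """Yield (sequence, quality_scores) tuples from a 4-line FASTQ file."""
    with open(path) as handle:
        while True:
            header = handle.readline()
            if not header:
                break
            seq = handle.readline().strip()
            handle.readline()              # '+' separator line
            qual = handle.readline().strip()
            yield seq, [ord(c) - 33 for c in qual]

def keep_read(seq, quals):
    """Apply the four exclusion criteria; return True if the read passes."""
    if not seq:
        return False
    if seq.upper().count("N") / len(seq) > 0.10:        # 1) >10% ambiguous bases
        return False
    if sum(quals) / len(quals) < 20:                    # 2) mean quality < 20
        return False
    if sum(q < 20 for q in quals) / len(quals) > 0.05:  # 3) >5% of bases below Q20
        return False
    if ADAPTER in seq.upper():                          # 4) adapter contamination
        return False
    return True

if __name__ == "__main__":
    reads = parse_fastq("sample_R1.fastq")              # placeholder file name
    passed = [r for r in reads if keep_read(*r)]
    print(f"{len(passed)} reads passed the quality filters")
```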
Sequence analysis Nucleotide and amino acid sequences were aligned using Vector NTI 10 software. The aligned fragments from Astroviridae, Picornaviridae, Parvoviridae, Caliciviridae and Picobirnaviridae were trimmed, based on lengths of 302, 1421, 3661, 7315, and 812 bp, respectively, using BioEdit software. Phylogenetic trees were generated by the neighbor-joining method using MEGA 4.0 software [21] with 1000 bootstrap replications. Assignment of genotypes to rotavirus A sequences was performed using the online tool RotaC [22]. Trial for whole-genome analysis of a parvovirus The design of primers for bridging contigs identified by Illumina, and the resequencing of these regions, were carried out at Cosmo Genetech Co., Ltd. (Seoul, Republic of Korea) using an ABI 3730 DNA sequencer. Nested PCR and rapid amplification of cDNA ends (RACE) [23] were attempted to obtain sequences of the 5′ and 3′ ends of a parvo-like virus genome. Detection of enteric viruses Sequences revealed through high-throughput sequencing indicated a variety of etiologies related to RSS. Multiplex reverse transcription (RT)-polymerase chain reaction (PCR) and PCR were used to distinguish these agents from the original chicken intestine homogenate using primer sets. Most primers were designed using CLC Main Workbench 6 with the high-throughput sequences obtained in this study, and the primers for rotavirus D used were chosen according to a published report [24], as described in Table 1. DNA and RNA from individual enteric samples were extracted using a Viral Gene-spin Viral DNA/RNA Extraction kit (iNtRON Biotechnology, Republic of Korea) according to the manufacturer's instructions. Multiplex RT-PCR for the detection of picornavirus, astrovirus and calicivirus was carried out using the PrimeScript One Step RT-PCR kit ver.2 (TaKaRa, Japan) following the manufacturer's instructions, with each primer at 0.5 μM and 2 μL of RNA. Thermocycling conditions were as follows: 50°C for 30 min and 94°C for 2 min, followed by 40 cycles of 94°C for 30 s, 50°C for 30 s, and 72°C for 1 min, and a final step at 72°C for 5 min. Multiplex RT-PCR for the detection of rotaviruses A, D and F was carried out using the PrimeScript One Step RT-PCR kit ver.2 (TaKaRa), with each primer at 0.5 μM and 2 μL of RNA. Thermocycling conditions were as follows: 50°C for 30 min and 94°C for 2 min, followed by 40 cycles of 94°C for 30 s, 60°C for 30 s, and 72°C for 1 min and a final step at 72°C for 5 min. To detect the parvovirus, PCR was performed using AccuPower PCR Premix (Bioneer, Republic of Korea) with the following conditions: 95°C for 5 min; 30 cycles of 94°C for 30 s, 55°C for 1 min, and 72°C for 1 min; and final incubation at 72°C for 5 min. The sensitivity and specificity of the primer pairs for each agent were tested by uniplex PCR. All PCR products were purified from agarose gels using a QIAquick gel extraction kit (Qiagen, USA) and sequenced directly using an ABI 3730 DNA sequencer at Cosmo Genetech Co., Ltd. using the respective PCR primers. These detection methods were used to identify the distribution of these enteric viruses in non-RSS-infected chickens. Intestinal samples were randomly collected from 86 non-RSS-infected broiler chickens less than 6 weeks old submitted to the ADD of the APQA for diagnosis. 
These specimens were diagnosed with a variety of diseases, viral diseases (infectious bronchitis, inclusion body hepatitis, infectious bursal disease, etc.), bacterial diseases (necrotic enteritis, bacterial arthritis, colibacillosis, etc.), or complicated disease. Multiplex RT-PCR and PCR were carried out as described above. Diagnosis of RSS The four flocks tested comprised 2-week-, 3-week-, 4week-, and 6-week-old broiler chickens with poor growth, retarded feathering, diarrhea and various clinical signs. Thin-welled intestines filled with undigested feed at postmortem examination and characteristic microscopic lesions (distension of the crypts in the duodenum and jejunum lined with flattened epithelium, exfoliated cells in the crypts and inflammatory cell infiltration in the adjacent lamina propria) were observed, but villous atrophy was not observed (Fig. 1a). One flock (07D11) was diagnosed with only RSS, whereas the other three flocks (05D72, 13D62 and 13Q45) had been infected with additional diseases, such as infectious bronchitis (IB), inclusion body hepatitis (IBH) and coccidiosis ( Table 2). IB was identified by virus isolation and RT-PCR, IBH was confirmed by PCR and histopathology (observation of intranuclear inclusions in the liver), and coccidiosis was confirmed by the presence of oocysts in the cecum using microscopic examination. Electron microscopy identified many small, round and nonenveloped viral particles of various sizes in all four samples (Fig. 1b). Picobirnaviridae (11.1%) sequences, but no sequences were assigned to Reoviridae. Sequences from the control sample differed from those of all other samples and contained no viral reads assigned to known avian viruses and only bacteriophage sequences (Table 2 Supplementary data 2). Enteric virus genome sequence analysis Sixty-six contigs from six viral families 251 to 7315 nucleotides (nts) in length were obtained from the four RSS-positive chicken samples. These genome sequences were deposited into GenBank under accession numbers KM254161-KM254224. Astroviridae Four contigs with similarity to genomes of the Avastrovirus genus were identified; specifically, chicken astrovirus sequences were found in all four samples. The contigs identified in samples 05D72, 07D11 and 13D62 were partial non-structural (NS) polyprotein 302, 305, and 2266 nt in length, respectively. The 13Q45 contig consisted of full NS and partial capsid protein sequences 5084 nt in length. The four NS sequences from the chicken astrovirus genome detected showed 84.2~86.7% nt homology to the Chinese strain (GenBank No. HM029238) [25] and were clustered differently from other Avastrovirus genus (avastrovirus 1, 2, and 3) in the phylogenetic tree (Fig. 2a). Parvoviridae Two nearly complete Aveparvovirus genome sequences 4782 and 5034 nt in length were identified from samples 07D11 and 13D62, respectively. Both contigs showed 92.7 and 97.1% nt identity with the ABU-P1 strain (Gen-Bank No. GU214704) and contained three ORFs that encode nonstructural (NS1, NP1) and capsid (VP1, VP2) proteins. A parvo-like virus (3661 nt sequence length) was obtained from sample 13Q45 and found to contain the complete NS1 CDS (Fig. 2c). Sequence comparison with the ABU-P1 strain revealed a high level of divergence with 46.6% nt identify and 13.6% aa identify. The DNA coding sequences was confirmed by Sanger sequencing of amplicons obtained by primer walking. 
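The percent nucleotide identity values quoted in these comparisons can be computed from a pairwise alignment as sketched below; the gap-handling convention and the toy sequences are assumptions of this illustration, since the study does not state how the identities were calculated.

```python
# Minimal sketch: percent nucleotide identity over a pairwise alignment.
# Skipping columns that are gaps in both sequences and counting single gaps
# as mismatches is one common convention; the study's convention is not stated.

def percent_identity(aln_a: str, aln_b: str) -> float:
    """Percent identity between two aligned sequences of equal length."""
    if len(aln_a) != len(aln_b):
        raise ValueError("aligned sequences must have equal length")
    matches = 0
    compared = 0
    for a, b in zip(aln_a.upper(), aln_b.upper()):
        if a == "-" and b == "-":
            continue                      # skip columns that are gaps in both
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Toy example (not real viral sequences):
print(round(percent_identity("ATGCGT-ACGT", "ATGCGTTACGA"), 1))
```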
Nested PCR and RACE were conducted to obtain the sequences of the 5′ and 3′ ends of a parvo-like virus genome in sample 13Q45 but yielded no products. In the Parvoviridae phylogenetic tree, this identified virus was clustered with the recently classified genus Chapparvovirus [26]. Reoviridae A total of 47 viral contigs belonging to three rotavirus species (rotaviruses A, D, and F) were detected in samples 07D11 and 13D62. All eleven genome segments of rotaviruses A, D and F were compared with those of the German strains 02V0002G3, 05 V0049 and 03 V0568. The sequences of contigs from rotavirus A obtained from sample 13D62 were similar to sequences of strain 02V0002G3 (GenBank No. FJ169853-169863), and each viral segment showed 91.2~96.3% nt identity with the corresponding segment in strain 02V0002G3, but the VP4 gene showed a low degree of identity (74.9%) and a different genotype (P31). The percent identities of rotavirus D segment-associated contigs with strain 05 V0049 (GenBank No.GU733443-733453) varied (79.5-95.5%). Rotavirus F segments from sample 13D62 showed between 72.6 and 91.7% nt identity to strain 03 V0568 (GenBank No. FJ169853-169863). The sequences of contigs from rotavirus species in sample 07D11 were similar to those of contigs in sample 13D62 (Table 3). Caliciviridae Two nearly complete chicken calicivirus genome sequences with lengths of 7865 and 7315 nt were found in samples 13D62 and 13Q45. Both contigs contained two coding open reading frames (polyprotein and the VP2 protein) and were closely related in a cluster of the genus Nacovirus (Fig. 2d). Picobirnaviridae Five picobirnavirus contigs were identified from samples 13D62 and 13Q45. Two or three distinct partial RNAdependent RNA polymerase gene sequences were found in each sample. Three sequences were clustered with genogroup I, and the other sequence clustered with genogroup II (Fig. 2e). Detection of enteric viruses in the non-RSS-affected flock using multiplex RT-PCR and PCR Intestinal samples from 86 non-RSS-affected broiler flocks were examined to detect enteric viruses using the developed multiplex RT-PCR method and PCR. Parvoviruses and chicken astroviruses were detected with very high positive detection rates of 75.6 and 62.8%, respectively (Table 4). Picornaviruses, caliciviruses and rotavirus species were also found to have low to moderate positive detection rates (7.0-39.5%). Discussion High-throughput sequencing data generated by Illumina sequencing indicated the presence of several different viruses in four chicken flocks affected by RSS. Chicken astrovirus, parvovirus, calicivirus and rotavirus have been identified as causes of gastrointestinal tract infections in poultry, such as RSS, which is also called malabsorption syndrome [27]. A few studies using viral metagenomics could not specify any particular pathogen from the viral community in the chicken flocks with RSS because the distributions of enteric viruses in diseased and healthy flocks were not significantly different [17,28,29]. Chicken astrovirus has been distributed worldwide for decades and is nearly ubiquitous constituent of the chicken gut [30]. We also found a high positive detection rate (62.8%) for this virus in the non-RSS-affected chickens. On the other hand, metagenomics analysis in this study showed that only chicken astroviruses were common to chickens that had been diagnosed with RSS via pathological lesions in the crypts of the small intestine. 
In addition, chicken astrovirus was not identified in the fecal virome of healthy chickens in Brazil [31]. Chicken astrovirus has been suggested as an etiological agent for RSS, but it could be detected in chickens without microscopic lesions. Recently, the pathogenesis of chicken astrovirus in broilers was revealed. Serial passages of the virus from chicken to chicken induced increased virulence, as shown by decreased weight gain and the presence of histopathological lesions [32]. Kang et al. provided strong evidence of chicken astrovirus as an etiological agent of RSS, although the mechanism underlying its reproduction via bird-to-bird passage is not known. Our metagenomics results support the notion of chicken astrovirus as a major etiology of RSS. Chicken megriviruses of the Picornaviridae family detected in two samples tested in this study were shown to be similar to viruses identified from chickens with transmissible viral proventriculitis [33]. These cases were thought to be cases of RSS associated with proventriculitis [34], but the two cases did not exhibit histopathological lesions in the proventriculus. Viral sequences closely related to members of Gallivirus were identified, suggesting the diversity of chicken picornaviruses, but the role of these viruses in chicken disease is still unknown [35]. Parvovirus, calicivirus and rotavirus have been suggested as etiological agents of RSS in chickens [6,7,36] and were shown to be widely distributed in the non-RSS-affected chicken flocks in this study. As yet, the pathogenesis of these viruses has not been defined, nor has their pathogenicity been experimentally reproduced; therefore, further research is needed. Independently, we also identified the full NS gene sequence of a chapparvovirus in a single chicken intestine sample. Chapparvovirus, which was recently classified, has been found in various vertebrate animals, such as birds, chickens, pigs, bats and dogs, throughout the world, and many studies have been performed to reveal the properties of this virus [37]. Although trials to identify the whole genome of this virus failed, environmental metagenomics offers the opportunity for novel viral discovery. In this study, chicken picobirnaviruses from the Republic of Korea were identified for the first time using metagenomics, and the viral sequences were grouped into genogroups I and II. Avian picobirnaviruses in genogroup I have primarily been noted to date, and those in genogroup II are rarely detected in chickens [17,29,38]. We could not design a primer set to detect picobirnaviruses due to the high genetic diversity among chicken picobirnaviruses. The high level of picobirnavirus sequence diversity in various hosts and environmental samples suggests the evolution of heterologous strains [39]. Coronavirus was found in sample 05D72, which was taken from chickens diagnosed with IB and RSS, and adenovirus was also found in sample 13D62, which was taken from chickens diagnosed with inclusion body hepatitis and RSS. Although the viral reads were not high in number, these findings indicate that viral metagenomics can help to determine the clinical etiological agents of chicken diseases.

Conclusions

Chickens are a major protein source for human consumption, and their diseases are closely connected with public health concerns and economic losses. RSS in chickens is a disease that impairs productivity in the Republic of Korea.
In the present study, we used an unbiased metagenomic approach for viral pathogen discovery, the detection of novel viruses and the development of a molecular diagnostic tool to detect pathogens from the obtained sequences. We suggest that RSS in chickens, which is also called malabsorption syndrome, is caused by chicken astrovirus. In addition, multifactorial etiologies as a cause of RSS, as well as astrovirus enteritis, could be ruled out through further studies of other enteric viruses, such as parvovirus, rotavirus and calicivirus.

Additional file 1. Base quality and filter result of high-throughput data obtained from samples in this study.

Additional file 2. Taxonomy annotation of viral reads obtained from samples in this study.
2023-01-18T15:04:24.355Z
2020-04-15T00:00:00.000
{ "year": 2020, "sha1": "0bd3aaa2f3ee6d82b6b94051755fb751ad66652c", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1186/s12985-020-01307-z", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "0bd3aaa2f3ee6d82b6b94051755fb751ad66652c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
267539171
pes2o/s2orc
v3-fos-license
Contrast-Enhanced Mammography-Guided Biopsy: Preliminary Results of a Single-Center Retrospective Experience Background: CEM-guided breast biopsy is an advanced diagnostic procedure that takes advantage of the ability of CEM to enhance suspicious breast lesions. The aim pf this paper is to describe a single-center retrospective experience on CEM-guided breast biopsy in terms of procedural features and histological outcomes. Methods: 69 patients underwent the procedure. Patient age, breast density, presentation, dimensions, and lesion target enhancement were recorded. All the biopsy procedures were performed using a 7- or 10-gauge (G) vacuum-assisted biopsy needle. The procedural approach (horizontal or vertical) and the decubitus of the patient (lateral or in a sitting position) were noted. Results: A total of 69 patients underwent a CEM-guided biopsy. Suspicious lesions presented as mass enhancement in 35% of cases and non-mass enhancement in 65% of cases. The median size of the target lesions was 20 mm. The median procedural time for each biopsy was 10 ± 4 min. The patients were placed in a lateral decubitus position in 52% of cases and seated in 48% of cases. The most common approach was horizontal (57%). The mean AGD was 14.8 mGy. At histology, cancer detection rate was 28% (20/71). Conclusions: CEM-guided biopsy was feasible, with high procedure success rates and high tolerance by the patients. Introduction CEM-guided breast biopsy is an advanced diagnostic procedure in the field of breast imaging that exploits the unique features of Contrast-Enhanced Spectral Mammography (CEM) to obtain detailed information on the presence and nature of breast lesions. CEM is a remarkably promising method that is proposed as a viable alternative to breast MR, especially in terms of cost-effectiveness.The pathophysiological principle on which it is based is similar to MR, namely, it studies tumor neoangiogenesis.This examination allows one to highlight areas of the breast associated with hypervascularized lesions, such as neoplastic proliferations, by intravenous administration of contrast medium. CEM examinations are performed using a full-field digital mammograph system provided with a Dual-Energy option.After the injection of contrast media, a pair of mammographic images, low energy and high energy, is acquired in rapid succession.The two images are processed using subtraction algorithms with the production of a combined mammographic image (termed "recombined") to enable the possibility of analyzing the dynamics of enhancement of a suspected lesion, in a similar way to MR [1][2][3]. The main indications for CEM include preoperative staging, inconclusive findings at mammographic and ultrasound imaging, and evaluation of response to neoadjuvant chemotherapy, although these results are extrapolated by retrospective studies [4][5][6]. Magnetic Resonance (MR) is extremely sensitive for the detection of breast cancer.Some malignant lesions are detectable by means of those techniques able to recognize neoangiogenesis, like MR; that is the reason why, to date, MR has been the only option able to sample enhancing-only lesions [6][7][8]. MR-guided vacuum-assisted breast biopsy (VABB) has been demonstrated to be a safe and accurate technique, although it has some drawbacks: it is expensive, not feasible in patients with contraindications, and is not widely available.Moreover, the cancer detection rate as well as false-negative and underestimation rates vary considerably among the published studies [7,8]. 
Studies have demonstrated that CEM and breast MR have comparable sensitivity in detecting breast cancer.In particular, a leading study by Fallenberg et al. [9], which analyzed the correlation between the two techniques, involved 80 patients affected by breast cancer, with histopathological results as the gold standard.They demonstrated that CEM correlated better with anatomopathological results (Pearson's correlation coefficient of 0.733) with respect to MR (0.654).A study by Lobbes et al. [6] showed a very high concordance between CEM and MR on tumor size measurements, using surgical specimens as the gold standard, with MR performing slightly better, although the latter suffered from a slight overestimation of measurements, which was not of clinical impact.Van Nijnatten et al. [10] found that the two techniques had comparable results in the assessment of invasive lobular cancer extent, although MR was hindered by more false-positive results.The authors concluded that MR should still be performed for the disease extent in invasive lobular cancers, although CEM might be a valid alternative if breast MR is not available (absolute contraindications, patient suffering from claustrophobia).An important issue when dealing with breast cancer is the evaluation of the contralateral side; Houben et al. [11] evaluated the diagnostic performance of CEM to detect additional lesions in women recalled from screening.In 839 patients, CEM recognized 70 enhancing lesions.Among them, 54.3% were proven to be further foci of cancer, suggesting that CEM could be a feasible technique as a primary staging method, since additional foci of breast cancer can be easily detected, even when mammographically occult or difficult to detect. When used as a problem-solving tool in cases of inconclusive routinary breast examinations, CEM and breast MR have been shown to have comparable sensitivity.Jochelson et al. [12], in their study involving 52 women undergoing both CEM and breast MR, demonstrated that sensitivity was quite similar between the two techniques (96-100%), with less false-positive findings for CEM than for breast MR.In a multi-reader study with three different readers, Fallenberg et al. [13] evaluated 604 breast lesions (45% were malignant) and concluded that the diagnostic accuracy of CEM was significantly higher than of full-field digital mammography and similar to breast MR.Li et al. [14] analyzed 48 women with breast lesions, studied both with CEM and MR, showing that the two techniques had a sensitivity of 100% for breast cancer detection. CEM-guided breast biopsy is a relatively new procedure in the field of breast biopsy, which could become a valid alternative to MR-guided breast biopsy in all the cases characterized by neoangiogenesis. The purpose of this study is to describe a single-center retrospective experience of CEM-guided breast biopsy in terms of the procedural features and histological outcomes of the first cases undergoing this procedure. 
Data Collection and CEM Descriptors A total of 69 CEM-guided breast biopsy procedures were retrospectively analyzed, all performed at the Breast Unit of the Fondazione Policlinico Universitario Campus Bio-Medico in Rome during the period between March 2022 and October 2023.Patients included in the study had a suspicious (BI-RADS 4) or probably malignant (BIRADS 5) finding at contrast-enhanced mammography (CEM).Specifically, in our institution the main indications for CEM include preoperative staging, resolution of problems raised during mammographic and ultrasound screening, evaluation of response to neoadjuvant chemotherapy, and management of lesions of uncertain malignant potential (B3 lesions).Moreover, CEM is the examination of choice in patients with dense breasts and in those who have indications for breast MR in their conventional diagnostic workup but with absolute or relative contraindications to MR (pacemakers or other metallic devices not compatible with MR, claustrophobic patients, patients with a body volume not compatible with MR gantry). The study was approved by the ethics committee of our hospital, and all patients signed the informed consent. Exclusion criteria were contraindications to iodinated contrast media. CEM Protocol A digital mammography unit (Senographe Pristina, GE Healthcare system) equipped with a specific biopsy add-on unit was used to perform CEM procedures. Before starting the CEM examination, the patient must be informed about the procedure and possible adverse reactions to the iodinated contrast medium and must provide her consent to the procedure.After an adequate history, including allergic predisposition, as well as assessment of renal function values, a venous access with a 22-G needle was placed in the antecubital fossa.Via an injector, a dose of 1.5 mL/kg of iodinated mdc (300-370 mgI/mL) was administered at a rate of 2-3 mL/s; a bolus of 20 mL of saline was then administered to increase the release of contrast medium into the tissues and improve image quality.After the drug administration was finished, the connecting tube was detached from the patient, while the venous access remained in place until the end of the examination.Image acquisition began two minutes after the injection, striving to finish the examination within 8 min. During this time, the patient was monitored for any adverse reaction to the iodinated contrast medium.A delay of two minutes after injection is critical since, by beginning breast compression too early, there is a risk that the contrast medium is retained in the vessels outside the breast, preventing it from flowing in the amount needed to be visualized in the early images. Imaging involved classic CC and MLO projections for both breasts, at low and high energy.Generally, we started with the breast site of the neoplasm in order to be able to highlight early enhancement and reduce false-negative findings from early washout; then, imaging of the contralateral breast was performed.If enhancement was observed in the suspected side, an additional projection was performed after eight minutes to qualitatively assess the kinetics of enhancement and determine the likelihood of malignancy.It is important to emphasize that a small area of enhancement (<5 mm) visible only in the late stage is not considered suspicious but likely attributable to BPE. 
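The weight-based dosing and timing rules described above can be summarized in a short helper; the function and parameter names below are illustrative only and do not correspond to any software used clinically at our center.

```python
# Minimal sketch of the weight-based contrast dosing and timing described above
# (1.5 mL/kg of iodinated contrast at 2-3 mL/s, 20 mL saline flush, imaging
# from 2 min to about 8 min after injection). Names and the example weight are
# illustrative assumptions, not part of the actual acquisition protocol files.

def cem_injection_plan(weight_kg: float, flow_rate_ml_s: float = 2.5) -> dict:
    if not 2.0 <= flow_rate_ml_s <= 3.0:
        raise ValueError("flow rate should be within the 2-3 mL/s range")
    contrast_ml = 1.5 * weight_kg                # 1.5 mL/kg dose
    injection_s = contrast_ml / flow_rate_ml_s   # duration of contrast injection
    return {
        "contrast_volume_ml": round(contrast_ml, 1),
        "saline_flush_ml": 20,
        "injection_time_s": round(injection_s, 1),
        "first_acquisition_s_after_injection": 120,   # wait 2 min before compression
        "target_completion_s_after_injection": 480,   # aim to finish within 8 min
    }

# Example for a hypothetical 65 kg patient:
print(cem_injection_plan(65))
```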
Low-energy radiograms were performed with the same kVp as digital mammography, that is, 25-33 kVp, and with the same rhodium or silver filter.High-energy acquisition, on the other hand, was performed with higher kVp values, between 45 and 49, optimizing to the Iodine K-edge and using a copper filter.The copper filter is the best choice because this material is relatively transparent to X-rays at the energies where they are attenuated by iodine, thus providing high contrast in the images.Recombined images were generated by the removal of background glandular tissue and sent to PACS, along with the low-energy images. Data on patient age, breast parenchyma density, presentation, dimensions, and lesion target enhancement were recorded.In particular, breast density was assessed with the low-energy image according to the American College of Radiology (ACR) BI-RADS ® lexicon.The type of enhancement was classified into mass enhancement and non-mass enhancement [15][16][17]. A mass is defined as a space-occupying lesion that displaces tissue. Morphological descriptors of an enhancing mass lesion include mass shape (round, oval, and irregular), mass margins (circumscribed and non-circumscribed, irregular or spiculated), internal enhancement pattern (homogeneous, heterogeneous, rim enhancement), and the degree of enhancement (subtle, moderate, and intense).The morphological features considered highly suggestive for malignancy were irregular shape, non-circumscribed margins, and heterogeneous internal enhancement; in particular, heterogeneous enhancement appears non-uniform with scattered areas of variable contrast uptake.Moreover, a lesion with moderate or intense enhancement was deemed suspicious of malignant transformation, and, as the literature has demonstrated, most frequently observed in invasive carcinomas (Table 1).Non-mass enhancement (NME) is defined as an area of enhancement clearly visible in the surrounding parenchyma but without space-occupying features.It may be characterized by scattered areas of glandular tissue or fat within it.It typically refers to an enhancing area different from background parenchymal enhancement, and its most common malignant causes are intraductal or diffuse cancer, particularly invasive lobular carcinoma.It can be focal, linear, segmental, regional, multiregional, or diffuse; specifically, the linear pattern is considered suspicious for malignancy, in particular for DCIS, although it may be the presentation pattern of some lesions of uncertain malignancy potential (B3), such as atypical ductal hyperplasia and lobular carcinoma in situ.Also, the segmental pattern is often observed in neoplastic conditions, representing the involvement of a single branching duct system.The internal enhancement pattern of NME can be classified as a homogeneous, heterogeneous, clumped, or clustered ring.In particular, the clumped enhancement is highly suggestive of malignancy, typically DCIS, as well as the clustered-ring enhancement pattern, which refers to a tiny ring enhancement within an area of heterogeneous NME.The neoplasms most often associated with this pattern are DCIS and invasive cancers associated with ductal carcinoma in situ, maybe because an intraductal cancer with a high degree of neoangiogenesis shows a washout pattern, whereas contrast medium that remains in the periductal stroma demonstrates a persistent and progressive kinetic pattern.A study showed that the specificity of this pattern for malignancy is about 63% [1].The features considered highly suspicious 
for malignancy and prone to be sampled were asymmetric NME with a focal, linear, segmental, or regional distribution and a heterogeneous or clumped internal enhancement pattern (Table 2).In the presence of a suspicious area of NME, the low-energy images were analyzed to search for microcalcifications, which may be associated with the area corresponding to the NME.An important advantage of CEM over MR is the possibility of recognizing breast microcalcifications in the low-energy views and evaluating their morphology and distribution and their conformity to the area of NME in the recombined images. All the biopsy procedures were performed by means of a 7-or 10-gauge (G) needle. The procedural approach (horizontal or vertical) and the decubitus of the patient (lateral decubitus or in a sitting position) were noted. Procedural success was defined by non-visualization of the lesion with enhancement after the biopsy. Procedural time was recorded, considering the time from the first mammographic image acquired to the scout visualization of clip placement, immediately before breast decompression.In addition, the incidence of any complications (intra-procedural bleeding, vasovagal reactions, allergic reactions, hematomas, or infections) was evaluated. CEM-Guided Breast Biopsy Procedure Before performing the procedure, the patient was adequately informed about the risks and benefits, and the patient's suitability was assessed based on renal function and any allergies to the contrast agent. The procedure began by choosing the best approach, considering the location of the lesion, by reviewing the previous diagnostic CEM examination, and the physical characteristics of the patient, in order to decide whether to take a medial or lateral approach.The thickness of the compressed breast was used to determine the approach of the biopsy needle, which can be vertical (compressed thickness over 3 cm) or horizontal (compressed thickness 3 cm or less).Medial or lateral approach was performed after calculating the shortest distance from the skin to the target. The principle of this technique is based on conventional stereotactic guidance, with the addition of the injection of iodinated contrast media at the beginning. After contrast injection, there is a wait of about 2 min before breast compression, which allows the contrast to be maintained in the lesion for optimal visualization.Moreover, the compression applied during the procedure reduces the washout, and the area of contrast enhancement can be seen for up to 10 min, enough to target the lesion. When the target is localized, it is compressed by means of a biopsy window, and a pair of low-energy and recombined images are obtained at angles of 0, +15, and −15 degrees.In the same way as for stereotactic-guided biopsy, the needle is pointed toward the target by the machine using a computerized coordination system. Begore firing, a local anesthesia was administered, and a pre-fire imaging of lowenergy and recombined views was obtained in order to evaluate whether the target remained in the correct position.Then, after confirming the correct positioning of the biopsy needle, it was fired through the target, and multidirectional samplings in a complete clockwise rotation were performed. Once the biopsy was completed, a stereotactic marker was placed to localize the lesion in the future (Figures 1 and 2). 
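The choice between the vertical and horizontal needle approach, driven by the compressed breast thickness as described above, reduces to a simple rule; the sketch below restates it, with illustrative names and example values of our own.

```python
# Minimal sketch of the needle-approach rule described in the procedure:
# vertical approach when the compressed breast thickness exceeds 3 cm,
# horizontal approach when it is 3 cm or less. Function name and example
# values are illustrative only.

def needle_approach(compressed_thickness_cm: float) -> str:
    return "vertical" if compressed_thickness_cm > 3.0 else "horizontal"

print(needle_approach(2.4))   # -> 'horizontal'
print(needle_approach(4.1))   # -> 'vertical'
```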
Results

A total of 69 patients underwent a CEM-guided biopsy, with 2 of them showing two synchronous lesions in both breasts, while in 1 patient the procedure was not performed.

Discussion

CEM-guided biopsy is a relatively new technique that may help in the characterization of suspicious enhancing breast lesions, as a valid alternative to MR. MR-guided biopsy is performed when an area of suspicious contrast enhancement, not detectable by means of mammographic and ultrasound imaging, is evident on post-contrast images. This is a feasible technique which is safe and is not hindered by the drawback of ionizing radiation. Nevertheless, it has some limitations related to the localization of the target, which can take very long, with an imaging time of 35-41 min and a whole examination time of around 60-70 min [18,19]. This is because the choice of the best approach for the biopsy requires a careful review of the diagnostic MR images to understand the site, depth, distance from the nipple, and the two-view visualization. Other drawbacks are the high costs, the limited availability, and the required expertise of the clinician; in fact, successful MR-guided biopsies need skilled, experienced radiologists and technologists who are dedicated to breast imaging and breast biopsy and can problem-solve when faced with cases that require additional prebiopsy planning [20].

Previous studies about the MR-guided biopsy success rate reported values ranging from 87 to 98%; in our series, the biopsy success rate was 97.1%. In two patients, the target lesion was partially masked by a severe background parenchymal enhancement, so a digital breast tomosynthesis (DBT) acquisition was performed in order to carry out a DBT-guided biopsy, resulting in an invasive ductal carcinoma NST. A possible explanation of this finding could be related to a focal area of background parenchymal enhancement secondary to noncyclical hormonal factors or maybe to fibrocystic changes. The non-visualization of a previously detected suspicious lesion has been reported in 8-13% of MR-guided biopsies [21][22][23].

In our series, the cancer detection rate was 28% (20/71), which is in line with results obtained by means of MR-guided breast biopsy, ranging from 18 to 61%. This is a critical point when dealing with MR-guided biopsy, because there are some drawbacks related to the inherent uncertainties in the accuracy of sampling; in addition, the radiography of the samples is not available, as in the case of MR-guided biopsies, and the biopsy needle cannot be monitored in real time as with ultrasound-guided biopsies.

The CEM procedure involves doses of ionizing radiation. It is important to discuss cost, radiation, and potential risks with the patient, including allergic reactions to the contrast medium and the risk of renal failure related to iodine use. Careful risk management and clear communication are essential to ensure the safety and efficacy of the procedure.

To date, there are only a few papers published on this topic, showing promising results on this technique [24][25][26].
In our study, the procedure was correctly performed in all cases, with a 100% success rate.The approach most often chosen was horizontal, which is safer in patients with small breasts, but with a slight increase in the time of the procedure, and the position most often used was the lateral decubitus, which helped in reducing anxiety in the patients. Few data are available about the AGD, since radiation exposure is a major drawback of CEM-guided biopsy. Alcantara et al. [25] reported a low median number of scout views before targeting, avoiding additional image acquisitions after tissue sampling.Cheung et al. [24] obtained an AGD of 14.3 ± 12.3 mGy; they used the recombined image to evaluate target location and then marked the skin before biopsy [6]. In the present study, the mean AGD was 14.8 ± 10.2 mGy, which is in line with Cheung et al.'s results [24].We centered the target chosen on the diagnostic CEM, marked the skin, and then administered the contrast medium.In this way, we correctly localized all the target lesions.The mean ADG in our series was lower than that of stereotactic biopsy (about 22 mGy) but higher than that of DBT (about 10 mGy). The most common complications were hematomas and vasovagal reactions, in line with data from the literature, that is, 1-5% for vasovagal reactions and 2-83% for hematomas, which are common events, but with a low clinical impact.The MR-guided VAB is characterized by a comparable complication rate of VAB under stereotactic guidance, despite higher technical requirements, in particular hemorrhages.The complication rate of this procedure lies within the well-established range of complication rates (2-14%) [27,28]. In a review which compared the technical performance of MR-guided biopsy and stereotactic-guided and ultrasound-guided techniques, involving 9113 VAB procedures, the authors reported that there were no cases of bleeding requiring surgical intervention, so it could be defined as a safe technique [19]. No severe allergic reactions were observed, mainly because every patient underwent a careful analysis of any possible previous allergy to iodinated contrast media. The main limitation of the current study is the small sample size, which enabled us to assess the accuracy of the technique as well as the data about AGD.Nevertheless, the AGD per exposure was always under the threshold of 3 mGy, set by the Mammography Quality Standards Act regulations.This issue requires further investigation. Conclusions In conclusion, this study showed that this technique is well tolerated by patients, is feasible because it is performed in a short time with high rates of success, and should be considered as a promising alternative to MR breast biopsy.Nevertheless, several studies are needed to demonstrate its application on a vast scale. Figure 1 . Figure 1.An asymptomatic 49 yo patient was referred for a CEM examination after a routine external center examination.Low-energy CEM CC view (A) and MLO (C) show an irregular area of higher density in the outer inner quadrant of the right breast, appearing as a 25 mm enhancing mass (circle) on recombined images (B,D).Right breast CEM-guided biopsy was performed (E-H) with horizontal needle approach.Scout-view imaging at 0 degrees (E).Pre-fire imaging with stereotactic pair at −15 and 15 degrees (F,G).Post-biopsy with marker placement (H).Pathology report: invasive ductal cancer NOS, G1, pT 1b pN 0. Figure 2 . 
Figure 2. An asymptomatic 45 yo patient was referred for a CEM examination after a suspicious finding in a routine external center examination. Low-energy CEM CC view (A) and MLO (C) show an irregular mass in the upper outer quadrant of the right breast, appearing as a 20 mm enhancing mass (circle) with satellite nodules at the periphery of a round area of radiolucency with regular margins on recombined images (B,D). Right breast CEM-guided biopsy was performed (E-H) with vertical needle approach with patient in sitting position. Scout-view imaging at 0 degrees (E). Pre-fire imaging with stereotactic pair at −15 and 15 degrees (F,G). Post-biopsy with marker placement (H). Pathology report: invasive ductal cancer NOS, G2.

Table 1. Mass descriptors of malignant lesions.

Table 2. Non-mass descriptors of malignant lesions.
2024-02-08T16:15:22.620Z
2024-02-01T00:00:00.000
{ "year": 2024, "sha1": "14914fde068ef56349060f575202f6ee2de71afc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2077-0383/13/4/933/pdf?version=1707216010", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4c596c4c5a86a72e43ee04d3429b8d512c19e252", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
9625106
pes2o/s2orc
v3-fos-license
Correlations in Many Electron Systems: Theory and Applications In this contribution we present calculations performed for interacting electron systems within a non-perturbative formulation of the cluster theory. Extrapolation of the model to describe the time dependence of the interacting systems is feasible and planed. The theory is based on the unitary operator $e^{iS}$ ({\it S} is the correlation operator) formalism which, in this paper, is treated non perturbatively within many-particle correlations. The application of the derived equations to few-body systems is realized in terms of Generalized Linearization Approximations (GLA) and via the Cluster Factorization Theory (CFT). To check the reliability of the model we present two different applications. In the first we evaluate the transitions energies in Helium-, Lithium-, Beryllium-, and Boron-like Oxygen. The calculation aims to a precise determination of the satellite transitions which play an important role in plasma diagnostics. In a second we investigate a non-perturbative method to evaluate the charge radii of the Helium and Lithium isotopes by using the Isotopic Shift theory. We have found that our model leads naturally to components of $e^--e^+$ pair in the two-electron wave functions of the Helium isotopes and three-electron wave functions of the Lithium isotopes. The possible connection of these terms to the QED leading diagrams is postulated. Introduction Deriving a non-perturbative and microscopic theory capable to describe the basic observable that characterize the dynamics of interacting electrons is a fundamental problem in the physics of atoms and ions. In general, one faces with two fundamental tasks, namely, the consideration of the correlation effects and the introduction of a cut-off parameter which, in order to obtain realistic and solvable systems, reduces the dimensions of the model Equation of Motion (EoM). The introduction of correlation effects in many body systems via the e iS Unitary-Model Operator (UMO) goes back to the early work of Villars [1]. The idea is to introduce a wave operator S which maps zero-order reference wave functions (usually Hartree-Fock wave functions) to exact many body wave functions. Extended applications of the method in nuclear physics were shortly after performed by Shakin [2]. The e iS method came to quantum chemistry with the coupled cluster method proposed by Coester [3], and Kümmel [4]. The coupled cluster Hamiltonian has been recently applied to the calculations of the electron affinities of alkali atoms [5]. Studies of correlation effects in atomic systems based on the coupled cluster theory have been performed by Das et al. [6]. Recently [7,8] the e iS method was applied within nonperturbative ap- proximations (Dynamic Correlation Model (DCM) and Boson Dynamic Correlation Model (BDCM)) to open shell nuclei. Applications of the method to open-shell electron systems were firstly applied to calculate the Hyperfine Splitting (HFS) constants of Lithium-like bismuth and uranium [10,12]. The resulting non-perturbative and relativistic electron Dynamic Correlation Model (eDCM) was applied to calculate the effect produced by the electron and nucleon correlations into the isotopic shift theory IS. Calculations for lithium atoms were presented in [13]. Additionally the method finds application in the evaluation of dielectronic satellite-spectra of Lithium-like ions [14,15,16]. These are a useful tool for diagnostic of laser produced plasma. 
The ratio of various components of the satellite lines have been shown to be sensitive to density and temperature. We start by describing free electron systems with a relativistic shell model in which the wave functions are solution of the Dirac's equation. The model vacuum consists in paired electrons to fill major shells. The electrons in excess are considered as valence particles. The interaction between the electrons is responsible for exciting the valence electrons and for causing correlation effects in the closed shells. In additions to this polarization mechanism we have also the polarization of the continuum states. This polarization effects named Boiling of the Vacuum (BoV), have been already introduced in [10]. As in Ref. [7] we start by defining the basic operators of the model and by determining the relative EoM. The complex excitations modes are classified in terms of electron Configuration Mixing Wave Functions (eCMWFs). The eCMWFs form an orthogonal base of coupled clusters in which the Pauli principle between the different clusters is taken fully in consideration. Extrapolation of the non-perturbative cluster model to describe the time dependent electron-laser interaction is feasible and planed. In this contribution we present two applications of the non perturbative eDCM. The first involves the evaluation of the relativistic transition energies and wave functions for the Oxygen ions ranging from the Helium-like to the Boron-like. In the second application we study the dynamics of fewelectron systems interacting with the excitation of the positron-continuum. The effect of this excitations is important in the determination of a non perturbative descriptions of the Mass Shift (MS) and Field-Shift (FS) which characterize the Isotopic Shift (IS) theory. Theory We start with a set of exact eigenstates {|ν } of the Dirac's Hamiltonian: which satisfies the dynamical equation In dealing with many electron systems one has to add the correlation effects caused by the two-body interactions: V (ij) Coul and V (ij) Breit to the Hamiltonian of Eq. (1). Shell model calculation can be then performed to calculate transition energies between the different levels. Shell model calculations represent however an approximation in that one usually treats the effects of only few shells. The neglected shells serve to re-normalize the interaction in the shells considered. The re-normalization of the Hamiltonian is generally introduced via correlation operators. In UMO the effective Hamiltonian is calculated by introducing the correlations via the unitary e iS operator. By using only two body correlation we can derive: where v 12 is the two body interaction and the Ψ αβ is the two particle correlated wave function: However in dealing with complex atoms the (S i , i = 3 · · · n) correlations should also be considered. The evaluation of these diagrams is, due to the exponentially increasing number of terms, difficult in a perturbation theory. We note that one way to overcome this problem is to work with e i(S1+S2+S3+···+Si) operator on the Slater's determinant of the different states by keeping the n-body Hamiltonian uncorrelated. After having performed the diagonalization of eigenvalue matrix obtained from the matrix elements of the n-body uncorrelated Hamilton's operator, we can calculate the form of the effective Hamiltonian which, by now, includes correlation operators of complex order. 
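As a purely illustrative numerical aside (not part of the original derivation), the unitary character of the e^{iS} construction described above can be checked on a toy model: for any Hermitian correlation operator S, the transformed Hamiltonian e^{-iS} H e^{iS} has exactly the same spectrum as H, so the correlation effects are absorbed into the operator rather than into the eigenvalues. The matrices and the random seed below are arbitrary stand-ins chosen only for the demonstration.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Toy Hermitian "Hamiltonian" on a small model space (arbitrary numbers).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = 0.5 * (A + A.conj().T)

# Hermitian correlation operator S -> unitary U = exp(iS).
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = 0.5 * (B + B.conj().T)
U = expm(1j * S)

# Effective (similarity-transformed) Hamiltonian.
H_eff = U.conj().T @ H @ U

# The unitary transformation leaves the spectrum untouched: the correlations
# are moved into the operator, not into the eigenvalues.
e_orig = np.sort(np.linalg.eigvalsh(H))
e_eff = np.sort(np.linalg.eigvalsh(H_eff))
print(np.allclose(e_orig, e_eff))   # True
```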
The amplitudes of the correlated determinant are the calculated in the EoM method which is illustrated in the following. If |0 denotes some physical vacuum and O † ν denotes the operator that creates the many-body eigenstate |ν such that O † ν |0 = |ν , O ν |0 = 0, and H|0 = E 0 |0 , then we have a set of EoM of the form In terms of the operators, the EoM can be written as In Eq. (6) the Hamiltonian has the general second quantization form where T is the kinetic energy operator and V int the interactions (V Coul +V Breit ), and the c † , c the general fermion operators. When they act on valence subspace, the c † and c creates and annihilates a valence electron, respectively. On the other hand, when they act on core subspace, the c † and c respectively annihilates and creates a hole state. Hence, the summation of the Greek subscripts leads to particleparticle, particle-hole, as well as hole-hole interactions. It is useful to determine the form of the central potential before the diagonalization of the model space is performed. This is because the matrix elements of the EoM can often be more easily calculated in a pre-diagonalization basis. The coefficients Ω br are simply the matrix elements of the Hamiltonian. To see this, we take the matrix element of both sides of Eq. (10) between the states s| and |0 . Upon using the orthogonality between the basis vectors (i.e. s|O † r |0 = δ rs ), one obtains If the model space consists of a finite number, N , of basis vectors, then going from Eq. (10) back to Eq. (6) is equivalent to associate the systems of coupled equations given in Eq. (11) to the eigenvalues matrix equation given below: where O represents the (N × N ) matrix Ω, 1 the N -dimensional unit matrix, and x are the projections of the model space into the basic vectors. Equations (10) and (11) indicate that the complexity of solving Eq. (12) depends on the complexity of the model space, {|b }, and the Hamiltonian, H. The following comparative review of the construction of model spaces in different structure theories should give a glimpse on the scope of the problem. Let O † m be the operator that creates n valence electrons outside the closed shells state |Φ 0 : In the simplest case where there is no closed shell excitation, the O † m satisfies the EoM, Eq. (10) with α m and α m ′ denoting the quantum numbers of the states |m and |m ′ , respectively. The inert-core approximation would be good only if the valence-core interaction is very small. Hence, the applicability of the inert-core approximation is very limited as the interaction between valence and core electrons will generally excite the shell-model ground state of the core and create, in the process, the particle-hole (ph) pairs. Inclusion of the excitation mode due to 1p1h in the model space is known as the Tamm-Dancoff approximation (TDA) [17]. If one defines then Eq. (9) takes the form The b † j2 creates a hole j −1 2 in |0 T DA by destroying a core electron of j 2 while a † j1 creates a valence electron of j 1 . The A † m creates therefore a state of n + 1 particles and 1 hole (or p n+1 h 1 ). The χ's are the configuration mixing coefficients and |0 T DA denotes the physical vacuum of the TDA. In the literature one often chooses |0 T DA = |HF , with |HF being the Hartree-Fock ground state of the ion. In this latter case, O † m ′ = 1 in Eq. (16). It is also possible to use a physical vacuum that already contains ph pairs. 
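A minimal sketch of the matrix eigenvalue form of the EoM, Eq. (12), introduced earlier in this section may help fix ideas. The 3x3 symmetric matrix below is an invented stand-in for Omega (not data from this work); diagonalizing it yields the energies together with the configuration-mixing coefficients x, whose squared moduli play the role of spectroscopic weights.

```python
import numpy as np

# Arbitrary symmetric stand-in for the model-space matrix Omega of Eq. (12)
# (units and values are purely illustrative).
omega = np.array([
    [-20.0,   1.5,   0.3],
    [  1.5, -12.0,   2.0],
    [  0.3,   2.0,  -5.0],
])

# Solving Omega x = E x gives the energies and the configuration-mixing
# coefficients x (projections of the eigenstates onto the basis vectors).
energies, x = np.linalg.eigh(omega)

for m, E in enumerate(energies):
    weights = x[:, m] ** 2          # spectroscopic weights, summing to 1
    print(f"state {m}: E = {E:8.3f}, weights = {np.round(weights, 3)}")
```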
In the literature, the method of random phase approximation (EPA) [17] has been introduced to study the full effects due to the pre-existence of 1p1h component in the physical vacuum. Hence, in RPA and one can see that the term b j2 a j1 gives a null result if the physical vacuum |0 RP A does not contain preexisting ph pairs. (In the literature, the coefficients χ j1j2 and χ J2j1 are denoted by x m j1j2 and −y m j2j1 .) If the RPA is applied to closed-shell, then again O † m ′ = 1 in Eq. (17). The introduction of the excitations of the vacuum in the above mentioned approximation is however complicated by the fact that the TDA and RPA vacua are different then the vacuum of the single particle operators. In addition simple calculations can be performed only by prediagonalizing the many body Hamiltonian in the TDA and RPA subspaces. The coupling to the additional valence particles can afterwards be accomplished by considering only few collective states and by neglecting the full treatment of the Pauling principle. In the following we show that these complications can be overcome by extending the EoM method to the field of non-linear equations. Polarization of the closed shells versus continuum vacuum excitations In the eDCM, the model space is expanded to include multiple ph excitations. This dynamic mechanism includes either the excitations of closed electron shells or of positron-continuum states. More specifically [8], the eDCM states are classified according to the number of the valence electrons and of the electron particle-hole pair arising either from closed shells or from the positron-continuum. A state of N paired valence electrons and N ′ particle-hole closed shells electrons or e − − e + positroncontinuum states is defined by where J denotes the total spin and the α ′ s the other quantum numbers. The unprimed indices 1, . . . , n label the valence particle-particle pairs ( the valence bosons) and the primed indices 1 ′ , . . . , n ′ label the particle-hole pairs (the core electrons). The J i 's denote the coupling of the pairs and the coupling of the different J i is for simplicity omitted. The X's are projections of the model states to the basic vectors of Eq. (19). Within this definition the model space included either the excitation of the closed shells or the dynamics of continuum excitation which is taken into account through coupling the valence electron states to e − − e + states. The electron states defined in Eq. (19) are classified in terms of configuration mixing wave functions (eCMWFs) of increasing degrees of complexity (number of particle-hole or of e − − e + pairs), see Ref. [7]. Since the different subspaces should be rotational invariant we introduce the coupling of the particles and particle-holes in such a way that the first pair is coupled to angular momentum J 1 , the second to J 2 , the two pairs are then coupled to J 3 and so on until all the pairs are coupled to the total angular momentum J, e.g., and Introduction of Eq. (20) into Eq. (10) gives the following equations of motion in the eDCM: where |0 is the shell-model state. Furthermore, we have used the notation p x h y for the indices of Ω to indicate the relevant xp − yh configuration. The additional commutator equations here are not given. In order to obtain eigenvalue equations we need to introduce a cut-off parameter: the GLA [7], which consists by applying the Wick's theorem to the A † N +2 ′ (β N +2 ′ (J 1 J 2 · · · J N +2 ′ )J) terms and by neglecting the normal order. 
This linearization mechanism generates the additional terms that convert the commutator chain in the corresponding eigenvalue equation, as can be obtained by taking the expectation value of the linearized Eqs. (23) and 24) between the vacuum and the model states. Using the anticommutation relations and the Wick's algebra, one verifies easily that H can only connect states that differ by 1p1h. The eigenvalue equation, Eq. (11), at the second-order linearization level is given by Eq. (25) where the subscripts referring to particle-hole configurations were not written explicitly but are understood. Note that in Eq. (25) The self-consistent method of solving Eq. (25) is given in detail in Ref. [8]. Here, we mention among others that in solving Eq. (25) the two-body interactions of H automatically generates nonlocal three-, four-interactions and so on. The diagonalization of Eq. (25) can be performed only if one can calculate the many-body matrix elements. Calculations are feasible with the use of the Wick's algebra. However the number of terms to be evaluated increase exponentially and calculations are very slow. In this work, we perform calculations by using the CFT of Ref. [7,8,11]. We believe that with the mastering of the essence of the CFT, matrix elements involving even more complex forms of operators can be easily deduced from the results obtained here. Transition energies in Oxygen ions The eDCM finds applications to the calculation of the transition energies of the Oxygen ions. In Table 1 we give the energies for the Hydrogen-like Oxygen. The energies are calculate solving the Dirac's equation in a central Coulomb potential. For the 1s 1 2 the calculated energy is compare wit the ionization energy of Ref. [18]. energies of the Helium-like Oxygen states are then obtained by solving Eq. (25). The indices (α, β) are associated to a two electron states coupled to a good J quantum number. The energies of the first three J = 0 + states, obtained by diagonalizing a matrix with 55 components, are given in ) 0 -1047.5 Table 2. The first three levels of Helium-like O 6 + with J=0 + and the associated spectroscopic factors. The energies of Lithium-like states are then obtained by solving Eq. (25). The indices (α, β) are associated to a three electron states coupled to a good J quantum number. The energies of the first three J = 3 2 − states, obtained by diagonalizing a matrix with 350 components, are given in Table 3 together with the associated spectroscopic factors. In order to calculate the transition energies of the Beryllium-like Oxygen we assume the first 1s 1 2 shell full and we diagonalize Eq. (25) with the indices (α, β) running over the unoccupied single particle states and the indices (α ′ , β ′ ) over the 1s 1 2 closed shell. The resulting energies for the three J = 1 − states obtained by diagonalizing a matrix of order 750, are given in Table 4 together with the relative spectroscopic factors. Spectroscopic factor Orbital In order to calculate the transition energies of the Boron-like Oxygen we assume the 1s 1 2 shell full and we diagonalize Eq. (25) with the indices (α, β) running over the unoccupied single particle states and the indices (α ′ , β ′ ) over the closed shell. The resulting energies for the three J = 0 + states obtained by diagonalizing a matrix of order 614, are given in Table 5 together with the relative spectroscopic factors. 
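For orientation, the hydrogen-like reference energies of Table 1 can be compared with the closed-form point-nucleus Dirac-Coulomb spectrum. The sketch below is our own illustration, not the numerical procedure of this work: it uses the textbook formula with no finite-nuclear-size or QED corrections, and the function name is ours. For the 1s1/2 state it gives roughly 871 eV, of the order of the known O^7+ ionization energy.

```python
import numpy as np

ALPHA = 7.2973525693e-3          # fine-structure constant
MEC2_EV = 510_998.95             # electron rest energy in eV
Z = 8                            # oxygen

def dirac_binding_energy_ev(n, kappa, z=Z):
    """Point-nucleus Dirac-Coulomb binding energy (positive, in eV)."""
    za = z * ALPHA
    gamma = np.sqrt(kappa**2 - za**2)
    e_over_mc2 = 1.0 / np.sqrt(1.0 + (za / (n - abs(kappa) + gamma))**2)
    return (1.0 - e_over_mc2) * MEC2_EV

# (n, kappa) for 1s1/2, 2s1/2, 2p1/2, 2p3/2
for label, n, kappa in [("1s1/2", 1, -1), ("2s1/2", 2, -1),
                        ("2p1/2", 2,  1), ("2p3/2", 2, -2)]:
    print(f"{label}: {dirac_binding_energy_ev(n, kappa):8.2f} eV")
```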
Non-linear realization of the IS theory
The knowledge of the theoretical and experimental mass dependence (MS) of selected atomic transitions, together with theoretical calculations of the volume effect (FS), makes it possible to determine the mean-square nuclear radii of short-lived isotopes [19]. Recent values for the nuclear charge radii of short-lived lithium and helium isotopes have been obtained from measurements performed at GSI, Vancouver [20], and Argonne [21]. The measurements of the 2 2S1/2 → 3 2S1/2, 2 2S1/2 → 2 2P3/2, and 2 2S1/2 → 2 2P1/2 transitions, together with the recently performed calculations [22] of the same transitions in lithium and helium atoms, were used to extract the difference between the nuclear charge radii of the short-lived isotopes and the charge radius of the stable isotope. In this paper we propose to re-evaluate the MS and the FS in a non-perturbative approximation based on the application of the eDCM. We start by calculating the energies of the lithium atoms by diagonalizing Eq. (25) in a basis formed by three electrons in the (s, p, d) single-particle states, which interact with the BoV states formed by exciting the e−-e+ continuum states. Results of this calculation for the 2s and 3s states are shown in Table 6, which gives the calculated energies of the 1s²2s and 1s²3s states in different models. According to Ref. [19], in order to evaluate the Mass Shift (MS) we have to add the additional term ∇i·∇j to the eigenvalue equation and to rescale energies and distances with the reduced mass of the electron. The matrix element of ∇i·∇j can be calculated as in Ref. [13], while the rescaling of the energies can be obtained by adding an R_nucl·r_i electron term to Eqs. (23) and (24) and re-diagonalizing the matrix given in Eq. (25). The correlations of the nucleus, which enter via the additional matrix elements given above, are in general approximated by non-relativistic perturbative calculations [22]. The FS term [28] factorizes into a constant in which δr denotes the expectation value of the electron density at the nucleus multiplied by the isotopic variation of the charge radius. The polarizability of the nucleus, which influences the calculation of this constant, has been evaluated relative to the polarization of deuterium [27]. Since the FS is generally calculated in the point-nucleus approximation, calculations performed within the DCM (nucleus) and the eDCM (electrons) correlation models could give better insight into the FS calculation. Calculations of the IS for the isotopes of lithium and helium are currently in progress and will be reported soon.
Transition energies in Lithium-like 235U
The 2s-2p transition of Lithium-like 235U is calculated in the eDCM. The result is given in Table 7 and compared with the QED calculation of Yerokhin [29] and with the experimental result [30]. By using the resulting eCMWFs for the 2s1/2 and 2p1/2 states we can calculate the hyperfine splitting (HFS) of the two states. The calculations are performed by coupling the three-electron wave functions to the ground-state wave function of 235U. For the nuclear ground-state wave function we use a DCM which reproduces well, within a large dimensional space, the nuclear energies and moments of the 2f7/2 valence neutron. Detailed calculations will be reported soon.
Conclusion and Outlook
Better energies could be obtained by using the Hartree-Fock method.
The approximation we have used introduces an error in the calculated energies that varies, depending on the electron energy considered, from 0.1 to a few percent. A better estimate of the errors could, however, be obtained, as suggested by Drake, by evaluating elementary excitation processes in light atoms such as hydrogen. For this purpose we are investigating the two-photon transitions in hydrogen, which would allow us to establish a connection between the present non-perturbative method and QED perturbation theory.
A modified Johnson-Cook model for dynamic behavior of spray-deposition 17 vol.% SiCp/7055Al composites at high strain rates In this study, the dynamic impact tests of spray-deposited 17 vol% SiCp/7055Al composites at various strain rates were performed with a Split Hopkinson Pressure Bar (SHPB). In these tests, the strain rate was 392 s−1–2002 s−1, and the temperature was 293 K–623 K. Subsequently, the Johnson-Cook (JC) was used to describe the flow behaviors under high speed impact deformation, and its effectiveness was assessed. Results show that the stress values predicted by the JC model could be inconsistent with the experimental ones. A modified JC constitutive model of 17 vol% SiCp/7055Al composites was developed by modifying the strain rate hardening term and considering coupling effects of strain, temperature and strain rate. According to the comparison between the experimental data and the results assessed with the modified JC model, the proposed model could assess the stress-strain values more accurately, especially in the beginning of plastic deformation. This indicates that the composites exert the joint effects of strain rate hardening and temperature softening during high-speed impact deformation. In recent decades, domestic and foreign scholars have fabricated SiCp/Al composites based on spraydeposition and conduct relevant research [11][12][13]. However, existing researches mainly focus on mechanical properties, the interface effects on the mechanical properties and static deformation characteristics [14,15]. As a matter of fact, composite components are likely to experience dynamic impact loading in several applications, it is generally known that all materials exhibit different deformation characteristics under static and dynamic loading conditions [1,5]. Studying flow behaviors of SiCp/Al composites at high strain rates is critical to explain the dynamic characteristics of the material in their application [16]. However, the dynamic impact test can obtain the performance parameters and dynamic flow characteristics of the spray-deposition SiCp/Al composites at high strain rates. Under the varied loading, flow behaviors of the materials were affected by the strain, strain rate as well as temperature. On the whole, the flow behaviors of materials during hot deformation are complex, whereas the constitutive relationship can describe the stress-strain relationship of materials in a mathematical model [16][17][18]. The applicable constitutive model should be capable of expressing the dynamic characteristics of the materials under various loading conditions, which is a prerequisite for accurate numerical analysis of material deformation including finite element simulation [17]. Constitutive equations of materials have been primarily split into two categories [19]: Physically-based constitutive models and phenomenological constitutive models. As compared with physics-based models, however, phenomenological constitutive models involve fewer material constants and require limited experimental data; they are always prioritized by the users to assess the stress-strain values of materials. Besides, the phenomenological constitutive models have been successfully adopted to describe the sophisticated flow behaviors of materials under larger loading forming conditions [20,21]. The Johnson-Cook (JC) constitutive model has been extensively employed in phenomenological constitutive models for its simple form and simplified calculation [21][22][23]. 
The JC model initially proposed by Johnson and Cook in 1983, has been adopted for large deformation, high strain rates and high temperature of metals [23][24][25]. The JC model considering strain rate hardening, strain hardening and thermal effect; it can describe the flow behaviors of various materials under specific loading conditions. It is noteworthy that the JC model has been extensively used in impact dynamics research [26]. The original JC model only gives the expression of yield stress, and the material constants are easy to acquire from experimental. However, it is found that the original JC model has some deviations in the prediction of deformation behavior. To enhance the accuracy of the JC model, the model also has been widely modified in available literature. Among these proposed modified JC models, strain, strain hardening and temperature softening terms in the modified models have been applied most frequently. These researches primarily established JC models of alloys and partial composites. At present, the spray-deposition processed SiCp/Al composites have been rarely reported, let alone constructing the JC constitutive equation of spray-deposition SiCp/7055Al composites. Thus far, whether the existing related constitutive model is applicable to the SiCp/7055Al composites remains unclear. In the present work, dynamic uniaxial compression tests were performed on the spray-deposited 17 vol% SiCp/7055Al composites, the stress-strain data were obtained with a Split Hopkinson Pressure Bar (SPHB); besides, their effects on the flow behaviors were discussed. The JC model and the modified JC model constitutive equation of SiCp/7055Al composites were constructed to describe the dynamic behavior. The deformation behaviors of SiCp/7055Al composites under various strain rates at different temperatures were discussed. As revealed from the results, the predicted values of the original JC equation are greatly different from the experimental values, and the proposed modified JC equation is capable of precisely assessing the composites deformation behaviors. Experimental material In the present study, the commercial 7055 aluminum alloy was used. 7055 aluminum alloy have been widely used in aerospace, transportation and other fields because of its excellent mechanical properties, its chemical composition is given in table 1. SiC particles with the size of 15-20 μm acted as the reinforcement. The SiCp/ 7055Al composites were fabricated by spray-deposition. Before spray-deposition, to reduce the agglomeration of SiC particles, the SiC particles were heated for 10 h at 523 k to remove crystalline water and adsorbents. The spray-deposition process parameters included: Atomization temperature at 1023-1123 K; Nebulizer pressure under 0.6-0.8 MPa; The diameter of sedimentary disk as 530 mm; Matrix rotation speed at 150-250 r min −1 ; Powder-feeding pressure under 0.1-0.2 MPa. The volume fraction of added alpha-SiC particles was 17%, and the density of the composite was 92.3%. The size of the deposited cylindrical sample is 160×320 mm. Experience of dynamic tests The cylindrical impact sample size is 10 mm in diameter and 6 mm in height. The cylindrical bar specimens processed from the original spray-deposition 17 vol% SiCp/7055Al composites with EDM PW2UP wire cutter. And all the test specimens were processed to develop the coincident axis along the radial direction to ensure consistency. 
SHPB was adopted to perform the dynamic compressive tests, and the incident, reflected and transmitted waves were transmitted to the data-processing system through strain gauges mounted on the incident bar and the transmission bar. The strain-gauge resistance of the SHPB data-acquisition system is 120 Ω. To minimize the influence of shock waves on the experimental results, a small amount of vaseline was applied to both ends of the sample and a 2 mm long rubber pad was applied to the other end of the incident bar. In accordance with the one-dimensional stress wave theory, the strain rate (ε̇), strain (ε) and stress (σ) of the tested material can be written as ε̇(t) = −(2C/l0)εr(t), ε(t) = −(2C/l0)∫0^t εr(τ)dτ and σ(t) = E(A/A0)εt(t), where C and E are the elastic wave velocity and the Young's modulus of the bars, respectively; εi, εr and εt are the incident, reflected and transmitted wave amplitudes, respectively; l0 is the initial length of the specimen; and A and A0 are the bar and specimen cross-sectional areas. The dynamic stress-strain curves can be obtained by eliminating the time term. The specimens were subjected to high-speed impact tests on the SHPB at strain rates ranging from 392 s−1 to 2002 s−1 and at temperatures of 293 K, 523 K, 573 K and 623 K, respectively. The XRD pattern indicates that the precipitated phase of the material is Al2Cu. In the meantime, some weak peaks of magnesium compounds (such as Al2CuMg and MgZn2) were also found in the XRD pattern, but these could not be unambiguously identified as Al2CuMg and MgZn2. This may be due to the low content of Mg-compound precipitates in the spray-deposition composites, as shown in figures 1(c) and (d).
Flow behavior
The strain rate (ε̇), strain (ε) and stress (σ) data of the high-speed impact experiments were obtained by using the one-dimensional stress wave theory; the calculation formulas are equations (1), (2) and (3), as shown in section 2.2. The true stress-true strain curves obtained from the SHPB tests are illustrated in figure 2. It can be seen from figure 2 that the stress values increase rapidly at the initial stage and then, with increasing strain, the flow stress gradually evolves towards a steady-state flow stage. The increase of the flow stress values with the strain rate shows that the material has a positive strain-rate sensitivity at the various temperatures. In the initial stages of dynamic deformation, strain hardening and strain-rate hardening both affect the flow behavior of the 17 vol% SiCp/7055Al composites. The true strain of the material is clearly larger at high strain rates: the higher the strain rate, the larger the true strain. However, the true strain values are very low when the strain rate is around 400 s−1, and there is no obvious plastic deformation stage. This is primarily because the amount of deformation was small and the heat accumulation in the material was insufficient to produce a significant increase in dislocation motion.
Johnson-Cook model
The original JC model considers the effects of strain, strain rate and temperature on the plastic deformation of metals. Owing to its simple form and ease of use, with variables already available in most calculation programs, the model has been widely used in high-speed impact dynamics studies.
The JC model is expressed as follow [25]: where s is the flow stress, A, B n, C and m are material constants, A is quasi-static yield stress (MPa) at reference temperature and reference strain rate, B is strain hardening parameter (MPa), n is strain hardening exponent, C represents the coefficient of strain rate hardening; m is thermal softening exponent. e p is equivalent plastic strain, and  e* is dimensionless equivalent plastic strain rate, which is expressed as /    e e e = . Where, T is deformation temperature, T m is melting temperature of the composites at normal conditions and T r is the reference temperature. T r can be room temperature, the lowest temperature of interest or the lowest temperature of the experiment, and T m * cannot be negative. In the present experiment, the reference temperature is T r =293 K and the reference strain rate is  e 0 =0.001 s −1 to evaluate the material constants of the JC model. The elastic modulus and yield stress (A) can be obtained by quasi-static test, as shown in figure 3. It is found that the SiCp/7055Al composites have no obvious yield point, therefore, σ 0.2 was taken as the yield point, and the yield stress of the material is 242.29 MPa. The melting point of SiCp/7055Al composite is T m =900 K, the DSC curve as shown in Then equation (6) can be denoted as: The parameters B and n could be obtained from the straight line fitted to the plastic deformation (after the yield point) of the quasi-static experimental data. The ln(σ-A)-lnε is plotted, and subsequently a linear fitting was performed, as shown in figure 5(a). The values of B and n could be calculated from the intercept and the slope of the fitting line, B =5383 MPa and n=1.33, respectively. Determination of constant C In equation (4), the second bracket on the right side of the equal sign indicates the strain rate enhancement effect, and the parameter C is the material strain rate sensitivity coefficient. At the test temperature of T=Tr=293 K, the relationship between the dynamic yield stress and the strain rate at normal temperature can be obtained as: deformation, the JC model cannot accurately predict and loses its application value. It is therefore revealed that the original JC model cannot well describe the flow behaviors of the composite in the range of high strain rates, temperatures and strains. As a result, the JC model cannot comprehensively reflect the deformation mechanical properties at high-speed of the composites, to obtain a better prediction, the JC model needs to be modified. Modified Johnson-Cook model The JC model refers to a strain rate dependent constitutive model, considering strain, strain rate and temperature separately. These parameters can be determined by a few experiments. However, a considerable number of theories and experiments reported that shear modulus is a function of pressure and temperature [16,[26][27][28]. Thus, if the model is not modified, it cannot appropriately describe the stress-strain relationship of spray-deposition SiCp/7055Al composites at high-speed impact, as shown in figure 7. In addition to the effects of strain, strain rate, and temperature on the flow stress, the phase transition, dislocation density and material structure during deformation also have new effects on flow stress [16,[29][30][31]. To remedy the defects of the JC model, the coupling effects of the flow stress of these three factors were considered. Some modified JC models were proposed in the literature [22,23,[32][33][34]. 
These modified JC models can well describe the stress-strain characteristics of theirs materials. The expression of the most widely used in these provided modified JC models is shown in equation (13). To determine whether this modified JC model is suitable for spray-deposition SiCp/ 7055Al composites, the equation (13) is verified. and  e ln * plots as shown in figure 9, in which the mean average slopes of the regression lines is the value of C 1 =0. 23. Rearrange equation (13) and take natural logarithm on both sides, could be obtained equation (17), expressed as follow: The reason for the excessive errors is that the SiCp/7055Al composites not only has the strain rate hardening effect but also the softening effect, which is not taken into account in equation (18) [ 16,23,[35][36][37]. For some materials, the combined effect of thermal softening and strain hardening on the flow stress should be considered during dynamic deformation [22,23,38]. In view of the influence of thermal softening effect, it is reasonable to retain the thermal softening exponent in the temperature softening item. Considering the interaction influences of the factors on the flow stress, a modified JC model was proposed, as shown in equation (19). Rearranging equation ( Analysis of constitutive equation accuracy To verify the reliability and practicability of modified Johnson-Cook model for the spray-deposition SiCp/ 7055Al composites at high strain rates, the experimental stress-strain values are compared with the stress-strain values predicted by the modified JC model, as given in figure 13. It can be found by comparing figures 7 and 13 that the proposed modified JC model exhibited accurate predictions with experimental results at different strain rates and varying temperatures. Since the modified JC model takes into account the coupling effects of temperatures, strain rate and strain rate softening effects on flow stress the accuracy of the proposed modified JC model are significantly improved. Note that in the initial deformation stage, the predicted values and the experimental values almost coincide at most conditions. However, when the strain is larger than 0.06, the predicted values gradually deviates from the experimental values, except at 523 k/971 s −1 . This is primarily due to the limited elongation of the material under the reference conditions. Thus, the prediction results are inaccurate once the impact deformation exceeds the material elongation. For the temperature at 523 K and the strain rate of 971 s −1 , the predicted values are more accurate when the strain is larger than 0.06, which may be due to experimental errors or changes in the microstructure of the material under this condition. Given this, the appropriate strain of the modified JC model should not exceed 10%. In fact, the addition of SiC particles greatly reduces the plasticity of the composites and improves the strength, so it is sufficient to predict the values of stress-strain in the initial stages of plastic deformation. To compare the prediction accuracy of the JC model and the modified JC model, an error analysis was performed, as shown in table 3. The error between the predicted values and the experimental values was calculated by equation (22). Table 3, clearly shows that the commonly used modified JC model (equation (13)) and the original JC model errors were noticeably larger than those of the proposed modified JC model in this paper. 
The standard deviations of the JC model were 10.97% and 9.06% at 523 K and 573 K, while those of the proposed modified JC model were 4.68% and 2.23% at the same temperatures. By contrast, the standard deviations of the modified JC-1 model at 523 K and 573 K are 15.75% and 34.43%, respectively, so the modified JC-1 model is not suitable for spray-deposition SiCp/7055Al composites. The proposed model can accurately predict the flow stress of the composites. The proposed modified JC model was built, and its effectiveness was verified against the experimental data of spray-deposition 17 vol% SiCp/7055Al composites under high-speed impact in the temperature range 293 K-623 K and the strain-rate range 392 s−1-2002 s−1. The proposed modified JC model can effectively assess the stress of the composite within these ranges.
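To make the calibration route used above concrete, the sketch below evaluates the original JC flow stress of equation (4) and recovers B and n from a ln(σ−A) versus ln ε fit, as done for figure 5(a). Only A = 242.29 MPa, Tm = 900 K, Tr = 293 K, the reference strain rate of 0.001 s−1 and the nominal B = 5383 MPa and n = 1.33 come from the text; the synthetic quasi-static points and the placeholder C and m used in the final evaluation are invented purely for illustration.

```python
import numpy as np

# Values taken from the text.
A = 242.29                 # MPa, quasi-static yield stress (sigma_0.2)
T_R, T_M = 293.0, 900.0    # K, reference and melting temperatures
EPSDOT_0 = 1e-3            # 1/s, reference strain rate

def jc_stress(eps_p, epsdot, T, A, B, n, C, m):
    """Original Johnson-Cook flow stress, equation (4)."""
    t_star = max((T - T_R) / (T_M - T_R), 0.0)
    return (A + B * eps_p**n) * (1.0 + C * np.log(epsdot / EPSDOT_0)) \
           * (1.0 - t_star**m)

# Synthetic quasi-static points generated with the reported B and n, then the
# constants are recovered from the slope/intercept of ln(sigma - A) vs ln(eps).
eps = np.linspace(0.01, 0.10, 10)
sigma = A + 5383.0 * eps**1.33
slope, intercept = np.polyfit(np.log(eps), np.log(sigma - A), 1)
print(f"n = {slope:.2f}, B = {np.exp(intercept):.0f} MPa")   # ~1.33, ~5383

# Evaluation at a high strain rate and elevated temperature; C and m here are
# placeholder values, not fitted constants from this work.
print(f"{jc_stress(0.05, 1000.0, 523.0, A, np.exp(intercept), slope, C=0.02, m=1.0):.1f} MPa")
```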
Structural and dynamical insights into SilE silver binding from combined analytical probes † Silver has been used for its antimicrobial properties to fight infection for thousands of years. Unfortunately, some Gram-negative bacteria have developed silver resistance causing the death of patients in a burn unit. The genes responsible for silver resistance have been designated as the sil operon. Among the proteins of the sil operon, SilE has been shown to play a key role in bacterial silver resistance. Based on the limited information available, it has been depicted as an intrinsically disordered protein that folds into helices upon silver ion binding. Herein, this work demonstrates that SilE is composed of 4 clearly identified helical segments in the presence of several silver ions. The combination of analytical and biophysical techniques (NMR spectroscopy, CD, SAXS, HRMS, CE-ICP-MS, and IM-MS) reveals that SilE harbors four strong silver binding sites among the eight sites available. We have also further evidenced that SilE does not adopt a globular structure but rather samples a large conformational space from elongated to more compact structures. This particular structural organization facilitates silver binding through much higher accessibility of the involved His and Met residues. These valuable results will advance our current understanding of the role of SilE in the silver efflux pump complex mechanism and will help in the future rational design of inhibitors to fight bacterial silver resistance. Introduction Silver-based antiseptics have a wide range of applications in health care due to their high availability, broad spectrum, and safety for humans. [1][2][3] However, silver-resistant Gram-negative bacteria were reported and the resistance mainly relies on the presence of a plasmid-encoded specific silver resistance sil operon 4 which is homologous to the copper resistance cus operon. Both operons include the tripartite efflux pump CBA that acts as a proton antiport, an RS system that senses the metal present in the periplasm and enhances the operon gene expression accordingly, and a metal chaperon F that routes metal ions to the efflux pump. The SilE protein does not have any counterpart in the Cus operon but has a homolog in E. coli, PcoE, that displays 48% identity in sequence with SilE. A previous work has reported that the binding of PcoE with Cu + and Ag + induces its dimerization and partial structural folding while it has been shown that PcoE can bind up to 6 equivalents of Ag + before precipitation. 5 Gene deletion studies show a complete loss of bacterial silver resistance capacity when the SilE gene is deleted along with the Cus operon. 6 SilE has its own promoter and is early expressed while bacteria are exposed to silver. 4 Considering that new silver-based drugs are under development to fight multi-resistant bacteria, 1,7,8 the appropriate design of such new drugs relies on our understanding of the molecular mechanism underlying silver resistance. Thus, providing insights into silver-bound SilE structure and dynamics will improve our knowledge of the role of this essential sil operon component concerning the efflux pump machinery. In a previous work of Asiani et al., SilE was described as an intrinsically disordered protein (IDP) that acts as a ''molecular sponge'' dedicated to the early sequestration of silver ions. 
9 Nevertheless, these previous studies concerning SilE and PcoE lack a clear identification of different silver binding sites at the atomic level and no experimental findings regarding a possible tertiary structure have been reported. Recently, we have improved our knowledge in the field by deriving NMR (nuclear magnetic resonance) structures of SilE mimicking peptides and shown that the sequences encompassing two successive motifs HxxM or MxxH fold into helices in the presence of silver ions. 10 In the same order of idea, we have determined low micromolar dissociation constants from the NMR titration analysis. From the vast landscape of biophysical techniques dedicated to the study of IDPs, NMR spectroscopy is particularly well suited especially in the case of complexes where disorder is at least partly retained. 11 Structure and dynamics can be explored at the atomic level even for low-affinity complexes or low-populated states. Although NMR spectroscopy is powerful on its own, the efforts toward our detailed understanding of the conversion between fully IDP and partially disordered complexes introduce significant technical challenges making an integrative structural biology approach necessary. For instance, the stoichiometry for multisite ligand binding is often difficult to assess by NMR spectroscopy. Moreover, spectral quality can significantly decrease due to dramatic line broadening when interconversion rates are at the time regime of chemical shifts. 12 In the present work, we have advanced our knowledge regarding the structural organization of SilE compared to previous studies. Therefore, we propose an original combination of analytical techniques to characterize the structure and dynamics of silver binding to SilE, including nuclear magnetic resonance (NMR) spectroscopy and circular dichroism (CD) for local structure and dynamics characterization; small-angle X-Ray scattering (SAXS) and ion-mobility-mass-spectrometry (IM-MS) for global shape analysis; liquid chromatography high-resolution mass-spectrometry (LC-HRMS) and capillary-electrophoresis inductively coupled-plasma mass-spectrometry (CE-ICP-MS) for assessing the stoichiometry of the metal-protein complexes. In this regard, we will show how these complementary techniques allow the characterization of the SilE binding sites at the atomic level. An assessment of the stoichiometry of the complexes formed but also the structural modifications that SilE undergoes when complexed with Ag + will be provided. In a final step, we will use a combination of AlphaFold structural prediction, SAXS, and NMR spin relaxation measurements to propose for the first time an insightful structure of the silver-bound SilE protein. Overall, this ensemble of results will allow an in depth overview of the role of SilE in the general context of the silver efflux pump machinery. Results and discussion The SilE sequence (Fig. S1, ESI †) used in this work was defined according to the wild-type sequence of the Salmonella Typhimurium SilE (UniProt accession number Q9Z4N3). The first twenty residues of the signal peptide that routes SilE to the periplasm have been removed. Therefore, the SilE 21-143 construct has been successfully expressed and isotopically labeled with E. coli cells. As indicated in blue in the sequence, SilE is composed of 6 HxxM, 2 MxxH, and one HxM motifs. As previously described by Chabert et al., 10,13 silver ions can bind His and Met residues and association binding constants have been calculated for each motif. 
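As a side note on locating the His/Met motifs discussed above, a simple regular-expression scan is enough to enumerate HxxM, MxxH and HxM occurrences along a sequence. The sequence in the snippet is a made-up example, not the actual SilE 21-143 sequence given in Fig. S1 (ESI); in practice such a scan only flags candidate sites, and the titration data described below are what establish which of them actually bind silver.

```python
import re

# Made-up example sequence; the real SilE 21-143 sequence is in Fig. S1 (ESI).
seq = "AQHETMKKAAMDQHSSAQHAMNNKHEFMQQ"

# HxxM and MxxH (x = any residue) plus the shorter HxM motif.
patterns = {"HxxM": ("H..M", 4), "MxxH": ("M..H", 4), "HxM": ("H.M", 3)}

for name, (pat, length) in patterns.items():
    # A zero-width lookahead keeps overlapping occurrences.
    hits = [(m.start() + 1, seq[m.start():m.start() + length])
            for m in re.finditer(f"(?={pat})", seq)]
    print(f"{name}: {hits}")
```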
Silver-bound SilE complexes exist at different stoichiometries As indicated above, the SilE binding sites are widely distributed over its amino acid sequence from M56 to G143, with a minor site around the position M40. Since the global stoichiometry of the silver-bound SilE complex needs to be determined, mass spectrometry is particularly well suited for this analysis. Solutions of SilE containing 0 to 10 equivalents of Ag + relative to SilE were analyzed using positive electrospray ionization high-resolution mass spectrometry (ESI-HRMS). The same stock solution of silver nitrate (oxidation state I) has been used throughout all titration studies presented hereafter and the accuracy of the Ag + solution concentration was assessed before any assay using a calibration curve and a silver ion selective electrode. The SilE protein was observed with a charge state distribution from 6 + to 21 + protonated adducts (Fig. S2A, ESI † SilE in the free state) for HR-MS and IM-MS experiments. With an increase in the concentration of Ag + solution, protonated [SilE:nAg + ] complex ions were also detected. As an example, Fig. S2B (ESI †) shows the ESI-MS spectrum of SilE with 5 equivalents of Ag + solution, where SilE protonated ions and the peaks corresponding to [SilE + nAg + (z À n)H] z+ complexes with n = 1 to 5 were detected (see the inset in Fig. S2B, ESI †). Some hybrid adducts with Ag + and 1 to 3 Na + ions were also observed in the MS spectra. To simplify and better observe the different stoichiometries of [SilE:nAg + ] complexes, all the recorded ESI-MS spectra were deconvoluted. The deconvoluted spectra, averaged over the end of the elution band in order to avoid chemical artefacts, are shown in Fig. 1(A) for the different molar equivalents of Ag + in solution. The number of Ag + bound to one molecule of SilE increases with the concentration of Ag + present in the solution. A high abundance of 2 to 6 Ag + per SilE molecule was observed, while complexes with 7 or 8 Ag + were only observed at low abundances (even with a higher concentration of Ag + ). However, higher concentrations of Ag + did not allow for the observation of more Ag + complexes, suggesting the saturation of the SilE complex sites. More quantitatively, the relative abundance of the different complexes [SilE-nAg + ] were determined by integration of the deconvoluted spectra over the full elution band and are plotted in Fig. 1(B) as a function of molar equivalents Ag + in solution. The free SilE protein is the most intense species detected for 1 to 6 equivalents Ag + in the solution ( Fig. 1(A)). However, starting from 4 equivalents of Ag + , the distribution shifts significantly towards the higher stoichiometries. Remarkably, only little 1 : 2/1 : 3 and 1 : 4 complexes seem to be formed before 3, 4, and 5 equivalents of Ag + are respectively added to the solution (while 1 : 5 and 1 : 6 already appear at 5 and 6 equivalents). After 4 molar equivalents of Ag + in the solution, the abundances of the 1 : 1 complex start decreasing while the 1 : 2 and 1 : 3 complexes soon reach a plateau and the 1 : 4 complex progressively takes over. Up to 10 equivalents of Ag + , the abundance of the 1 : 4 complex remains maximal, pointing towards 4 strong complexation sites. To further confirm the observed stoichiometry in ESI-HRMS, the samples were analyzed using CE-ICP-MS. From 1 to 4 equivalents of Ag + in solution, the quantity of Ag + bound to SilE linearly increases ( Fig. 1(C) and (D)). 
The nearly quantitative formation of the complex indicates a high affinity of SilE for Ag + . However, for higher concentrations of Ag + , no increase of the silver bound to SilE was detected but only free Ag + ( Fig. 1(D)). This result indicates that either we have reached the saturation of SilE binding sites or the additional labile sites exist. In the latter case, the potential sites could not be detected by using this CE-ICP-MS strategy. Indeed, the CE separation relies on the electric field applied along the capillary and the analyte must join the anode to be analyzed while silver ions are positive. We hypothesized that labile sites are prompted to dissociate while the migration front is quickly depleted from silver. To challenge the potential presence of such labile sites, the same experiments have been performed but with increasing electrolyte Ag + concentrations (1-3 mM). The results are shown in Fig. 1(E) and (F). Increasing the electrolyte Ag + concentrations led to an increase of the SilE-bound Ag + peak areas until a concentration close to 1 mM. For the higher Ag + electrolyte concentrations, the [SilE:nAg + ] peak area remains constant, which reflects the saturation of the protein. The quantification of the signal at the plateau was performed by external calibration and revealed the complexation of 7-8 Ag + per protein under these non-dissociating conditions. These results agree with those of our ESI-HRMS studies and further corroborate the maximum number of Ag + that can specifically bind to SilE. As a consequence, it can be assumed that SilE contains 4 kinetically inert complexation sites, which can be seen by directly injecting the complex, and 3-4 more labile sites for Ag + , which require an equilibrium state all along the separation to be detected. SilE is mainly disordered in the free state Using 3D standard NMR experiments, the backbone assignment of SilE has been performed (H N , H a , C a , C b , and N). The 2D 1 H-15 N HSQC NMR spectrum including amino acid numbering is shown in Fig. 2 Resonances are dispersed in a narrow spectral range from 8 to 8.7 ppm, which is the hallmark of an intrinsically disordered protein. 14 The observed chemical shifts result from the average of a conformational ensemble the protein may adopt if the exchange time scale is much faster than the chemical shift time scale. Consequently, the propensity to form different secondary structures can be assessed from the analysis of the backbone chemical shifts. 15 For the amino acid regions 76-88, 112-117, and 132-139, we observe a propensity for the transient a-helix structure between 10% and 27% while the rest of the SilE sequence appeared in a fully extended conformation ( Fig. 2(C)). To support these findings, we have set up NMR spin relaxation experiments that are closely related to the global and local motion of the protein. For the region that exhibits a propensity for the a-helix, the mean value of R 2 reaches 5.1 s À1 while it is 3.8 s À1 for the rest of the sequence (excluding N-and C-ter). Additionally, the average hetNOEs show values of 0.30 and 0.1 for the same regions, respectively. The latter one is a typical value for IDP. 16 This effect is less pronounced in R 1 experiments due to its higher sensitivity for local motion compared to global motion. The HetNOEs are mainly dependent on the local N-H bond rigidity or flexibility, while R 2 is also affected by the global motion. The N-and C-termini present highly negative hetNOE values indicating high flexibility. 
This corroborates that more rigid conformations are sampled by the regions 76-88, 112-117, and 132-139 compared to the purely intrinsically disordered behavior of the rest of the protein ( Fig. 2(D)). SilE has several silver binding sites To investigate the silver binding areas, we have performed NMR titration experiments, recording a series of 1 H-15 N HSQC spectra on 15 N-SilE by gradually increasing the Ag + solution concentration (Fig. S3, ESI †). Along the titration experiments, we can make different observations that correspond to different exchange regimes. First and foremost, a set of 40 peaks experienced a slow exchange rate. Second, 33 other peaks exhibit a very strong decrease of their signal intensity so that they completely disappeared from the different spectra under saturating conditions. At this level, we can rule out a paramagnetic effect due to Ag + binding. Indeed, Ag + has an electron configuration of 5s 0 4d 10 so that it does not show any unpaired electron and paramagnetic properties. 17 Consequently, signal disappearance may be ascribed to the intermediate exchange regime or oligomerization. 18 This statement is also corroborated by the very strong increase of the transverse relaxation rate in this region (see below) and is likely due to the significant contribution of R ex to the global R 2 . Therefore, we can rule out any oligomerization effect in the present case. The expected number of peaks was never recovered even with the addition of a large excess of Ag + (up to 50 eq.). As can be seen in Fig. S4 (ESI †), any attempt to recover these peaks was also unsuccessful either by modifying magnetic field (600 and 900 MHz), temperature (from 283 to 323 K), pH (5.2 to 7.8), ionic strength (20 to 300 mM NaF) or pressure (high pressure NMR experiments up to 2250 bars). Therefore, we have assigned the chemical shifts of SilE under silver saturating conditions (H N , H a , C a , C b , and N). All visible 1 H-15 N correlations correspond to residues 19-58, 76, and 93-143 ( Fig. 2(B)). Unfortunately, residues 59 to 92 are then out of reach for studying binding, structures, and dynamics by NMR. It is noteworthy that both spectra in the free and silverbound states are in stark contrast compared to the ones previously published where neither assignment nor structural information have been derived. 9 The NMR chemical shift perturbations (CSP) upon the addition of 6 equivalents of Ag + were calculated for each amino acid (Fig. 3). From S104 to A142, free and bound states are visible due to the slow exchange regime. The proportion of the bound state can be estimated from the NMR signal ratio between both states (Fig. S5, ESI †). For 1 equivalent of Ag + , 35% of the SilE region encompassing residues S104 to A142 is in its bound conformation and reaches 91% for 6 equivalents of Ag + . No significant changes occur for 6 to 9 equivalents of Ag + . Large variations of CSP are observed for residues E110 to N122 and residues H129 to S141. In these two regions, several His (H111, H118, H129, H136) and Met (M121, M132, M139) residues are present and their associated perturbations are in good agreement with the perturbations seen on the SilE-mimicking peptides. 10 Additionally, we have detected measurable shifts in the regions H80 to M90 and M59 to M72 before the signal disappearance where 3 HxxM and 1 MxxH motifs are present. The latter one ( 59 MDQH 63 ) harbors visible signals until 4 equivalents of Ag + while the others disappeared after the addition of 2 equivalents of Ag + (Fig. 
S5B, ESI †). It is also noteworthy that the 59 MDQH 63 motif is involved in a less stable helical formation. 9 The motif H38 to M40 also binds silver but displays significant CSPs from 5 to 9 equivalents of Ag + and their corresponding signals are in a fast exchange regime. Overall, we have identified two major areas of SilE that binds silver ions with different dynamics, one in a slow exchange regime and the second one in an intermediate exchange regime. The third one is the weakest area of binding and has fast exchange regime dynamics. All the identified binding motifs of the entire SilE protein are in perfect agreement with the ones found in previous peptide-mimicking SilE 10 and suggest that (i) the individual SilE-mimicking peptides may be used individually to describe the SilE/Ag + interaction and (ii) each helical binding motif can be treated as independent segments in the SilE protein. Although the general analysis of CSP is widely used to describe the protein/ligand binding interface, the discrimination between direct binding and a local conformational rearrangement upon ligand binding may be problematic. This is particularly true for IDP proteins which have no defined structure and possibly fold to stabilize their ligands. 19 To further confirm this binding site mapping onto the SilE sequence, we have performed a differential digestion analysis monitored by mass spectroscopy. The SilE protein in the free state or in the presence of 6 equivalents of Ag + was digested and the resulting peptides were analyzed by LC-MS/MS. Among the digested peptides, 3 peptides containing the 59 MDQH 63 , 69 HETM 72 , 108 MNEH 111 , 118 HEFM 121 , and 129 HQAM 132 motifs were detected as protonated adducts from the digested solution of SilE (Table S1A, ESI †). When digestion was performed on the SilE protein in interaction with Ag + , only the peptides containing the 59 MDQH 63 , 69 HETM 72 , and 108 MNEH 111 motifs were detected (Table S1B, ESI †). This result could indicate that the strong binding between silver ions and the two motifs, 118 HEFM 121 and 129 HQAM, 132 may disturb the digestion process due to a particular structural organization (or would not allow the protonated form of the peptide to be observed). In contrast, the 59 MDQH 63 , 69 HETM 72 , and 108 MNEH 111 motifs are observed in the protonated peptides under both conditions and do not perturb the digestion process when Ag + is present. Silver-bound SilE displays secondary structure folds A series of CD spectra were recorded for the free SilE and in the presence of 1 to 8 equivalents of silver ions relative to SilE (Fig. 4(A)). The ellipticity significantly decreases at 222 nm and increases at 190 nm upon the addition of up to 4 equivalents of Ag + in solution. These data demonstrate the formation of a-helices upon the addition of silver. We can also point out the presence of a clear isodichroic point at 206 nm indicating a two-step process between the free-form and the silver-bound form for SilE. From 5 to 8 equivalents of Ag + in solution, the ellipticity only slightly changes toward more a-helix folds. This structuration is further confirmed by NMR since new peaks appeared out of the 8 to 8.7 ppm area of the 1 H-15 N HSQC spectra upon the addition of 2 to 9 equivalents of Ag + in solution, which is a strong indicator of a protein structuration. 
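A common single-wavelength way of turning ellipticity changes such as those in Fig. 4(A) into an approximate helix content is to compare the mean residue ellipticity (MRE) at 222 nm with empirical limits for a complete helix and a random coil. The sketch below uses one frequently quoted parameterisation; the limiting values, the chain-length correction and the input MRE values are literature conventions and invented numbers, not results of this study.

```python
def helix_fraction(mre_222, n_res, mre_coil=3000.0,
                   mre_helix_inf=-39500.0, k=2.57):
    """Approximate fractional helicity from the mean residue ellipticity at
    222 nm (deg cm^2 dmol^-1), using a common empirical two-state formula."""
    mre_helix = mre_helix_inf * (1.0 - k / n_res)   # finite-length correction
    f = (mre_222 - mre_coil) / (mre_helix - mre_coil)
    return min(max(f, 0.0), 1.0)

# Invented MRE values illustrating a titration from a mostly disordered state
# towards a partially helical one (123 residues, as for SilE 21-143).
for label, mre in [("0 eq Ag+", -2000.0), ("4 eq Ag+", -9000.0)]:
    print(f"{label}: f_helix ~ {helix_fraction(mre, 123):.2f}")
```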
The secondary structure propensities15 calculated from the backbone chemical shift assignment for each residue of silver-bound SilE in the presence of 6 equivalents of Ag+ in solution further confirmed the α-helical folding upon silver addition (Fig. 4(B)). The folded area encompasses more residues than in the free form (segments 110-123 and 128-142 vs. 112-117 and 132-139 without silver), and the α-helix propensity increases from 18% in the free state up to 65% in the bound state. To gain insight into the structuration of the SilE part that undergoes strong NMR signal attenuation, we designed a truncated construct, SilE57-95. A series of CD experiments confirmed the formation of an α-helix upon silver binding for this shorter construct (Fig. S6, ESI†), as observed for full-length SilE. We can also point out the presence of a clear isodichroic point at 206 nm, indicating a two-state process between the apo form and the silver-bound form of SilE57-95. To gain further insight into the structural changes and their diversity as a function of the stoichiometry of the complexes, ion mobility coupled to mass spectrometry (IM-MS) spectra were recorded using solutions with 0 to 12 equivalents of silver ions relative to the SilE protein. Two factors lead to the sorting of the [SilE:nAg+] complexes in IM-MS (with n the number of bound Ag+): the mass of the complexes and their arrival times. The latter can be linked to the overall shape of the complexes. We detect species from n = 0 to 6, which is consistent with the HRMS data, considering that low-abundance complexes with n = 7 and 8 were not detected owing to the lower sensitivity of the IM-MS instrument compared with that of the HRMS instrument. The species with n = 1 was not detected with sufficient intensity, whatever the protein:silver ratio in the solution. The arrival time distributions (ATDs) were extracted for the different complexes detected. A selection of ATDs for different stoichiometries is shown in Fig. 4(C). All ATDs are clearly bimodal, indicating the coexistence of two ion populations with different mobilities. We interpreted these two populations as two conformational families, since no evidence for isobaric SilE oligomers with higher charge states was found in the mass spectra. As generally observed for proteins, each peak is broader than expected from ion diffusion,20 which indicates that each family is a collection of different conformers with closely related structures.21 The overall shape of the ATD would be compatible with a relatively slow interconversion between the two families during the course of the separation, given the plateau between the two peaks.22 However, no sign of this process was observed when selecting either of the populations in a tandem-IMS scheme.23 Consequently, the two observed peaks can be attributed to the existence of two distinct conformational families, either pre-existing in solution or, more likely, resulting from structural reorganization upon desolvation. In the following, we focus on the relative intensity of the two main populations. The ATDs in Fig. 4(C) clearly indicate that the relative intensity of the two forms depends on the number of Ag+ bound in the complex. The peak at lower arrival times corresponds to species with a relatively more compact structure, with a drift-tube collision cross section in helium (DTCCSHe) of 1385 Å², and the second peak corresponds to an extended form characterized by a DTCCSHe value of 1693 Å².
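The relative weight of the two conformer families can, in principle, be quantified by deconvoluting each bimodal ATD into two components, which is the kind of ratio discussed next. The Python sketch below, using synthetic data and a simple two-Gaussian model, only illustrates the idea; it is not the processing actually applied to the experimental ATDs, and peak shapes in real drift-tube data are not necessarily Gaussian.

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, w1, a2, t2, w2):
    # Sum of two Gaussian components (compact + extended conformer families)
    return (a1 * np.exp(-((t - t1) / w1) ** 2) +
            a2 * np.exp(-((t - t2) / w2) ** 2))

t = np.linspace(8.0, 14.0, 200)                       # arrival time (ms), synthetic axis
y = two_gaussians(t, 1.0, 9.8, 0.4, 0.6, 11.9, 0.5)   # synthetic bimodal ATD
y += np.random.default_rng(0).normal(0.0, 0.01, t.size)

popt, _ = curve_fit(two_gaussians, t, y, p0=[1, 10, 0.5, 0.5, 12, 0.5])
area_compact = popt[0] * abs(popt[2])    # Gaussian area scales as amplitude x width
area_extended = popt[3] * abs(popt[5])
print("compact fraction ~", round(area_compact / (area_compact + area_extended), 2))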
Both forms are nevertheless detected for bare SilE, with a predominance of the extended form. As the number of bound Ag+ increases, the relative population of the compact form increases and becomes predominant, as seen in Fig. 4(D). The data represent the average ratio obtained from the ATDs of the same ionic species originating from solutions with different relative stoichiometries in Ag+. Importantly, the relative intensity of the two forms does not seem to be affected by the relative concentration of Ag+ in the solution, but depends only on the stoichiometry of the complex. Consequently, the observed evolution is not driven by a change in the conformational preferences of SilE due to a change in its environment, but rather reflects a structural change induced by Ag+ binding. As already mentioned, the gas-phase structures of the complexes probably differ from their original solution structures. However, the observed ATDs can be understood as an indirect mapping of the conformational space sampled by the complexes in solution.24,25 Following this line, each gas-phase conformer corresponds to the end point of a distinct portion of the solution-phase conformational space after conformational relaxation. The observed changes can then be attributed to a modification of the solution-phase conformational space of SilE upon silver binding. In this context, we tentatively interpret the relative increase of the population of the most compact conformer for complexes with a higher silver content as a global compaction of the complexes in solution. This interpretation is supported by the small-angle X-ray scattering (SAXS) data of free SilE and of SilE + 6 equivalents of Ag+ in solution (Fig. S7, ESI†).26 Through Guinier plot analysis, we derived a radius of gyration Rg = 37.7 ± 0.7 Å for free SilE and Rg = 32.9 ± 0.2 Å for SilE + 6 equivalents of Ag+. We can therefore conclude that the structure is more compact in the presence of silver, as observed in the IM-MS experiments. At this level, it is important to clarify the meaning of compactness according to the IM-MS and SAXS techniques: the compactness is due to the formation of α-helices and is different from a globular structure, which refers to a tertiary structure. Interestingly, HRMS reveals an inverse relationship between the number of silver ions observed on SilE and the charge state under which it is detected. This is independent of the concentration of Ag+ in solution and is illustrated in Fig. S2B (ESI†) in the case of 5 equivalents of Ag+: at high charge states (18+, 19+), the dominant forms are the 1:0 and 1:1 SilE:Ag+ complexes, i.e. relatively depleted in silver. In contrast, at low charge states (9+, 10+, 11+), the dominant complexes are 1:3 and 1:4. This indicates that SilE bound to many silver ions is less prone to bearing many charges. A general trend in MS is that disordered species can unfold and extend to accommodate more charges during electrospray ionization and tend to be observed with broader and higher charge-state distributions. Along this line, the HRMS observation suggests that silver-bound SilE is generally less disordered than silver-free SilE, which is consistent with the NMR and CD data. It is also consistent with the IM-MS and SAXS conclusion that silver-bound SilE prefers more compact structures. Overall, a strong indication is that silver binding yields important structural adjustments in SilE.
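For orientation, the radii of gyration quoted above come from a Guinier analysis, i.e. a linear fit of ln I(q) versus q² at low q. The sketch below uses synthetic numbers and the usual validity range (q times Rg below about 1.3); it is a reminder of the procedure, not the actual data treatment performed on the SilE curves.

import numpy as np

# Synthetic low-q SAXS points generated with Rg = 33 Å (placeholder values)
q = np.array([0.010, 0.012, 0.014, 0.016, 0.018, 0.020])   # in 1/Å
intensity = 1000.0 * np.exp(-(33.0 ** 2 / 3.0) * q ** 2)

# Guinier approximation: ln I(q) ~ ln I(0) - (Rg^2 / 3) * q^2
slope, intercept = np.polyfit(q ** 2, np.log(intensity), 1)
rg = np.sqrt(-3.0 * slope)
print("Rg ~", round(rg, 1), "Å;  I(0) ~", round(np.exp(intercept), 1))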
Complementary experimental observations finally allow us to draw a global, multiscale picture of the changes in the SilE structure upon silver binding. At the secondary structure level, the NMR and CD data show that α-helices form as the relative silver concentration increases. Moreover, the SAXS, IM-MS, and HRMS data show that this local structuration is accompanied by a global compaction. Finally, the IM-MS and HRMS data provide evidence that this compaction correlates with the number of silver cations bound in the complex rather than with the relative silver concentration.

Silver-bound SilE complexes adopt different conformations

To understand how SilE may accommodate up to eight silver ions, we built a putative model of SilE based on ColabFold,27 running the AlphaFold prediction protocol.28 Indeed, AlphaFold has demonstrated its capability to accurately predict protein structures by using deep learning methods to analyze coevolutionary information. Based on the SilE sequence, we modeled five structures that adopt either a compact or an elongated organization. All the predicted structures display four helical segments that perfectly match the structures of the SilE-mimicking peptides (Fig. S8, ESI†).10 However, we can rule out the AlphaFold structures that exhibit the weakest pLDDT for the helical regions (Fig. S9A and C, ESI†). The rank 1 structure also suggests that AlphaFold is quite confident about the relative position of α1 with respect to α2, and of α3 with respect to α4, owing to its low PAE (Fig. S9B, ESI†). The relative positioning of the different helices along the SilE sequence is also confirmed by our spin relaxation measurements on silver-bound SilE (Fig. 5). From residues T21 to T52 and Q94 to L105, the R1, R2, and hetNOE parameters exhibit values similar or close to those seen for SilE in the free state. This result indicates that these parts remain as flexible as they were in the free form. For the predicted helical regions α3 and α4, our data show a slight decrease in the average R1 from the free to the silver-bound state (1.67 to 1.53 s⁻¹, respectively). This decrease is concomitant with a strong increase of the average R2 and hetNOE in the silver-bound state compared with the free state (from 5.0 to 10.7 s⁻¹ and from 0.32 to 0.63, respectively). This effect can be explained by a drastic change in the microdynamic parameters, with a strong increase of the order parameter S² along with a decrease of the local motion correlation time τloc (Fig. S10, ESI†). The modification of these two parameters undoubtedly reflects the rigidification and formation of helices α3 and α4. Despite the absence of NMR signals over most of the α1/α2 regions, a tendency toward a significant increase of the R2 and hetNOE parameters is noticeable, further denoting helical formation and strong conformational exchange. This is further supported by the CD spectra recorded for the SilE57-95 construct, which indicate the formation of an α-helix in this region (Fig. S6, ESI†). The fact that the hetNOE values increase locally, exactly following the secondary structure rather than increasing globally, also led us to conclude that silver-bound SilE has no defined tertiary structure. Therefore, the prediction of the different helical regions, together with our relaxation data, is in good agreement with the IM-MS and HRMS results, which conclude that SilE becomes more compact when bound to silver ions.
At this level, it is important to highlight that the compaction arises from the silver-induced helical folding of SilE relative to the disordered state of free SilE. Whether SilE forms a globular or an elongated structure remains an unresolved conundrum, but a realistic low-resolution tertiary structure may be inferred from the SAXS data. Indeed, AlphaFold predicts the correct secondary structure of SilE, with a perfect match of the four helices, but is not suitable for establishing the highly flexible tertiary structure of SilE in the presence of Ag+.29 Therefore, we used MultiFoXS30 to compute N-state (N = 1 to 5) models of the [SilE:nAg+] complex, combined with a divide-and-conquer approach. As a starting point, we used the compact rank 1 [SilE:nAg+] complex structure predicted by AlphaFold that provides the lowest pLDDT, and we progressively increased the number of possible flexible regions until reaching full flexibility of the SilE intervening linkers (see Table S2, ESI† for the detailed flexible regions under study). This strategy allows the sampling of an extensive conformational space accessible to SilE. In the case of the rank 1 globular structure, the only flexible part allowed was defined as the N-terminus. As can be seen in Fig. S11A (ESI†), MultiFoXS could not find a suitable structural ensemble capable of reproducing the SAXS data and gives a χ score of 37.4 (Table S2, ESI†). Additionally, the associated radius of gyration equals 24.6 ± 2.0 Å, in complete disagreement with the value experimentally determined from the SAXS data (32.9 ± 0.2 Å). This important result indicates that the [SilE:nAg+] complex does not adopt a globular structure in the liquid state and that the rank 1 structure predicted by AlphaFold contradicts our experimental data. To maintain the positioning of the α1,2 and α3,4 pairs, we then allowed SilE to remain flexible in the region spanning the linker between α2 and α3. The χ score decreases from the one-state to the three-state model (from 7.3 to 6.3, respectively), while a further increase in the number of conformations to 4 and 5 states did not improve the χ score. It is also noteworthy that the χ score decreases significantly compared with the one obtained for the globular structure, and therefore supports the conclusion that the [SilE:nAg+] complex adopts a more elongated conformation in the liquid state (Fig. S11B, ESI†). Thus, we added another degree of flexibility by allowing the linker within the α1,2 or the α3,4 region to move freely. While the χ score decreases significantly, we did not observe any difference whether the flexibility affects the linker between α1 and α2 or between α3 and α4, with a χ score of 1.9 for this ensemble of structures in the case of a 3-state model. In both cases, the different ratios that represent the conformational sampling are rather similar (Fig. S11C and D, ESI†). The derived average Rg for the structure with a flexible α1,2 intervening linker is 28.9 ± 1.4 Å, compared with an average Rg of 31.9 ± 4.0 Å for the structure exhibiting flexibility in the α3,4 linker. Finally, the χ score decreases to 0.9 for a 3-state model when all the linkers of the [SilE:nAg+] complex structure are allowed to move freely (Fig. 6). For this 3-state model, the average Rg of 33.6 ± 3.7 Å (Table S2, ESI†) is in good agreement with the experimental Rg derived from the SAXS data (32.9 ± 0.2 Å).
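As a rough illustration of what such a multi-state score measures, the sketch below computes a chi-like agreement value between an experimental SAXS curve and the weighted average of several conformer profiles. The arrays are placeholders and the scaling/weighting scheme is a simplification of what MultiFoXS actually optimizes; it is meant only to convey the idea behind the reported scores.

import numpy as np

def ensemble_chi(i_exp, sigma, i_models, weights):
    # i_models: (n_states, n_q) theoretical profiles; weights sum to 1
    i_ens = np.average(i_models, axis=0, weights=weights)
    # Optimal linear scale factor between model and experiment
    scale = np.sum(i_exp * i_ens / sigma ** 2) / np.sum(i_ens ** 2 / sigma ** 2)
    residuals = (i_exp - scale * i_ens) / sigma
    return np.sqrt(np.mean(residuals ** 2))

q = np.linspace(0.01, 0.3, 50)
i_exp = np.exp(-40.0 * q ** 2) + 0.01                 # synthetic "experimental" curve
sigma = np.full_like(q, 0.01)                         # uniform synthetic errors
i_models = np.vstack([np.exp(-30.0 * q ** 2),         # two placeholder conformers
                      np.exp(-55.0 * q ** 2)])
print(round(ensemble_chi(i_exp, sigma, i_models, [0.5, 0.5]), 2))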
This result allows an accurate mapping of the conformational landscape of the [SilE:nAg+] complex and supports the view that it is best described by different conformations co-existing in solution, from elongated to intermediate states, as concluded from the IM-MS data analysis. Overall, the combined use of AlphaFold structure prediction, the NMR-derived structures of silver-bound SilE peptides, and the SAXS data supports the conclusions that (i) the secondary structure of the [SilE:nAg+] complex exhibits four helical regions and (ii) different structural organizations of SilE co-exist in solution, in which Met and His residues may be favorably oriented and accessible to bind silver ions.

Conclusions

Several metals remain essential to life in extant organisms and play a diversity of roles in many different physiological processes. Sometimes, the toxicity of a given element may vary significantly because of speciation, so different chemical species containing the same metal may have a very different impact on living organisms. In the case of silver, Gram-negative bacteria have developed a resistance system based on a specific efflux pump dedicated to the removal of silver ions. Within this system, SilE has been recognized as an essential component for the appropriate function of the efflux pump, sequestering silver ions. Within the vast panorama of known metal-binding proteins, SilE displays specific features in the sense that it is disordered in its free state and is able to bind several silver ions while folding upon binding. In the present article, we have advanced our knowledge of the previously unknown SilE structure, dynamics, and interaction with silver ions. By combining experimental methods and AlphaFold prediction, we have identified up to eight binding sites that localize to four well-defined helical segments. Among these binding sites, four exhibit a strong interaction with silver ions. In particular, we have demonstrated that SilE does not adopt a globular structure upon silver binding but rather samples a large conformational space, from elongated to more compact structures. This significant result deviates from the hypothesis established previously9 and leads us to ask an open question: how does SilE accommodate up to eight silver ions, and what is the advantage of such a structural organization? To answer this crucial question, we must turn to the state of the art and recall that SilE is synthesized only during growth in the presence of silver4 and is mandatory to avoid disruption of the efflux pump machinery. At this level, we may hypothesize that SilE acts as a regulator that retains silver ions through His and Met residues and prevents metal saturation in the periplasm. To be efficient, this process therefore requires high solvent accessibility of these residues. To lend credence to our hypothesis, we calculated the solvent accessible surface area (SASA) for (i) the single globular structure and (ii) the ensemble of three structures found in the case of fully flexible SilE (Fig. 6). As seen in Table S3 (ESI†), the total percentage of SASA contrasts strongly between the two cases. For the globular case, the average SASA of the interacting His and Met residues is 37%, while it reaches 70% for the ensemble of elongated structures. Consequently, this structural organization facilitates silver binding through a much higher accessibility of the involved His and Met residues, whereas the globular structure strongly prevents close contact with silver ions.
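To make the accessibility comparison above concrete, the toy sketch below computes the average relative SASA of a set of binding His/Met residues for a globular model and for an extended ensemble. The per-residue areas and the reference maximum areas are invented placeholders, not values from Table S3; the point is only to show how a figure of the 37% versus 70% type is obtained.

# Approximate reference (maximum) accessibilities in Å^2 (assumed placeholder values)
MAX_SASA = {"HIS": 194.0, "MET": 203.0}

def mean_relative_sasa(per_residue_sasa):
    # per_residue_sasa: list of (residue_type, sasa_in_A2) for the binding residues
    rel = [sasa / MAX_SASA[res] for res, sasa in per_residue_sasa]
    return 100.0 * sum(rel) / len(rel)

globular_model = [("HIS", 60.0), ("MET", 85.0), ("HIS", 72.0), ("MET", 70.0)]
extended_ensemble = [("HIS", 135.0), ("MET", 150.0), ("HIS", 128.0), ("MET", 145.0)]
print("globular :", round(mean_relative_sasa(globular_model), 1), "%")
print("ensemble :", round(mean_relative_sasa(extended_ensemble), 1), "%")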
The most elongated structure is a remarkable way to bind silver and allows the bacteria to remain active even at high silver concentrations. To the best of our knowledge, this is the first example of an intrinsically disordered protein that folds into helices to bind several metal ions at the same time. For a deeper understanding of the cytotoxic mechanism of silver and of how its antimicrobial properties are used in medicine, we refer the reader to an extensive review.3 Nevertheless, some Gram-negative bacteria have developed resistance to silver and impair the appropriate functioning of its antimicrobial properties. Thus, we may hypothesize that this resistance process is triggered by a high silver concentration in the cell. According to our results, SilE can sequester up to eight silver ions. The SilCBA pump is also located in the periplasm and oversees silver efflux out of the cell. Since we have identified two different binding affinities for silver ions (4 strong and 3-4 labile sites), this allows a fine-tuning of silver release into the periplasm before extrusion by the efflux pump. This last hypothesis needs to be confirmed by further studies of the different interacting partners in the whole system.

Author contributions

MH coordinated and managed the overall project, and wrote the manuscript. MH analyzed NMR spin relaxation. YM engineered the SilE plasmid, optimized the production and purification protocols, and performed NMR assignments and titration analysis. CA and MM prepared the SilE samples and performed the CD analysis. AH and CD performed and analyzed the CE-ICP-MS experiments. FC and CCZ performed and analyzed the IM-MS experiments. MG performed the HRMS experiments and, in collaboration with LMA, analyzed the MS data. OW performed the MultiFoXS analysis by combining the AlphaFold simulation and SAXS data interpretation. YM, FC, LMA, MG, AH, CCZ and OW assisted in writing the manuscript, reviewed the final version, and approved the content and submission.

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

INFRANALYTICS FR2054 for conducting the research is gratefully acknowledged. The 900 MHz and the high-pressure NMR experiments were performed at IMEC-IS-UCCS Lille and ICSN Gif sur Yvette, respectively.
2022-12-18T16:02:57.409Z
2023-01-09T00:00:00.000
{ "year": 2023, "sha1": "b9a5df9319f9f5fc6c87682a4134cdeeb8d707b9", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2023/cp/d2cp04206a", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "55e1eed2d5117c261d9f1944e13778b9743693b5", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
4458871
pes2o/s2orc
v3-fos-license
Caspase-1 regulates Ang II-induced cardiomyocyte hypertrophy via up-regulation of IL-1β Cardiac hypertrophy is a compensatory response to stress or stimuli, which results in arrhythmia and heart failure. Although multiple molecular mechanisms have been identified, cardiac hypertrophy is still difficult to treat. Pyroptosis is a caspase-1-dependent pro-inflammatory programmed cell death. Caspase-1 is involved in various types of diseases, including hepatic injury, cancers, and diabetes-related complications. However, the exact role of caspase-1 in cardiac hypertrophy is yet to be discovered. The present study aimed to explore the possible role of caspase-1 in pathogenesis of cardiac hypertrophy. We established cardiac hypertrophy models both in vivo and in vitro to detect the expression of caspase-1 and interleukin-1β (IL-1β). The results showed that caspase-1 and IL-1β expression levels were significantly up-regulated during cardiac hypertrophy. Subsequently, caspase-1 inhibitor was co-administered with angiotensin II (Ang II) in cardiomyocytes to observe whether it could attenuate cardiac hypertrophy. Results showed that caspase-1 attenuated the pro-hypertrophic effect of Ang II, which was related to the down-regulation of caspase-1 and IL-1β. In conclusion, our results provide a novel evidence that caspase-1 mediated pyroptosis is involved in cardiac hypertrophy, and the inhibition of caspase-1 will offer a therapeutic potential against cardiac hypertrophy. Introduction Cardiac hypertrophy is a common response of heart to a variety of stimuli, which could be divided into physiological hypertrophy and pathological hypertrophy [1,2]. Pathological hypertrophy, is a major change of heart disease and considered as a critical risk factor of heart failure and is often associated with arrhythmia [3,4]. The cardiac dysfunction reflects the distinct pathogenesis of pathological cardiac hypertrophy. Although many pathways and targets have been reported to be effective, pathological cardiac hypertrophy inevitably leads to the unfavorable outcomes of heart failure [5,6]. Therefore, it is important to find novel therapeutic targets for hypertrophy. Pyroptosis is a kind of caspase-1 or caspase-11-dependent programmed cell death [7,8]. Different from other programmed cell deaths, pyroptosis is a consequence of caspase-1 or caspase-11 activation in inflammasomes [9][10][11][12]. During pyroptosis, the activation of caspase-1 could cleave pro-interleukin-1β (IL-1β) to bioactive IL-1β [13]. Studies have demonstrated that pyroptosis is involved in various types of diseases, including hepatic injury, cancers, and diabetes-related complications [14][15][16]. For cancer cells, loss of caspase-1 gene expression was observed in human prostate cancer and human hepatocellular carcinoma. The activation of pyroptosis may promote cell death and thus exert anticancer properties [15,17]. Although the importance of pyroptosis was identified in different kinds of diseases, little was known about the role of pyroptosis in cardiac hypertrophy. To our knowledge, this is the first study to demonstrate that caspase-1-mediated pyroptosis plays a role in the pathogenesis of the cardiac hypertrophy and caspase-1 inhibitor AC-YVAD-CMK can mitigate cardiac hypertrophy induced by angiotensin II (Ang II). Ethics statement The study was approved by the ethics committee of Harbin Medical University, and all experimental procedures were approved by the Animal Care and Use Committee of Harbin Medical University. 
Our study was performed in accordance with the recommendations of the Guide for the Care and Use of Laboratory Animals, published by the US National Institutes of Health (NIH Publication number 85-23, revised 1996). Mice model of pressure overload-induced cardiac hypertrophy Pressure overload was imposed on the heart of mice by transverse aortic constriction (TAC). Adult mice (6-8 weeks) were anesthetized and placed in supine position and a midline cervical incision was made to expose the trachea. Then, the chest of mouse was opened and thoracic aorta was identified. A 5-0 silk suture was placed around the transverse aorta and tied around a 26-gauge blunt needle, which was subsequently removed. The chest was closed and the animals were kept ventilated until recovery of autonomic breath. Twenty-four mice were randomly divided into Sham and TAC groups. After treatment, the cardiac tissues were obtained for the following detection. Primary culture of neonatal mouse cardiomyocytes Neonatal mouse cardiomyocytes were isolated from 1 to 3 days old C57BL/6 mouse hearts. Briefly, hearts were rapidly removed from neonatal mice and washed to remove blood and debris. Whole hearts were then cut into small pieces and dissociated into single cells by digestion with trypsin, EDTA solution (Solarbio, Beijing, China). The suspension was collected and added to Dulbecco's modified Eagle's medium (DMEM) nutrient mixture (HyClone, Logan, UT, U.S.A.) containing 10% FBS (BI, Kibbutz Beit Haemek, Israel) to end digestion. The above steps were repeated until all the tissues were digested. The collected cell suspension was filtered and centrifuged at 1500 rpm for 5 min to obtain cells. After centrifugation, cells were suspended in DMEM (HyClone, Logan, UT, U.S.A.) with 10% FBS, and precultured in humidified incubator (95% air, 5% CO 2 ) for 1.5 h to obtain cardiac fibroblasts for their selective adhesion. Then, the suspended cardiomyocytes were plated in another dish. Culture medium was renewed after 48 h. Ang II-induced cardiomyocyte hypertrophy in vitro Ang II treatment was used to induce cardiomyocyte hypertrophy. In our experiments, cardiomyocytes were incubated with 100 nmol/l Ang II for 48 h. The serum-free medium containing Ang II was changed every 24 h. Cardiomyocytes were prepared for immunofluorescence staining, real-time PCR, and Western blot assays. For immunofluorescence staining, monoclonal antibody against sarcomeric α-actinin (Sigma, St. Louis, Missouri, U.S.A.) was added at dilutions of 1:200. Nuclear staining was performed with Hoechst (Sigma, St. Louis, Missouri, U.S.A.). Immunofluorescence was examined under a fluorescence microscope (Zeiss, Heidenheim, Baden-Wuerttemberg, Germany). The surface areas of individual cardiomyocytes were measured using Image-Pro Plus software, which was normalized to control group. To avoid human error, at least five independent zones were selected in one slide and the quantitation was performed blinded by two individuals. Real-time PCR The total RNA samples were extracted from cardiomyocytes or cardiac tissues using the TRIzol reagent (TaKaRa, Otsu, Shiga, Japan). Total RNA for 500 ng was reverse transcribed to cDNA using Reverse Transcriptase Master Kit (Toyobo, Osaka, Japan) according to the manufacturer's instructions. Real-time PCR was performed on ABI 7500 fast system (Applied Biosystems, Carlsbad, CA, U.S.A.) using SYBR Green I (Toyobo, Osaka, Japan). GAPDH served as an internal control. 
The relative quantitation of gene expression was determined using the 2^-ΔΔCt method.

Hematoxylin and Eosin staining

Cardiac tissues were fixed in 4% paraformaldehyde followed by dehydration. The processed samples were embedded in paraffin and cut into 5-μm thick sections using tissue-processing equipment. The sections were deparaffinized and stained with Hematoxylin and Eosin (HE) for histological analysis.

Immunohistochemistry

Cardiac tissues were fixed with 4% buffered paraformaldehyde, dehydrated, and embedded in paraffin. Five-micrometer thick sections were deparaffinized, rehydrated, and rinsed in distilled water. Antigen unmasking was carried out by water vapor heating in citrate buffer for 20 min. All sections were immunostained with the primary antibodies against caspase-1 and IL-1β at 4 °C overnight. After incubation with the secondary antibody, the sections were stained with diaminobenzidine.

Statistical analysis

All the experiments were repeated five times, and the results shown are from one representative experiment. Data are expressed as mean ± S.E.M. and were analyzed with SPSS 13.0 software. Statistical comparisons between two groups were performed using Student's t test. Statistical comparisons among multiple groups were performed using ANOVA followed by Bonferroni's post hoc test. A two-tailed P<0.05 was considered statistically significant. Graphs were generated using GraphPad Prism 5.0.

Cleaved caspase-1 and IL-1β expression levels were up-regulated in the myocardium of mice in response to acute pressure overload

The results of HE staining showed that TAC induced cardiac hypertrophy in mice (Figure 1A). High expression of cleaved caspase-1 was observed in the TAC group compared with the sham group. Consistently, the downstream factor cleaved IL-1β was also up-regulated in the myocardium of TAC-operated mice (Figure 1B). Real-time PCR assays showed that caspase-1 and IL-1β mRNA expression levels were up-regulated in the TAC group compared with the control group (Figure 1C,D). Correspondingly, Western blot assays further confirmed the high protein expression levels of cleaved caspase-1 and its downstream factor cleaved IL-1β in the TAC group compared with the control group (Figure 1E,F). These results verified that the activation of pyroptosis is associated with cardiac hypertrophy.
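As a brief aside on the 2^-ΔΔCt quantification used for the mRNA measurements above, the calculation reduces to two normalization steps (target versus GAPDH, then treated versus control). The Ct values in this minimal Python sketch are invented for illustration and are not data from this study.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # ΔCt of the treated sample and of the control group, each normalized to the
    # reference gene (here GAPDH), then ΔΔCt and the fold change 2^-ΔΔCt
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# e.g. hypothetical caspase-1 Ct values in a TAC heart versus a sham heart
print(round(relative_expression(24.1, 18.0, 25.6, 18.2), 2))   # ~2.5-fold up-regulation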
Immunofluorescence staining showed that after treatment with caspase-1 inhibitor AC-YVAD-CMK, the surface areas of cardiomyocytes were significantly decreased compared with Ang II-treated group ( Figure 4A,B). The mRNA expression levels of hypertrophy related markers, including atrial natriuretic peptide (ANP), brain natriuretic peptide (BNP), and β-myosin heavy chain (β-MHC) were also down-regulated after caspase-1 inhibitor AC-YVAD-CMK treatment in cardiomyocytes ( Figure 4C-E). Discussion Cardiac hypertrophy is an independent risk factor for cardiovascular events [18]. Therefore, exploring the molecular mechanisms of cardiac hypertrophy are vitally important. Pyroptosis is a caspase-1-dependent pro-inflammatory programmed cell death. Different from other programmed cell deaths, pyroptosis undergoes membrane blebbing and produces pyroptotic bodies prior to plasma membrane rupture [7]. Several studies demonstrated that pyroptosis plays roles in several types of diseases [16,19,20]. Caspase-1 plays an important role in regulation of cardiomyocyte biology. It was activated in hyperglycemia and doxorubicin-induced cardiac injury [16,21]. It also mediates cardiomyocyte apoptosis contributing to the progression of heart failure [22]. In addition, plenty of studies evidenced the important role and therapeutic potential of its downstream factor IL-1β in cardiac hypertrophy [23,24]. However, little was known about the role of caspase-1-induced pyroptosis in cardiac hypertrophy. The aim of the present work was to investigate the effect of cleaved caspase-1-mediated pyroptosis in cardiac hypertrophy. TAC was used to establish a mice model of cardiac hypertrophy and cleaved caspase-1 and IL-1β expression levels were detected. The result showed that cleaved caspase-1 and IL-1β expression levels were significantly up-regulated in hypertrophic myocardium from mice. Similar results were obtained in vitro. Subsequently, we observed the effect of caspase-1 inhibitor on cardiac hypertrophy. Co-administration of caspase-1 inhibitor AC-YVAD-CMK could attenuate the pro-hypertrophic effect of Ang II and inhibit the abnormal expression of cleaved caspase-1 and IL-1β. These findings suggest that cleaved caspase-1-mediated pyroptosis participates in cardiac hypertrophy. Therefore, inhibition of caspase-1 may be a new strategy for prevention and treatment of cardiac hypertrophy. Even though plenty of therapeutic targets have been identified, few of them were developed as a drug. The development of caspase-1 inhibitor was largely attributed to the researches in epilepsy and HIV infection [25,26]. VX-765 is an orally active caspase-1 inhibitor, which is well-tolerated in a 6 weeks long-phase II trial in patients with epilepsy. It may be a promising candidate for treatment of cardiac hypertrophy through the inhibition of caspase-1 and IL-1β. In conclusion, our results provide a novel evidence that caspase-1-mediated pyroptosis plays an important role in cardiac hypertrophy, and the inhibition of caspase-1 will offer a therapeutic potential against cardiac hypertrophy.
2018-04-03T05:26:47.499Z
2018-02-12T00:00:00.000
{ "year": 2018, "sha1": "2ab387783c8c051bc0821f19c66d26b09bb7cbff", "oa_license": "CCBY", "oa_url": "https://portlandpress.com/bioscirep/article-pdf/38/2/BSR20171438/482808/bsr-2017-1438.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16b883ebff3eff257801bbba97c5e50988958abd", "s2fieldsofstudy": [ "Biology", "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
250329771
pes2o/s2orc
v3-fos-license
Multi-Omics Techniques Make it Possible to Analyze Sepsis-Associated Acute Kidney Injury Comprehensively

Sepsis-associated acute kidney injury (SA-AKI) is a common complication in critically ill patients, with high morbidity and mortality. SA-AKI varies considerably in disease presentation, progression, and response to treatment, highlighting the heterogeneity of the underlying biological mechanisms. In this review, we briefly describe the pathophysiology of SA-AKI, biomarkers, reference databases, and available omics techniques. Advances in omics technology allow for comprehensive analysis of SA-AKI, and the integration of multiple omics provides an opportunity to understand the information flow behind the disease. These approaches will drive a shift in the current paradigms for prevention, diagnosis, and staging, and provide the renal community with significant advances in precision medicine for the analysis of SA-AKI.

INTRODUCTION

The development of SA-AKI has attracted wide attention in recent years but remains poorly understood, and its definition covers a heterogeneous group of diseases (1). In 2016, the Third International Consensus Definitions for Sepsis and Septic Shock (Sepsis-3) were proposed (2). Since then, SA-AKI has generally been defined as sepsis or septic shock involving the kidney, resulting in a progressive decline in renal function while meeting the Kidney Disease: Improving Global Outcomes (KDIGO) criteria for AKI and excluding other possible causes of renal impairment (3,4). AKI and sepsis are defined using clinical criteria (5). AKI is defined as a loss of renal function, an increase in serum creatinine (SCr) levels, and/or decreased urine output (6); sepsis is defined as a life-threatening organ dysfunction caused by a dysregulated host response to infection (7). Septic shock, a subset of sepsis, is associated with circulatory, cellular, and metabolic abnormalities and with a higher risk of death than sepsis alone. Patients with septic shock, characterized by hypotension, can be clinically identified by the need for vasopressor agents to maintain a mean arterial pressure ≥65 mmHg together with a serum lactate >2 mmol/L (>18 mg/dL) in the absence of hypovolemia (8). Currently, little is known about the epidemiology of SA-AKI. Adhikari et al. (9) extrapolated from incidence rates in the United States to estimate that there are as many as 19 million cases of sepsis per year worldwide. The annual incidence of SA-AKI may therefore be about 6 million cases, or close to 1 case per 1,000 people, but the actual incidence is likely to be much higher. Sepsis has long been recognized as the most common cause of AKI in critically ill patients, and both sepsis and its treatment may damage the kidneys. For example, a multinational, multi-center, prospective epidemiological survey showed that sepsis accounted for 45%-70% of all AKI cases in intensive care units (10). Conversely, AKI from any source is associated with a higher risk of sepsis: Mehta et al. (11) found that 40% of severely ill patients developed sepsis after AKI, suggesting that AKI may increase the risk of sepsis. As individual syndromes, sepsis and AKI predispose hosts to each other, and it is often difficult to determine the exact timing of the onset of the two syndromes clinically. Our understanding of the pathogenesis of SA-AKI is limited. Much of the current understanding of SA-AKI has been extrapolated from animal models of sepsis, in vitro cell studies, and postmortem observations in humans with sepsis.
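Since the KDIGO criteria mentioned above underpin most AKI definitions in this field, a highly simplified sketch of the creatinine-based staging logic is given below. It ignores the urine-output criteria, the 48-hour/7-day timing windows, and several qualifiers of the full KDIGO 2012 definition, so it is an orientation aid rather than a clinically usable implementation.

def kdigo_stage_from_creatinine(scr, baseline, abs_rise_48h=0.0, on_rrt=False):
    # scr and baseline in mg/dL; abs_rise_48h = absolute SCr increase within 48 h
    ratio = scr / baseline
    if on_rrt or ratio >= 3.0 or scr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or abs_rise_48h >= 0.3:
        return 1
    return 0   # no AKI by the creatinine criteria alone

print(kdigo_stage_from_creatinine(scr=2.1, baseline=0.9))   # -> stage 2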
Postmortem kidney biopsy samples from patients provide invaluable information about proper clinical conditions. The National Institutes of Health has launched programs such as the Kidney Precision Medicine Program to expand our understanding of AKI by obtaining kidney biopsies from patients with AKI to address AKI research's technical and ethical limitations. SA-AKI animal models provide a wealth of observational data for complex and invasive measures not available in humans, such as monitoring renal blood flow (RBF), microvascular flow, cortical and medullary perfusion, oxygenation, and renal tubule health. SA-AKI Models In mammals represented by mice and rats, there are three main SA-AKI modeling methods (20): (1) Direct endotoxin administration, in which lipopolysaccharide (LPS) is directly injected into the peritoneum or intravenously. LPS is a cell wall component of Gram-negative bacteria; (2) Cecum ligation puncture (CLP) or intraperitoneal implants of excrement and urine, similar model USES the ascending colon bracket, it allows the feces from the intestinal leakage to the peritoneum, CLP model induced sepsis is relatively easy, but with the severity of the sepsis, the amount and type of bacteria release is different also, does not necessarily lead to AKI. (3) The bacterial implant model is where bacterial impregnation is placed at the desired location (within the peritoneum or blood vessels), most commonly with fibrin clots. The most widely used animal models are the first two, in which inflammation occurs, microvascular permeability increases, and white blood cells are recruited; Hemodynamic parameters changed, GFR decreased, and renal function deteriorated. In all mammals (but most commonly used in large mammals (pigs and sheep) and zebrafish), direct bacterial delivery of live bacteria from Gramnegative and Gram-positive bacteria directly to the host (vein, peritoneal, subcutaneous, or directly into organs) is commonly used (20). In a recent prospective controlled study, the septic shock sheep model was widely used to study SA-AKI in vivo using Gram-negative bacteria and to assess renal function, histology, and glomerular ultrastructure in patients with septic shock (21). It overcomes the shortcomings of the endotoxin model and supports the view that early SA-AKI represents renal insufficiency. The ideal animal model of sepsis should consistently translate relevant information from animal studies into the human condition. Rodents are small and relatively inexpensive, but the correlation between the mouse endotoxemia model and human gene change was very low and almost random (R 2 = 0.01) (22). Compared with small animals, large animals such as pigs have similar cytokine and immune cell profiles and exhibit the characteristic symptoms of human infection (23). In addition, pigs are anatomically and physiologically similar to human kidneys and have obvious advantages in modeling operations. Pigs have a more macroscopic anatomical structure, the renal artery, renal vein and ureter can be easily separated during surgery, and instruments used in laparoscopic surgery for adults or children can also be used in miniature pigs. Therefore, pigs appear to be an appropriate animal model for SA-AKI. A conference on What are the Microbial components involved in the pathogenesis of Sepsis held at Rockefeller University in May 1998 discussed the relative merits of the 2hit hypothesis to explain the process of fatal septic shock and the "multi-hit" collaborative threshold hypothesis (24). 
The development of 2-hit models allowed researchers to determine the role of inflammatory mediators in susceptibility to post-injury infection and to create models that replicate the clinical situation by generating different injury-specific inflammation patterns, from which the complex interrelationships occurring in sepsis can be accounted for. 2-hit models combining CLP and P. aeruginosa inoculation have been reported as clinically relevant sepsis models. J. M. Walker et al. (25) studied the possible beneficial effect of specialized pro-resolving mediators (SPMs) given in the post-sepsis stage to reduce infection/injury after a second hit. The results show that resolvin D2 (RvD2) promotes host defense by increasing TLR-2 signaling and macrophage/monocyte phagocytosis in less lethal and less inflammatory bacterial sepsis 48 h after the onset of sepsis. Jacqueline Unsinger et al. (26) examined IL-7, currently in several clinical trials (including hepatitis and human immunodeficiency virus), for improved survival in a 2-hit fungal sepsis model. Clinically relevant 2-hit models can provide a clearer understanding of the in vivo mechanisms of host defense in sepsis. While there are similarities in temporal inflammation and genomic host response patterns between humans and mice, the mouse immune system is more resistant. Human sepsis complications usually occur within a few days of trauma, whereas in mice they must be created artificially (27). In addition, responses may vary depending on the type of injury and other variables such as outbred/inbred lines, rodent age, etc. Therefore, the similarity of the immune-inflammatory blueprint between the animal 2-hit models and the patient should determine the timing of the hits (28). The ideal animal model of sepsis should consistently translate relevant information from animal studies into the human condition. Currently, most animal studies use young, healthy models with no comorbidities. After searching for the keywords (SA-AKI, models, comorbidities), there are very few literature studies on SA-AKI models associated with comorbidities such as advanced age, cardiovascular events, etc. One study used a trauma/hemorrhage model (TH, first hit) and a cecal ligation and puncture model (CLP, second hit) in female (♀) and male (♂) CD-1 mice aged 3, 15, and 20 months. The study showed that age/sex differences in survival, while undeniably influential, were not reflected in the response patterns delineated between the corresponding groups. The exact role of gender/age in sepsis outcomes requires further experimental and clinical review. In another study, Kent Doi et al. (29) constructed a 2-hit FA-CLP mouse model to replicate the clinical finding of high sepsis mortality in CKD patients, introducing a pre-existing comorbidity to mimic the common observation that human sepsis is more frequent in patients with underlying chronic diseases.

Pathophysiological Mechanisms

Since SA-AKI can occur in the absence of clinical symptoms of renal hypoperfusion and hemodynamic instability, and in the presence of normal or even increased global renal blood flow, it is increasingly recognized that ischemia-reperfusion injury is not the only mechanism of SA-AKI, and a "unified theory" is now widely accepted (Figure 1). The pathophysiology of SA-AKI involves injury and dysfunction of many cell types, including macrophages, vascular endothelial cells (ECs), and renal tubular epithelial cells (TECs), as well as their crosstalk and association (30).
There is increasing evidence that the pathogenesis of SA-AKI is multifactorial and complex, involving the interaction between inflammation, microcirculation dysfunction, and metabolic reprogramming. Inflammatory and Immune Response Dysregulated inflammation is the primary cause of many downstream complications, including kidney injury (31). In fact, the more significant the inflammatory response is more likely to lead to direct kidney damage. Macrophages play a central role in innate immunity (32). The first stage of the host response involves pathogen-associated molecular patterns (PAMP) binding to pattern recognition receptors (PRR) of innate immune cells, such as toll-like receptors, triggering downstream cascades of signals involved in early innate immune responses, leading to the synthesis and release of proinflammatory cell molecules and chemokines. Renal Tubular epithelial cells (RTECs) also express Toll-like receptors, especially TLR2 and TLR4 (33). A variety of cell-derived mediators release damage-related molecular patterns (DAMP) after tissue injury, promoting the pro-inflammatory phenotype (M1) of macrophages, activating the same sequence of events as PAMP amplifies the initial host response and affects local and distal cellular function, including proteolytic enzymes, reactive oxygen species (ROS), and neutrophil extracellular traps (NETs) (34,35). During the progression of SA-AKI to CKD, resident cells with a specific phenotype undergo dedifferentiation, followed by proliferation and redifferentiation. Macrophages play an important role in this process. In addition to the proinflammatory phenotype described above, macrophages also have a profibrotic phenotype, stimulating fibroblasts and myofibroblasts, accompanied by the deposition of type I and III collagen and fibronectin. RTECs during repair may be involved in higher regenerative potential and anti-apoptotic ability (36). Endothelial Injury and Microcirculation Dysfunction The second cell type that is vulnerable is the EC. Sepsis stimulates endothelial cells to produce nitric oxide, leading to vascular dilation, loss of self-regulation, and endothelial dysfunction. Changes in cell-to-cell contact between endothelial cells are mediated by interactions between VEGF, VEGFR2, Ang, VEcadherin, and ligand adhesion molecules, as well as complex interactions between endothelial cells and leukocytes that allow leukocytes to pass through (37). Many molecules simultaneously control microvascular permeability, resulting in insufficient blood volume relative to the vessel when tight cellular connections loosen. In addition, during the period of sepsis, confirmed microvascular thrombosis related to inflammation, bacterial pathogen associated molecular patterns were found in endothelial cells, platelets, and leukocytes on the surface of the PRR, bacterial endotoxin can also stimulate tissue factor expression and original activation increase fibrinolytic enzyme inhibitor 1 (PAI-1) levels, blocking fibrinolysis and subsequent initiation of the coagulation process promotes microvascular thrombosis (38,39). RETCs Apoptotic Cell Death and Sublethal Injury In RTECs, infiltration of inflammatory cells and a large number of inflammatory factors lead to deterioration of renal function, apoptotic cell death, and sublethal injury. Sublethal changes in RTECs include loss of cell polarity, reduced tight junction protein expression, and biological energy disturbance (40). 
During the progression of SA-AKI to CKD, as in immune cells, early metabolic reprogramming of TECs toward aerobic glycolysis improves resistance and tolerance. In addition, epigenetic changes may occur, with cell cycle arrest in the G2/M phase and a significant increase in connective tissue growth factor and TGF-β production (41).

Metabolic Reprogramming

Among the various cell types of the kidney, RTECs are the most metabolically active cells and are very sensitive to sepsis-related injury. Under normal physiological conditions, oxidative phosphorylation (OXPHOS) produces more than 95% of the cellular ATP (42), and aerobic respiration is the main mechanism of ATP production. However, during SA-AKI, RTECs may first switch to glycolysis, converting pyruvate to lactic acid, an inefficient mechanism for producing ATP. For example, CLP animal models and human SA-AKI show decreased ATP levels in the kidney (43). Inhibition of aerobic glycolysis and induction of OXPHOS can reduce susceptibility to AKI and significantly improve survival (44). As ATP levels decrease, AMP-activated protein kinase (AMPK) is activated, on the one hand leading to increased glycolysis, fatty acid oxidation, and glucose transport capacity; on the other hand, it induces the production of key antioxidant enzymes and drives mitochondrial biogenesis through the peroxisome proliferator-activated receptor γ coactivator-1α (PGC-1α). Late activation of AMPK may eventually stabilize the energy balance through cell survival and mitochondrial biogenesis. The availability of functional mitochondria is an important component of cell metabolism and metabolic reprogramming. Sepsis results in significant mitochondrial damage and activation of mitochondrial quality-control processes, such as mitophagy (damaged mitochondria are engulfed and recycled), biogenesis (synthesis of new functional mitochondria), or interference with cellular signaling pathways, such as the Akt/mTORC1/HIF-1α pathway (45).

FIGURE 1 | Clinical course and pathophysiology of SA-AKI. Sepsis is the most common cause of AKI in critically ill patients. However, sepsis and AKI predispose hosts to each other, and it is often difficult to determine the exact timing of the onset of these two syndromes clinically. There is increasing evidence that the pathogenesis of SA-AKI follows a "unified theory" involving the interaction between inflammation, microcirculation dysfunction, and metabolic reprogramming. The pathophysiology of SA-AKI involves injury and dysfunction of many cell types, including macrophages, ECs, and RTECs. PAMPs and/or DAMPs released from damaged tissues activate macrophages and promote their pro-inflammatory phenotype (M1), resulting in the release of pro-inflammatory cytokines and chemokines, which can damage kidney tissue. The second vulnerable cell type is the EC. Sepsis stimulates endothelial cells to produce nitric oxide, which causes blood vessels to dilate. Many molecules simultaneously control microvascular permeability, resulting in insufficient blood volume relative to the vasculature when tight cellular connections loosen. In addition, inflammation-related microvascular thrombosis has been confirmed during sepsis. In RTECs, infiltration of inflammatory cells and a large number of inflammatory factors lead to deterioration of renal function, apoptotic cell death, and sublethal injury.
Metabolic reprogramming may lead to optimization of RTEC energy consumption, reprogramming of substrate utilization, and enhanced cellular resistance to oxidative damage (45). Therefore, the effect of OXPHOS induction, or of promoting OXPHOS modulators, on mitochondrial function is closely related to renal function and survival during sepsis.

THE DIAGNOSTIC OR THERAPEUTIC INTERVENTIONS OF SA-AKI

The prevention of SA-AKI is complex, and most patients already show apparent renal insufficiency when seeking treatment. Therapeutically, the management of SA-AKI remains largely supportive and nonspecific. More effective prevention and intervention methods are therefore urgently needed for SA-AKI. The past decade has seen an explosion in the use of high-throughput technologies and in the computational integration of multidimensional data. Integrating multi-omics studies offers a deeper understanding of the mechanisms of SA-AKI and the possibility of individualized treatment. Next, the existing prevention and treatment interventions for SA-AKI are discussed.

Antibiotics and Source Control

Early and appropriate sepsis source control was associated with a reduced risk of AKI and a greater likelihood of renal recovery within 24 hours (15). Improved monitoring of host responses through transcriptomic and/or metabolomic analysis has suggested several novel interventions targeting immunotherapy. An example of a promising but failed attempt is drugs targeting Toll-like receptors (TLRs). In addition, a new type of epigenetic therapy that modulates the epigenetic processes of gene transcription in immune cells during sepsis could be used to restore immune function, for example through induction of immunity and reversal of immune paralysis by β-glucan, or through direct pharmacological manipulation of epigenetic enzymes (46). The SIRT1 inhibitor EX-527, a small molecule that blocks the NAD+-dependent activity of SIRT1, increases leukocyte accumulation in the peritoneum and improves peritoneal bacterial clearance, showing significant protective effects during abdominal sepsis in mice (47).

Fluid Resuscitation

Fluid resuscitation is the cornerstone of septic shock management. An initial moderate infusion of resuscitation fluid (30 mL/kg within the first 3 hours) is followed by dynamic measurements of fluid responsiveness to determine the need for further fluid or vasoactive agents. There is clear evidence that excessive resuscitation is also harmful in the setting of AKI (48). However, a complementary analysis of the ProCESS trial focused on renal outcomes up to 1 year and found that the use of early goal-directed therapy, alternative resuscitation, or conventional care did not affect the development of new AKI, AKI severity, fluid overload, RRT requirements, or renal function recovery (18).

Vasoactive Agents

In the case of SA-AKI, several large multicenter trials have examined traditional drugs such as norepinephrine (noradrenaline), epinephrine, vasopressin, and dopamine, as well as more novel drugs such as angiotensin II and levosimendan (49). Norepinephrine is recommended as the first-line agent for septic shock treatment, and vasopressin is the consensus second-line agent (50). A small subgroup analysis of patients treated with RRT showed that patients receiving angiotensin II required less RRT than those receiving placebo and were more likely to survive to day 28 (53% versus 30%; P=0.012); the results need to be validated in a larger SA-AKI cohort (51).
Drug Therapy Another treatment for sepsis is to protect individual organs. In preclinical and small clinical studies, recombinant human alkaline phosphatase (AP) has shown a protective effect against SA-AKI through direct dephosphorylation of endotoxin leading to reduced inflammation and organ dysfunction and improved survival (52). In a recent international, randomized, doubleblind, placebo-controlled, dose-discovery adaptive Phase IIa/IIb study of 301 PATIENTS with SA-AKI, 1.6 mg/kg was found to be the optimal dose with no significant improvement in short-term renal function compared with placebo. However, the use of AP was associated with a reduction in day 28 mortality (17.4% versus 29.5% in the placebo group) (52). Thiamine deficiency is associated with anaerobic metabolism and increased lactic acid. Adding thiamine improves mitochondrial function in sepsis. In a secondary analysis of a single-center, randomized, double-blind, placebo-controlled trial, patients randomized to intravenous thiamine (200 mg twice daily for 7 days) had lower AKI severity and fewer patients received RRT (53). Targeted therapies, such as targeting apoptotic pathways with caspase inhibitors and inhibiting inflammatory cascades, have shown some promising results in experimental models (54). As of June 2022, a search at www.clinicaltrials.gov listed 2,772 sepsis studies, of which 94 SA-AKI studies and 49 involved intervention (clinical trials). Many other compounds are being actively investigated for sepsis, such as remtimod, pirfenidone sustained release, l-carnitine, and probiotics. Renal Replacement Therapy Guidelines suggest using continuous or intermittent renal replacement therapy (weak recommendation, low-quality evidence) for patients with adult sepsis/septic shock who develop AKI and require RRT (2). Widely accepted indications for initiation of RRT include refractory fluid overload, severe hyperkalemia, and metabolic acidosis in which drug therapy fails, uremic signs (pericarditis and encephalopathy), dialyzable drug or toxicosis (55). There is little data on the effect of RRT initiation timing (early and delayed strategies) on SA-AKI. Early initiation of RRT may improve prognosis by limiting systemic inflammation, fluid overload, and organ damage, but there are currently no specific RCTs to determine the optimal time to initiate RRT in SA-AKI. In the RENAL and ATN studies, there was no significant difference in the odds ratio (OR) of mortality in patients with sepsis who received higher and lower intensities of RRT (56). In addition, SA-AKI has associated with lower SCr and more pronounced oliguria, so the less severe KDIGO stage defining these criteria may underestimate the severity of AKI and create a bias in the time to define RRT. New potential biomarkers that can predict AKI severity, such as TIMP-2 x IGFBP-7, may help determine when to start RRT in this setting (57). THE OMICS ERA AND ITS IMPACT ON THE STUDY OF SA-AKI SA-AKI is currently defined in terms of clinical symptoms, and there is considerable variation in disease presentation, progression, and response to treatment, highlighting the heterogeneity of the underlying biological mechanisms. As a result, clinicians encounter much uncertainty when considering the best treatment and risk prediction. Omics refers to the comprehensive study of the roles, relationships, and effects of various molecules in biological cells. 
Today, omics technologies are advancing rapidly, and large datasets can be obtained from individuals and patient populations across the SA-AKI genotype-phenotype continuum (Figure 2).
FIGURE 2 | Schematic representation of a multi-omics approach to SA-AKI. Single omics data can be integrated into multi-omics and combined with systems biology to better understand the pathophysiological mechanisms of SA-AKI and facilitate the discovery and development of emerging biomarkers for treatment.
Starting with genomics, new sequencing technologies have been used to rapidly elucidate entire genomes and simultaneously analyze all genes (58). There are also transcriptomics (the study of the expression of all genes in a cell or organism), proteomics (the analysis of all proteins), metabolomics (the comprehensive analysis of all metabolic small molecules), epigenomics, metagenomics, glycomics, lipidomics, connectomics, and so on (59). A fundamental shift in integrative biology, from focusing on the function of individual molecules or pathways to analyzing biological systems as a unified whole, is the direction in which omics technology is developing. Combined with these high-dimensional data sets, computational methods such as machine learning provide the opportunity to reclassify patients into molecularly defined subgroups that better reflect underlying disease mechanisms, with the ultimate goal of improving diagnostic classification, risk stratification, and allocation of molecular, disease-specific therapies for patients with SA-AKI. Therefore, we will first discuss the application of individual omics techniques to the study of SA-AKI (Table 1) and then discuss the integrated use of multi-omics. Genomics Genomics is used to identify individual genetic variation and disease susceptibility and studies relatively few individual heritable traits at specific loci. The completion of the Human Genome Project led to the initial sequencing of the roughly 20,000-30,000 genes in the human genome, while current genomic studies include whole-genome sequencing, covering regulatory regions and other untranslated regions, to identify potentially pathogenic variations anywhere in the genetic code. Whole exome sequencing, involving the sequencing of the protein-coding regions of the genome, is a widely used next-generation sequencing (NGS) method; the human exome makes up no more than 2% of the genome, but it contains about 85% of the variants known to be associated with disease, making this approach a cost-effective alternative to whole genome sequencing. DNA microarrays rely on nucleic acid hybridization to detect the presence of SNPs and CNVs (83). Studies have used large-scale genomic approaches, identifying SNPs with microarrays of variations known to occur in specific diseases, to find genetic variants associated with SA-AKI. Angela J. Frank et al. (60) included 1264 patients with septic shock, of whom 887 white patients were randomly assigned to discovery and validation cohorts, and found that 5 SNPs, in genes such as BCL2, SERPINA, and SIK3, were associated with SA-AKI. Subsequently, Vilander (61) and colleagues included genetic samples from 2567 patients without chronic kidney disease, including 837 cases of sepsis and 627 cases of septic shock, and found that SERPINA4 and SERPINA5, but not BCL2 and SIK3, are associated with acute kidney injury in critically ill patients with septic shock.
In addition to focusing on variants that influence survival after SA-AKI, the second goal is to find variants that influence SA-AKI risk. Laura M. Vilander et al. (64) found that SNPs in NFKB1 loci rs41275743 and RS4648143 are associated with the risk of AKI in sepsis patients. Epigenomics Epigenomics regulates gene transcription through epigenetic changes such as DNA methylation, histone modification, and changes in non-coding RNA expression. Binnie (84) and colleagues performed epigenome-wide DNA methylation analysis on whole blood samples from 68 sepsis and 66 nonsepsis severely ill adults and found 668 differential methylation regions (DMR), of which the majority (61%) were hypermethylated. SA-AKI research is currently focused on animal studies. Selective IIa class HDAC inhibitor TMP195 may have renal protective effects in LPS-induced SA-AKI mouse models (85). In LPS-induced AKI, down-regulation of miR-29B-3p exacerbates podocyte damage by targeting HDAC4. miR-29b-3p may be an important target for AKI therapy (86). Future research into the mechanisms of sepsis may aim to integrate epigenomics and transcriptome to determine how much variation in transcriptome is influenced by methylation, histone modifications, and non-coding transcripts. Transcriptomics Transcriptomics is the study of complete gene transcripts or RNA types that are transcribed by specific cells, tissues, or individuals at specific times and states (87). It includes both coding RNAs that are translated into proteins and non-coding RNAs that are involved in post-transcriptional control, which further affect gene expression. Unlike genomics, which focuses on static DNA sequences, transcriptomics can identify genes and gene networks that are activated or suppressed under specific conditions to assess dynamic gene expression patterns. At the quantitative level, with reference, genes could be quantitatively analyzed, while without reference, only Unigene (optimized transcript) could be quantitatively analyzed, and downstream differential gene analysis and functional annotation could be performed. At the structural level, parameters can be used for variable clipping, SNP analysis, gene structure optimization, and new gene prediction. At present, it has been widely used in basic research, clinical diagnosis, drug development, and other fields. Mei Tran et al. (65) performed microarray sequencing after intraperitoneal injection of LPS in mice and found that restoring the expression of mitochondrial biogenic factor PGC-1a was necessary for the recovery of endotoxin AKI, indicating that changes in gene expression pathways related to cell metabolism and mitochondrial function were most abundant in septic LPS mice. In a study of 179 children with septic shock and 53 agematched normal controls, Rajit K Basu et al. (66) found that 21 unique gene probes were upregulated in SA-AKI patients compared with non-SA-AKI patients. In other microarray experiments using miRNAs (non-translational RNA molecules with transcriptional regulatory functions), Qin-min Ge et al. (67) found that miR-4321 and miR-4270 were significantly upregulated in septic induced AKI compared with non-septic AKI, while only miR-4321 was significantly overexpressed in the septic group compared with the control group. Pal Tod et al. (69) observed that miR-762 expression was significantly increased in early septic AKI and the miR-144/451 cluster was upregulated at 24 h after intraperitoneal injection of LPS in mice. 
Proteomics With the development of omics technology, research has shifted to the analysis of the translation "products" of cellular proteins and RNA transcripts. Because of mRNA processing and post-translational protein modification (for example, the addition or removal of phosphate or methyl groups), the proteome is highly dynamic, and specific protein levels cannot necessarily be inferred from gene expression levels; this greatly increases the complexity of proteins and peptides and poses a major challenge for proteomics analysis (88).
TABLE 1 (excerpt) |
Rat CLP sepsis; 1H nuclear magnetic resonance analysis: detected important increases in urinary creatine, allantoin, and dimethylglycine levels in septic rats; dimethylamine and methylsulfonylmethane metabolites were more frequently detected in septic animals treated with 6G or 10G and were associated with increased survival of septic animals.
Garcia, 2019 (80); tissue, plasma, and urine metabolomics (NMR); pigs infused with E. coli: metabolic differences between control animals and septicemic animals: in renal tissue, lactic acid and niacin increased, while valine, aspartate, glucose, and threonine decreased; in urine, iso-glutamate, N-acetylglutamine, N-acetylaspartic acid, and ascorbic acid increased, while inositol and phenylacetylglycine decreased; in serum, lactate, alanine, pyruvate, and glutamine increased, while valine, glucose, and betaine concentrations decreased.
Ping, 2019; tissue metabolomics (GC-TOFMS); rat after intraperitoneal injection of LPS: metabolic disorders of taurine, pantothenic acid, and phenylalanine in the renal cortex are associated with the development of SA-AKI.
Lin, 2020.
The introduction of a variety of techniques, such as two-dimensional gel electrophoresis, liquid chromatography, and high-sensitivity, high-resolution mass spectrometry, has enabled the identification and quantification of proteins and peptides in tissues and biological fluids and has provided new insights into disease-related processes at the molecular level (89). High-throughput proteomic analysis of urine, plasma, and tissue samples has identified emerging biomarkers and drug targets. In a new rat model of sepsis-induced acute renal failure with a heterogeneous response similar to that in humans, DIGE was used to detect changes in urinary proteins and identified Meprin-1alpha as a potential biomarker and drug target (70); MUP5 decreased in SA-AKI, and mitochondrial energy production and electron transport were significantly correlated at the protein level (73). PARK7 and CDH16 are considered novel biomarkers for early diagnosis of septic AKI and have been validated in human patients (75). In the mouse CLP sepsis model, tissue proteomics (DIGE) and MALDI-TOF/TOF MS techniques were used to identify phosphorylated MYL12B as a potential plasma biomarker for the early diagnosis of SA-AKI. Several recent studies have identified promising candidate marker proteins for disease onset and progression, and further identified pathways specific to SA-AKI and its transition to CKD. Moderate and severe mouse CLP sepsis models were established, the temporal changes of the kidney proteome and phosphoproteome were examined on days 2 and 7 after surgery, and 2119 proteins and 2950 phosphosites were identified; several new and/or less-studied SA-AKI marker proteins, Hmgcs2, Serpin, S100A8, and Chil3, were validated (76). In the migration and E.
coli inoculation model, Using proteomics Gel-free technique, urine chitinase 3like proteins 1, 3 and acidic mammalian chitinase were found to distinguish between sepsis and mouse septicaemic induced AKI, NGAL, and thioredoxins, and increased with the severity of AKI (71). Metabonomics Metabolomics refers to the comprehensive and systematic identification and quantitative analysis of molecular metabolites of less than 1000 daltons in biological samples such as blood and tissues under physiological or pathological conditions, which may more accurately describe the cellular processes active under any conditions (90). Metabolomics studies use two main methods to detect metabolites: nuclear magnetic resonance (NMR) and liquid chromatography/mass spectrometry (LC/MS). NMR is quantitative, non-destructive, reproducible, and can accurately quantify the abundance and molecular structure of metabolites (91). The sample preparation is simple and the measurement time is relatively short, which is suitable for high-throughput, untargeted metabolite fingerprint study. But the disadvantage is relatively low sensitivity. LC/MS is also widely used in metabolomics, with higher sensitivity and quantification of more metabolites, but with poor accuracy and reproducibility. Sample preparation and solvent selection are even more critical in MS-based experiments because metabolite extraction requires the removal of proteins and salts that adversely affect the quality of the measurement as well as the instrument itself. MS mass analyzers in metabolomics commonly use quadrupole time of flight, Orbitrap, and Fourier transform, which are suitable for distinguishing the chemical complexity of metabolomics (92). Metabolites are the final products of biological activities and are the most direct and comprehensive biomarkers reflecting physiological phenotypes. More and more studies have shown that changes in energy metabolic pathways, also known as metabolic reprogramming, are an important factor in the pathophysiology of SA-AKI. Therefore, it is of great significance to study the metabolic changes of SI-AKI and identify its early biomarkers for early clinical diagnosis and treatment. Firstly, inflammatory metabolites and products of kidney damage increase. Paul Waltz et al. (77) used metabolomics-LC/MS technology in the MICE CLP Sepsis model and found that the evidence of CLP-induced kidney injury is increased serum creatinine, blood urea nitrogen, and cystatin C. CLP raises multiple inflammatory markers. Levels of osmotic regulators varied, with an overall increase in pinitol, urea, and taurine in response to CLP. Francisco Adelvane de Paulo Rodrigues (79) and colleagues detected significant increases in creatine, allantoin, and dimethylglycine levels in septic rats by 1-hour NMR analysis. However, dimethylamine and methanosulfonyl metabolites were detected more frequently in septic animals treated with 6-gingerol (6G) and 10-gingerol (10G) and were associated with increased survival in septic animals. Gingerol alleviates septic AKI by reducing renal dysfunction, oxidative stress, and inflammatory response, and the mechanism may be related to the increased production of dimethylamine and methanosulfonyl methane. Secondly, the overall energy spectrum of sepsis showed an increase in glycolysis intermediates and a decrease in flux through the tricarboxylic acid (TCA) cycle. Similar changes in metabolites were also observed through tissue and serum metabolomics-1H NMR after LPS injection in mice. 
The contents of betaine, taurine, lactic acid, and glucose in LPS mice were significantly decreased. The contents of 3-CP, acetoacetic acid, pyruvate, NADPH, creatine, creatinine, and trimethylamine oxide were significantly increased (78). In large animal models of pigs infused with E. coli, metabolic differences were found between control and sepsis animals: lactic acid and niacin increased in renal tissues, while valine, aspartic acid, glucose, and threonine decreased; The contents of isoglutamate-acetylglutamate-acetylaspartic acid and ascorbic acid in the urine increased, while the contents of inositol and phenylacetyl glycine decreased. Serum concentrations of lactic acid, alanine, pyruvate, and glutamine increased, while those of valine, glucose and betaine decreased (80). In addition, in plasma samples from 31 patients with sepsis and 23 healthy individuals, metabolomics-GC/MS suggest that down-regulation of energy, amino acid, and lipid metabolism may serve as a new clinical marker for identifying internal environmental disorders, especially involving energy metabolism, leading to sepsis (82). Omics Techniques in SA-AKI Outcome Events Through literature review, SA-AKI is important to the two topics cited above and is considered a major public health problem associated with increased mortality and progression to CKD. However, relevant studies are mainly focused on prospective and observational cohort studies of patients in the real world, and there are still few studies on omics technology. Omics, especially multi-omics, may have more in-depth exploration and analysis of the two topics, which is the direction of further research of omics technology in SA-AKI. While maintaining homeostasis, the kidney, as an endocrine and immune organ, may regulate distant multi-organ dysfunction. Several recent experimental studies have shown that AKI is associated with extensive damage to distant organs such as the lungs, heart, liver, and intestines (93,94). The function of remote organs can be affected by a variety of biologically related pathways, such as transcriptome changes, apoptosis, upregulation of various damage promoting molecules, oxidative stress, inflammation, and loss of vascular function (95). In addition, the severity of organ dysfunction is independently associated with mortality, which can rise to as high as 45%-60% when AKI is associated with other organ dysfunction, such as acute respiratory distress syndrome [ARDS], heart failure, or sepsis. In a prospective observational cohort of 1753 patients with critically ill AKI, SA-AKI (n = 833) was associated with an increased risk of in-hospital death. In a systematic review of long-term renal outcomes after septic AKI and long-term renal outcomes, studies using keywords associated with septic AKI were identified from PubMed and CINAHL databases within 5 years, with a time range of 28 days to 3 years for long-term renal outcomes, Most take one year. Renal outcomes range from recovery to renal replacement therapy to death. All of these studies excluded patients with CKD (96). The molecular mechanisms underlying AKI's transformation into CKD are complex, and most literature has focused on the complex balance between adaptation and maladaptive repair processes (97). Maladaptive repair leads to chronic damage and loss of kidney function, setting the stage for CKD, which eventually progresses to ESRD. 
This process is accompanied by permanent changes in undesirable structures, persistent lowgrade inflammation, activation of perivascular and interstitial fibroblasts, vascular sparseness, and parenchymal ischemia (98). The integration of multiple omics techniques opens up new possibilities for improving our understanding of AKI and the driving forces behind the transition from AKI to CKD. Yi-han Lin et al. (76) analyzed the changes in the global proteome and phosphorylated proteome levels in renal tissues on day 2 and day 7 after CLP by constructing a mouse model of moderate severity CLP and using filter-based sample processing method combined with an unlabeled quantitative method, corresponding to SA-AKI and transition to CKD, respectively. It provides a view that renal tissue dynamically regulates the oxidative stress induced by sepsis, and provides enlightenment for the exploration of potential diagnosis and treatment methods in the future. In this study, a total of 2119 proteins and 2950 phosphates were identified to identify specific response pathways to SA-Aki-CKD transformation, including regulation of cellular metabolism, oxidative stress, and energy expenditure in the affected kidney. Of these, the majority (56%) are associated with small molecular metabolic processes (FDR = 3.35E-48), such as lipids, nucleotides, alcohol, and other fatty acids. Network analysis also revealed that several protein clusters, such as REDOX enzyme complex, peroxisome, and cytochrome P450 (CYP) family proteins, may play important roles in the AKI-CKD transition. Novel Biomarkers for SA-AKI The role of emerging biomarkers in different renal syndromes, including SA-AKI, is a rapidly growing area of research. In patients with sepsis, early detection of AKI is critical to provide optimal treatment and avoid further kidney damage. Because specific biomarkers can detect renal stress or damage before significant changes in function (preclinical AKI) or even before the absence of functional changes (subclinical AKI), studying SA-AKI biomarkers could provide additional insights into the pathophysiology of SA-AKI (99). In order to provide prevention and early diagnosis of treatment when it is most effective. Table 2 summarizes some of the biomarkers studied in SA-AKI from the aspects of inflammatory, endothelial injury, tubule injury, and AKI risk markers, to provide prevention and early diagnosis when treatment is most effective. Renal tubular cell damage contributes to the spread of AKI during sepsis. Among the newer biomarkers, neutrophil gelatinase-associated lipid carrier protein (NGAL), kidney injury molecule-1 (KIM-1), liver-type fatty acid binding protein (L-FABP), and cystatin C(Cys C) accelerated the diagnosis of SA-AKI. NGAL is the most widely studied renal biomarker which is a member of the human lipid carrier protein family and consists of 178 amino acid residues (114). The level of NGAL increased sharply after kidney injury, which can be used as an early sensitive biomarker of kidney injury. NGAL expression is inconsistent in SA-AKI. Studies have shown that urinary NGAL has higher specificity for S-AKI than plasma NGAL (80.0% vs 57.0%) (115). Sollip Kim et al. found in A systematic review and meta-analysis that plasma NGAL had A high sensitivity and A high negative predictive value for AKI in adult sepsis patients. However, this study did not reveal the usefulness of urine NGAL (116). 
KIM-1 is a type I transmembrane glycoprotein encoded by the TIM-1 gene and is a member of the T cell immunoglobulin mucin (TIM) gene family. KIM-1 was first used as a biomarker for acute kidney injury (AKI) in 2002, but there is little evidence to support its role in S-AKI. Similar to uKIM-1, sKIM-1 can also predict the occurrence of septic AKI at an early stage, but it has no predictive value for judging the severity of AKI or the prognosis of sepsis (109). However, these biomarkers lack the ability to further stratify SA-AKI risk or inform us of primary and secondary sites of injury. Tissue inhibitor of metalloproteinase 2 (TIMP-2) stimulates p27 expression (117). Insulin-like growth factor binding protein-7 (IGFBP7) increases the expression of p53 and p21 (118). There are no widely accepted risk scores for SA-AKI, and only the HELENICC score currently predicts mortality in patients requiring renal replacement therapy (RRT) (120). Comparing 30 patients before electronic alert activation with 30 patients after electronic alert activation, the time to receive any sepsis-related intervention was shorter after an alert, with a median difference of 3.5 hours (P = 0.02) (121). Using electronic health records to create electronic alert systems has the potential to identify high-risk patients and initiate interventions more quickly. Using real-time data from electronic health records to identify patients with SA-AKI, automatic alarms can be combined with biochemical biomarker testing to improve case detection and risk stratification for SA-AKI (122).
TABLE 2 | Biomarkers studied in SA-AKI (type of biomarker; biomarker; source; potential use in SA-AKI).
Inflammatory biomarkers:
IL-6 (mononuclear macrophages, Th2 cells, vascular endothelial cells, and fibroblasts): baseline IL-6 at admission predicted AKI in patients with severe sepsis, and IL-6 also predicts the development of AKI and the need for RRT in patients with severe sepsis (100).
IL-18 (monocytes, dendritic cells, macrophages, and epithelial cells): in a prospective, multicenter cohort, uIL-18 independently predicted the progression of septic AKI (AUC 0.619; 95% CI, 0.525 to 0.731) (101).
sTREM-1 (an activated receptor selectively expressed on the surfaces of neutrophils, macrophages, and mature monocytes): in patients with sepsis, the AUC values of plasma sTREM-1 for the diagnosis and prediction of AKI (24 h before diagnosis) were 0.794 and 0.746, respectively; the AUC values of urine sTREM-1 were 0.707 and 0.778, with an AUC of 0.922 for prediction 48 hours before diagnosis, making urine sTREM-1 a fairly good predictor (102).
Endothelial injury biomarkers:
Ang (Ang1 is mainly synthesized by perivascular supporting cells, vascular smooth muscle cells, and tumor cells; Ang2 is mainly synthesized by vascular smooth muscle cells): Ang1 has a protective effect against endotoxemia, increasing vasoconstriction and reducing pulmonary microvascular leakage associated with inflammation (103); circulating Ang1 levels were suppressed in critically ill patients with septic shock (104); circulating Ang-2 is a strong independent predictor of mortality in ICU dialysis-dependent AKI patients (105).
VE-cadherin (vascular endothelial cells): plasma sVE-cadherin was independently associated with AKI-RRT, suggesting that disruption of endothelial adhesion and connectivity may contribute to the pathogenesis of organ dysfunction in sepsis (106).
sTM (vascular endothelial cells): compared with the sepsis non-AKI group, sTM in the SA-AKI group was significantly different (P < 0.0001); multivariate logistic regression analysis showed that sTM was an independent predictor of AKI, with an AUROC of 0.758 (P < 0.0001) (107).
Tubular injury biomarkers:
NGAL (leukocytes, medullary loops, and collecting ducts): SA-AKI patients have higher detectable plasma and urine NGAL compared with non-septic AKI patients; these differences in NGAL values in SA-AKI may have diagnostic and clinical relevance as well as pathogenetic implications (108).
KIM-1 (RTECs): uKIM-1 and sKIM-1 levels were significantly higher in SA-AKI than in patients without AKI; the ROC of uKIM-1 and sKIM-1 for AKI prediction was 0.607 and 0.754, respectively (109).
L-FABP (liver cells; RTECs): urinary L-FABP level may be a predictive marker of sepsis severity and mortality, and can serve as a useful biomarker for patients with sepsis complicated with AKI (109).
Cys C (all nucleated cells): urine and plasma Cys C are of value in the diagnosis and prediction of AKI occurrence (24 hours before diagnosis) in patients with SA-AKI (21); Aydogdu et al. confirmed that plasma and urine Cys-C were good markers for early diagnosis of sepsis-associated AKI (AUCs 0.82 and 0.86, respectively) (110); however, some studies in adults and newborns have shown that sepsis has no effect on plasma or urine levels of Cys-C (111, 112).
Omics Databases on Kidney Disease Omics databases provide up-to-date information on molecular function, localization, and expression, and store information about experiments that have already been conducted; they are therefore helpful for study design and are a valuable tool for the study of kidney disease. For clinical practice, systems biology methods and high-throughput technologies are promoting a shift in medicine from reactive care toward proactive and preventive care: powerful computational methods can be used to find new biomarkers, develop diagnostic tools, elucidate pathogenesis, and create models of possible therapies for patient screening, diagnosis, prevention, and treatment. This work is ongoing, and omics is expected to be gradually introduced into clinical practice within the next decade (123). In this review, referring to Theofilos Papadopoulos et al. (124), we describe universal omics databases covering a wide range of molecular and pathological information as well as specific databases for kidney disease (Table 3). MULTI-OMICS INTEGRATION Numerous studies have shown that the integration of multi-omics data sets can be applied to a wide range of biological problems, helping to unravel the underlying mechanisms at the multi-omics level. Yehudit Hasin et al. (125) proposed a comprehensive analysis method for multiple sets of data, divided into three categories: genome-first approaches that attempt to determine the mechanism by which GWAS loci lead to disease, phenotype-first approaches that seek to understand the pathways leading to disease, and environment-first approaches that use the environment as the primary variable to study its interfering pathways or interactions with genetic variation. Current omics research on SA-AKI has mostly focused on single-omics studies, and only a few studies have integrated multiple omics techniques to address the three critical issues of SA-AKI. ① Subtypes and classification of SA-AKI based on multi-omics features: no integrated multi-omics study has yet addressed this question.
② Prognostic biomarkers for SA-AKI, including disease diagnosis and driver genes. A good example is Raymond J. Langley et al. (126) and his colleagues examined clinical characteristics and plasma metabolomics and proteomics of patients with communityacquired sepsis upon arrival at the hospital emergency department and 24 h later. Different characteristics of proteins and metabolomics are concentrated in fatty acid transport and b -oxidation, gluconeogenesis, and citric acid cycles and vary more as death approaches. However, the metabolomics and proteomics of survivors of mild sepsis were not different from those of survivors of severe sepsis or septic shock. An algorithm derived from clinical features and measurements of seven metabolites predicted patient survival. ③Gain insight into the pathophysiology of SA-AKI. Takashi Hato and his colleagues conducted two experiments specifically targeting SA-AKI. A combination of transcriptomics, proteomics, and metabolomics, showed that endotoxin preconditioning reprogrammed macrophages and tubules to create a protective environment to prevent severe AKI in septic mouse models, upregulating the antibacterial molecule itaconic acid and its activase Irg1. Many genes activated by endotoxin were located near heterochromatin, suggesting that epigenetic regulation may be involved in the preconditioning response (127). In the second study, they used gram-negative sepsis model for the translation group, transcriptome, and proteome of the joint inspection new; translation will be closed as a vital characteristic of the late sepsis, further found that 5 'cap dependency translation close the reversal of the improved degree of kidney damage caused by sepsis. Mariam P. Alexander et al. (128) compared COVID-19 AKI with SA-AKI, and analyzed the morphological, transcriptome, and proteomic characteristics of postmortem kidneys. Transcriptomics found that COVID-19 AKI and SA-AKI have a rich transcriptional pathway associated with inflammation (apoptosis, autophagy, major histocompatibility complex I and II, and Type 1 T-assisted cell differentiation) compared to noninfectious AKI; Proteomic pathway analysis showed that both of them were enriched to a lesser extent in necrotic apoptosis and Sirtuin signaling pathways, both of which are involved in the regulatory response of inflammation. NEW TECHNIQUES AND FUTURE PERSPECTIVES Our understanding of disease processes will likely to evolve rapidly and revolutionarily as new technologies and methods development. For example, techniques such as scRNA-seq and mononuclear RNA-seq (snRNA-seq) provide insights into the molecular processes of SA-AKI at the cellular level, with artificial intelligence aimed at accurately predicting the onset of SA-AKI in advance. In future applications, tissue samples or whole organs can be sequentially analyzed through a combination of these techniques to generate spatial multi-omics datasets, which are expected to provide unprecedented insights into the deep molecular biology of the system under study. Integrating Microarray-Based Spatial Transcriptomics and Single-Cell RNA-Seq ScRNA-seq provides detailed information on single-cell transcriptional expression, allowing cell-to-cell analysis of RNA expression differences (129). 
It uses a variety of methods for cell isolation and transcription amplification, such as microfluidics devices that capture cells in hydrogel droplets or methods that rely on physical isolation of a cell (such as fluorescent-activated cell sorting into a 96-well plate and microfluidics chip used by Fluidigm C1) from another well (130). Due to the heterogeneous cell types (such as epithelial cells, endothelial cells, fibroblasts, vascular smooth muscle, and immune cells) in different renal microenvironments and interactions, SA-AKI has different effects on various cells in the kidney. scRNA-seq enables researchers to detect highly variable genes (HVGS) between cells that contribute to mixed populations, which cannot be achieved by bulk RNA-seq (131). One of the significant challenges of the scRNA-seq data is matching the RNA profile with its location (spatial information) in the tissue (132). Spatial transcriptome sequencing provides complete tissue spatial location information, enabling spatial localization of different single-cell subpopulations by adding spatial information to scRNA-seq data, increasing understanding of specific cell subpopulations and their interactions in development, homeostasis, and disease (133). Currently, there are few studies on single-cell RNA sequencing technology for SA-AKI. Ricardo Melo Ferreira et al. (134) used single-cell sequencing to deconvolution the signature of each spatial transcriptome point in the mouse CLP model to determine the co-localization mode between immune cells and epithelial cells. Spatial transcriptomics revealed that infiltrating macrophages dominate the exocortical features, and Mdk was identified as the corresponding chemokine, revealing the mechanisms driving immune cell infiltration and detecting associated cell subsets to complement single-cell sequencing. Danielle Janosevic et al. (135) provided a detailed and accurate view of the evolution of renal endotoxemia at the cellular and molecular levels by sequencing single-cell RENAL RNA in a mouse endotoxemia model, providing the first description of spatio-temporal endotoxin-induced transcriptome changes in the kidney. It reveals that the involvement of various cell populations is organized and highly coordinated in time, promoting the further investigation of human sepsis. Artificial Intelligence Artificial intelligence (AI) technology has emerged as doctors face the challenge of being overwhelmed by the amount of data generated in healthcare today (136). Artificial intelligence is a scientific discipline that aims to understand and design computer systems that display intellectual processes (137). Machine learning (ML), a subset of artificial intelligence, may detect disease onset before clinical symptoms appear, allowing for a more proactive approach (138). In machine learning, supervised learning and reinforcement learning are widely used (139). In the narrative review of the clinical application of artificial intelligence in sepsis, 15 articles about the use of AI model to diagnose sepsis, the model with the best performance reached 0.97 AUROC; 7 prognostic articles, predicting mortality over time with an AUROC of up to 0.895; 3 articles on helping to treat sepsis, in which AI use was associated with the lowest mortality (140). Kumardeep Chaudhary et al. 
(141) used deep learning to identify septic AKI subtypes in an unsupervised, data-driven manner from routinely collected electronic health record data; this is the first study to use routinely collected electronic health record data to identify clinical subtypes of the SA-AKI syndrome in the ICU. When combined with other biomarkers and omics data, this approach could further accelerate research into the discovery of new biomarkers and dysregulated pathways for SA-AKI.
TABLE 3 (fragment) | Absolute and relative protein expression data from more than 250 large-scale experiments; >500,000 proteomic absolute and relative expression records; relative gene expression data; used to query different types of omics expression data and for data visualization; to view expression data, pathway mapping, and direct connections between proteins and genes; and to provide a background for the exploration of multi-omics expression data.
At present, the comprehensive performance evaluation of machine learning models is limited by research heterogeneity. In addition, because clinical implementation of models is rare, there is an urgent need to determine the clinical impact on different patient populations to ensure generalizability (142). CONCLUSIONS Despite significant advances in our understanding of the pathophysiology and detection markers of SA-AKI, it remains a common and highly hazardous complication of critical illness. The development of multiple omics studies, which have increased the availability of kidney tissue, blood and urine samples, and patient data, has provided a tremendous opportunity to increase our understanding of SA-AKI. As the cost of omics analysis continues to decrease, emerging omics techniques and studies integrating multiple omics approaches can be brought into the clinic to guide the personalized treatment of SA-AKI. Such advances, however, will require a more careful selection of models and research techniques to study the effects of these molecular changes on SA-AKI in greater detail, addressing the common challenge in omics of distinguishing causal from reactive changes in the context of disease.
When the Differences in Frequency Domain are Compensated: Understanding and Defeating Modulated Replay Attacks on Automatic Speech Recognition Automatic speech recognition (ASR) systems have been widely deployed in modern smart devices to provide convenient and diverse voice-controlled services. Since ASR systems are vulnerable to audio replay attacks that can spoof and mislead ASR systems, a number of defense systems have been proposed to identify replayed audio signals based on the speakers' unique acoustic features in the frequency domain. In this paper, we uncover a new type of replay attack called modulated replay attack, which can bypass the existing frequency domain based defense systems. The basic idea is to compensate for the frequency distortion of a given electronic speaker using an inverse filter that is customized to the speaker's transform characteristics. Our experiments on real smart devices confirm the modulated replay attacks can successfully escape the existing detection mechanisms that rely on identifying suspicious features in the frequency domain. To defeat modulated replay attacks, we design and implement a countermeasure named DualGuard. We discover and formally prove that no matter how the replay audio signals could be modulated, the replay attacks will either leave ringing artifacts in the time domain or cause spectrum distortion in the frequency domain. Therefore, by jointly checking suspicious features in both frequency and time domains, DualGuard can successfully detect various replay attacks including the modulated replay attacks. We implement a prototype of DualGuard on a popular voice interactive platform, ReSpeaker Core v2. The experimental results show DualGuard can achieve 98% accuracy on detecting modulated replay attacks. INTRODUCTION Automatic speech recognition (ASR) has been a ubiquitous technique widely used in human-computer interaction systems, such as Google Assistant [5], Amazon Alexa [4], Apple Siri [52], Facebook Portal [45], and Microsoft Cortana [14]. With advanced ASR techniques, these systems take voice commands as inputs and act on them to provide diverse voice-controlled services. People now can directly use voice to unlock mobile phone [20,39], send private messages [2], log in to mobile apps [6], make online payments [48], activate smart home devices [51], and unlock a car door [36]. Although ASR provides many benefits and conveniences, recent studies have found a number of attacks that can effectively spoof and mislead ASR systems [8,11,29,31,38,47,49,53,70,71,74,75]. One of the most powerful and practical attacks is the audio replay attack [8,29,38], where a pre-recorded voice sample collected from a genuine victim is played back to spoof ASR systems. Consequently, it can easily bypass voice authentication and inject voice commands to conduct malicious activities [25]. For example, a mobile device can be unlocked by simply replaying a pre-recorded voice command of its owner [29]. Even worse, the audio replay attack can be easily launched by anyone without specific knowledge in speech processing or other computer techniques. Also, the prevalence of portable recording devices, especially smartphones, makes audio replay attacks one of the most practical threats to ASR systems. 
To defeat audio replay attacks, researchers have proposed a number of mechanisms to detect abnormal frequency features of audio signals, such as Linear Prediction Cepstral Coefficient (LPCC) [33], Mel Frequency Cepstral Coefficient (MFCC) [68], Constant Q Cepstral Coefficients (CQCC) [56], and Mel Wavelet Packet Coefficients (MWPC) [42]. A recent study [65] shows that the amplitude-frequency characteristics in a high-frequency sub-band will change significantly under the replay attack, and thus they can be leveraged to detect the attack. Another study [8] discovers that the signal energy in the low-frequency sub-bands can also be leveraged to distinguish whether the voice comes from a human or an electronic speaker. Moreover, due to the degraded amplitude components caused by the replay noise, the frequency modulation features [21,28,55] can be leveraged for detection. Overall, existing countermeasures are effective in detecting all known replay attacks by checking suspicious features in the frequency domain. In this paper, we present a new replay attack named modulated replay attack, which can generate replay audios with almost the same frequency spectrum as human voices to bypass the existing countermeasures. Inspired by the loudspeaker equalization techniques in auditory research that target improving the sound quality of an audio system [13], the core idea of the modulated replay attack is to compensate for the differences in the frequency domain between replay audios and human voices. Through a measurement study on ASR systems, we find the differences in the frequency domain are caused by the playback electronic speakers, which typically have a non-flat frequency response with non-regular oscillations in the passband. In reality, a speaker can hardly output all frequencies with equal power due to its mechanical design and the crossover nature of speakers that possess more than one driver [10]. Thus, when the genuine human audio is replayed, electronic speakers exert different spectral gains on the frequency spectrum of the replay audio, leading to different degrees of distortion. Typically, electronic speakers suppress the low-frequency components and enhance the high-frequency components of the genuine human audio. By evaluating the transfer characteristic of electronic speakers, we are able to customize a pre-processing inverse filter for any given speaker. By applying the inverse filter before replaying the human audio, the spectral effects caused by the speaker devices can be offset. Consequently, the attacker can produce spoofed audios that are difficult to distinguish from real human voices in the frequency domain. We conduct experiments to demonstrate the feasibility and effectiveness of the modulated replay attack against 8 existing replay detection mechanisms using 6 real speaker devices. The experimental results show that the detection accuracy of most frequency-based countermeasures significantly drops from above 90% to around 10% under our attack, and even the best countermeasure using MWPC [42] drops from above 97% to around 50%. One major reason is that the modulated replay attack is a new type of attack that leverages loudspeaker frequency response compensation. To defeat the modulated replay attack as well as classical replay attacks, we propose a new dual-domain defense method named DualGuard that cross-checks suspicious features in both the time domain and frequency domain, which is another major contribution in this paper.
The key insight of our defense is that it is inevitable for any replay attacks to either leave ringing artifacts [63] in the time domain or cause spectrum distortion in the frequency domain, even if the replay audio signals have been modulated. We formally prove the correctness and universality of our key insight. In the time domain, ringing artifacts will cause spurious oscillations, which generate a large number of local extreme points in replay audio waveforms. DualGuard extracts and leverages those local extrema patterns to train a Support Vector Machine (SVM) classifier that distinguishes modulated replay attacks from human voices. In the frequency domain, spectrum distortion will generate dramatically different power spectrum distributions compared to human voices. Also, DualGuard applies the area under the CDF curve (AUC) of power spectrum distributions to filter out classical replay attacks. Therefore, DualGuard can effectively identify replay audio by performing the checks in two domains. We implement a prototype of DualGuard on a voice interactive platform, ReSpeaker Core v2 [57]. We conduct extensive experiments to evaluate its effectiveness and performance on detecting replay attacks. The experimental results show that DualGuard can achieve about 98% detection accuracy against the modulated replay attack and over 90% detection accuracy against classical replay attacks. Moreover, we show that DualGuard works well under different noisy environments. Particularly, the detection accuracy only decreases by 3.2% on average even with a bad signal-to-noise ratio (SNR) of 40 dB. DualGuard is lightweight and can be deployed to work online in real ASR systems. For example, our testbed platform takes 5.5 ms on average to process a signal segment of 32 ms length using 24.2% CPU and 12.05 MB memory. In summary, our paper makes the following contributions: • We propose a new modulated replay attack against ASR systems, utilizing a specific software-based inverse filter to offset suspicious features in the frequency domain. By compensating the electronic speaker's non-flat frequency response in the passband, modulated replay attacks can bypass existing replay detection mechanisms. • We design a novel defense system named DualGuard to detect all replay attacks including the modulated replay attacks by checking suspicious features in both frequency domain and time domain. We formally prove that replay attacks cannot escape from being detected in both time and frequency domains. • We verify the feasibility and effectiveness of the modulated replay attack through real experiments using multiple speaker devices over existing replay detection mechanisms. We also implement a prototype of DualGuard on a popular voice platform and demonstrate its effectiveness and efficiency in detecting all replay attacks. BACKGROUND In this section, we introduce necessary background information on audio signal processing, ASR systems, and replay attacks. Audio Signal Processing As there are so many technical terms on voice signal processing, we only briefly introduce two necessary terms that are tightly related to our work. Signal Frequency Spectrum. Generally, a signal is represented as a time-domain form x(t), recording the signal amplitude at each time point. Frequency spectrum is another signal representation, providing a way to analyze the signal in the frequency domain. 
Fourier analysis [50] can decompose a time-domain signal as the sum of multiple sinusoidal signals of different frequencies, i.e., x(t) = Σ_n A_n · sin(2π f_n t + φ_n). The n-th sinusoidal signal is called the frequency component with a frequency value of f_n. The set {A_n} is called the amplitude spectrum, which represents the amplitude of each frequency component. {φ_n} is the phase spectrum recording the phase of each component. The frequency spectrum of a signal is the combination of the amplitude and phase spectrum. Frequency Response. Frequency response represents the output frequency and phase spectrum of a system or a device in response to a stimulus signal [46]. When a stimulus signal, typically a single-frequency sine wave, passes through a system, the ratio of the output to input amplitude (i.e., signal gain) varies with the input frequency of the stimulus signal. The amplitude response of the system represents the signal gains at all frequencies. Hence, the output amplitude spectrum of a signal is the product of the input amplitude spectrum and the amplitude response of the system. A system is a high-pass (low-pass) filter if the system has a higher amplitude response in the high-frequency (low-frequency) range. The phase response of a system represents the phase shifts of different frequency signals passing through the system. Figure 1 shows an automatic speech recognition (ASR) system. A recording device such as a microphone captures the audio signals from the air and converts the acoustic vibrations into electrical signals. Then, the analog electrical signals are converted to digital signals for signal processing. The processed digital signals are used for speech recognition or speaker identification in the subsequent processing of the ASR systems. These digital signals are commonly referred to as the genuine audio if the signals are directly collected from live human speakers. ASR systems are vulnerable to replay attacks. The classical replay attack model contains four basic components, i.e., a recording device, an analog-to-digital (A/D) converter, a digital-to-analog (D/A) converter, and a playback device such as a loudspeaker. Compared with the normal speech recognition steps in the ASR systems, the replay attack contains a replay process as shown in Figure 2(a). The attacker first collects the genuine human voice using a recording device and converts the voice to a digital signal with an A/D converter. The digital signal can be stored on a disk device in a lossless compression format or be spread through the Internet. After that, the attacker plays back the digital signal near the targeted ASR system, which spoofs the system into providing the expected services. In the playback process, the stored digital signal is converted to an analog electric signal by a D/A converter. Then, the electric signal is played as an acoustic wave by a playback device.
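To make the notions of amplitude spectrum, phase spectrum, and amplitude response concrete, the following minimal sketch (our illustration, not code from the systems discussed above) computes the two spectra of a toy two-tone signal with NumPy and estimates a device's gain at one test frequency as the ratio of output to input amplitude. The sampling rate, tone frequencies, and the toy "device" are illustrative assumptions.

```python
import numpy as np

fs = 16000                      # sampling rate (Hz), illustrative
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of signal

# A toy signal: x(t) = sum_n A_n * sin(2*pi*f_n*t + phi_n) with two components
x = 1.0 * np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t + 0.3)

# Frequency spectrum via FFT: amplitude spectrum |X(k)| and phase spectrum angle(X(k))
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)   # frequency of each bin (Hz)
amp_spectrum = np.abs(X)
phase_spectrum = np.angle(X)

# Amplitude response of a device at a single test frequency:
# gain = output amplitude / input amplitude (here a toy device that halves the signal)
device_output = 0.5 * x
bin_200 = np.argmin(np.abs(freqs - 200))
gain_at_200Hz = np.abs(np.fft.rfft(device_output))[bin_200] / amp_spectrum[bin_200]
print(f"estimated gain at 200 Hz: {gain_at_200Hz:.2f}")   # ~0.5
```

Sweeping the test frequency in the same way would yield the gain at every frequency, i.e., the full amplitude response discussed above.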
Impacts of Replay Components Although classical replay attacks can achieve a high success rate in spoofing ASR systems, some acoustic features can still be utilized to distinguish the replay audio from the genuine audio. As shown in Figure 2(a), the main difference between these two types of audio is the additional replay process that the replay audio goes through. We study the impacts from four components involved in the replay process, namely, the recording device, A/D converter, D/A converter, and the playback device. We observe that the impacts from the first three components are negligible, and the most significant impacts on replay signals come from the playback device. First, an attacker needs to use a recording device to collect the voice command. The main factors that influence the recording process include the non-linearity of modern microphones and the ambient noise. However, the nonlinear frequency range of a microphone is much higher than the human speech frequency. When it comes to the ambient noise, it is hard to tell if the noise is introduced during the attacker's recording process or the ASR recording phase. Second, when the A/D converter transforms the signal into a digital form, it may cause the information loss of the analog signal due to the sampling and quantization operations. However, this effect is limited since the modern recording devices have a higher sampling rate (not less than 44.1 kHz) and a higher bit depth (usually higher than 16-bit resolution) than the old-fashioned recorders. Third, the signal can be transformed back into the analog form by the D/A converter, where a low-pass filter is used to eliminate the high-frequency components caused by sampling. As the sampling frequency is at least 10 times larger than the speech frequency, the filter in the D/A converter has little effect on the audio signals. Finally, we find the most significant effects on the replay signal are caused by the playback device. Because of the shape and volume, the acoustic characteristics of loudspeakers are greatly different from those of human vocal organs. Due to the resonance of the speaker enclosure, the voice from loudspeakers contains low-power "additive" noise. These resonant frequency components are typically within 20-60 Hz that human cannot produce [8]. Another important feature of loudspeakers is the low-frequency response distortion due to the limited size of loudspeakers. Within the speech frequency range, the amplitude response of a loudspeaker is a highpass filter with a cut-off frequency typically near 500 Hz [59]. As a result, the power of low-frequency components will be attenuated rapidly when a voice signal passes through a loudspeaker, which is the "multiplicative" characteristic of speakers in human speech frequency range [46]. Even though the genuine audio and the replay audio have the same fundamental frequency and harmonic frequencies, the power distributions of frequency components remain different. The low-frequency components of replay audio have a smaller power proportion compared with those of genuine audio. Because the different power distributions lead to different timbre, the voice signals sound different even with the same loudness and fundamental frequency. Attack Overview Based on our observation that existing defenses utilize the amplitude spectrum to detect replay attacks, the key idea of our proposed attack is to modulate the voice signal so that the replay audio has the same amplitude spectrum as the genuine audio. 
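To see why such compensation is needed, the sketch below applies a simple first-order high-pass amplitude response with the roughly 500 Hz cut-off mentioned above to a voiced-speech-like toy spectrum. The model and all numbers are illustrative assumptions rather than measurements of any real loudspeaker, but they show how the low-frequency power proportion shrinks after replay.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.2, 1 / fs)
# Voiced-speech-like toy signal: a 150 Hz fundamental plus decaying harmonics
x = sum((1.0 / h) * np.sin(2 * np.pi * 150 * h * t) for h in range(1, 9))

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# First-order high-pass amplitude response with a 500 Hz cut-off (illustrative model)
fc = 500.0
H = (freqs / fc) / np.sqrt(1.0 + (freqs / fc) ** 2)

# "Multiplicative" speaker effect: output amplitude spectrum = input spectrum * response
Y_amp = np.abs(X) * H

low = freqs < 500
for name, spec in [("genuine", np.abs(X)), ("replayed", Y_amp)]:
    frac = spec[low].sum() / spec.sum()
    print(f"{name}: {100 * frac:.1f}% of spectral amplitude below 500 Hz")
```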
As shown in Figure 2(b), the most critical component is the modulation processor between the A/D and D/A conversion. The modulation processor can compensate for the amplitude spectrum distortion caused by the replay process. By adding the modulation processor, we can deal with the modulated replay process as an all-pass filter, so that the modulated replay audio will have an equivalent processing flow as the genuine audio. In the classical replay process, the recording device and the A/D and D/A conversion have limited effects on the replay audio. Thus, our modulation processor mainly targets the playback device, specifically, the amplitude response of it. There are many types of playback devices, such as mobile phones, MP3 players, and remote IoT devices in the victim's home. We acquire the amplitude response of a playback device by measuring the output spectrum in response to different frequency inputs. If the playback device is under remote control that the amplitude response cannot be measured directly, we can estimate an approximate response from the same or similar devices. After acquiring the amplitude response of the playback device, we design an inverse filter that is a key component in the modulation processor to compensate for the distortion of the signal spectrum. After the spectrum modulation, the modulated replay audio can bypass existing frequency-based defense. In our modulated replay attack, the modulation processor only deals with the voice signals in digital form. Therefore, the inverse filter is designed by digital signal processing (DSP) techniques. The modulated signals can be stored or spread through the Internet to launch a remote replay attack. Modulation Processor The structure of the modulation processor is shown in Figure 3. The recorded audio is a digital signal collected from the genuine human voice. The audio is then transformed from the time domain to the frequency domain by fast Fourier transform algorithm. The FFT output is a complex frequency spectrum that can be divided into two parts: (1) the amplitude spectrum that records the amplitude for each frequency component, and (2) the phase spectrum that records the phase angle for each frequency component. We only process the amplitude spectrum in the modulation processor for two reasons. One reason is that both the ASR systems and the replay detection systems extract signal features from the amplitude spectrum. Another reason is that the human ear is less sensitive to the sound phase compared to the sound amplitude. Therefore, the phase spectrum will remain the same in the modulation processor. The inverse filter, estimated based on the speaker properties, is the key component in the modulation processor. Specifically, the inverse filter is an engine in the spectrum filtering unit, transforming the amplitude spectrum to a compensated spectrum. By the spectrum filtering, the inverse filter can offset the distortion effect caused by the playback device. Therefore, the amplitude responses of the inverse filter and the loudspeaker are complementary, because the combination of these two transfer functions is a constant function that represents an all-pass filter. After processing the amplitude spectrum with the inverse filter, we can obtain a compensated spectrum that has a better frequency characteristic in the low-frequency range. 
With both the compensated spectrum and the phase spectrum, the inverse fast Fourier transform (iFFT) is utilized to convert the reconstructed signal from the frequency domain to the time domain. Finally, we obtain a modulated audio in the time domain. The modulated audio is then stored in a digital format, ready to be used to launch the modulated replay attack. Inverse Filter Estimation The inverse filter is estimated from the speaker properties. Therefore, it is necessary to measure the amplitude response of the loudspeaker directly. If direct measurement is not possible, the amplitude response can be estimated by measuring speakers of the same or a similar model. When measuring the speaker properties, we set a single-frequency test signal as the speaker input and record the output audio, as shown in Figure 4(a). By checking the output amplitude spectrum, we can get the output amplitude of the corresponding frequency. The amplitude response at that frequency is the output amplitude divided by the input amplitude. By changing the input frequency of the test signal, we can obtain the amplitude response over the entire speech frequency range. Because the test frequencies of the input signals are discrete, the amplitude response is a series of discrete data points, as shown in Figure 4(b). To obtain a continuous response function over the entire frequency range, we fill in the missing data by curve fitting. Cubic spline interpolation [24] is used to construct a continuous and smooth response curve H(f) with multiple polynomials of degree 3. As the inverse filter is implemented on digital signals, we need to convert the continuous response function into a digital form. After the Fourier transform, the signal spectrum has a fixed frequency interval ∆f denoting the frequency resolution. Hence, we sample the continuous response function at the same frequency interval and obtain a finer-grained amplitude response. The digital amplitude response of the electronic speaker is denoted as H(k). After obtaining the speaker amplitude response, we can design the inverse filter by the complementary principle. The amplitude responses of the inverse filter and the speaker cancel each other, minimizing the impact of the replay process. Hence, the inverse filter H⁻¹(k) should satisfy the all-pass condition H⁻¹(k) · H(k) = C when H(k) ≠ 0, where C is a positive constant, typically 1. In addition, if H(k) = 0 for some k, H⁻¹(k) is also set to 0. Another speaker property is the sub-bass (0-60 Hz) energy, which can be generated by loudspeakers but not by humans. The sub-bass features depend on the speaker model and enclosure structure [8]. Although attackers may pick speakers that minimize the sub-bass energy, we still need to minimize the possibility of being detected by the sub-bass features. Hence, we optimize the inverse filter in two ways. First, we set H⁻¹(k) = 0 when the frequency is within 0-60 Hz, because we do not want to amplify the existing noise in the sub-bass range. Second, we enhance the inverse filter response in the speech frequency range so as to decrease the relative proportion of the additive sub-bass energy. With these optimizations, we can keep the sub-bass energy balance metric below the detection threshold. By applying the inverse filter before the playback device, we can compensate for the unwanted replay effects caused by the electronic speakers.
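A minimal sketch of this estimation step, together with the spectrum filtering it feeds (detailed in the next subsection), is given below in Python. The measured frequency/response pairs are placeholder values standing in for the single-frequency test measurements, and the helper names are ours, not the paper's; the eps regularization mirrors the choice described in Appendix C.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Placeholder measurements: test frequencies (Hz) and output/input amplitude ratios.
test_freqs = np.array([100, 200, 400, 800, 1600, 3200, 6400], dtype=float)
measured_H = np.array([0.10, 0.30, 0.70, 0.95, 1.00, 0.98, 0.90])

def inverse_filter(n_fft: int, fs: float, C: float = 1.0, eps: float = 1e-3) -> np.ndarray:
    """H^-1(k) sampled on the FFT grid (resolution fs / n_fft), with the sub-bass band zeroed."""
    freqs = np.fft.rfftfreq(n_fft, 1 / fs)
    H = np.clip(CubicSpline(test_freqs, measured_H)(freqs), 0.0, None)  # fitted response H(f)
    H_inv = C / (H + eps)                 # regularised inversion (see Appendix C)
    H_inv[freqs < 60] = 0.0               # do not amplify the 0-60 Hz sub-bass band
    return H_inv

def modulate(recording: np.ndarray, fs: float) -> np.ndarray:
    """Compensate only the amplitude spectrum; keep the phase spectrum untouched."""
    L = len(recording)
    n_fft = 1 << (L - 1).bit_length()     # smallest power of two >= L
    spec = np.fft.rfft(recording, n_fft)
    mag, phase = np.abs(spec), np.angle(spec)
    compensated = mag * inverse_filter(n_fft, fs)
    return np.fft.irfft(compensated * np.exp(1j * phase), n_fft)[:L]

# Usage: modulated = modulate(recorded_voice, fs=48_000); store it losslessly for replay.
```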
Spectrum Processing The spectrum processing involves three phases: the time-frequency domain conversion, the amplitude spectrum filtering, and the modulated signal reconstruction. Time-Frequency Domain Conversion. First, we need to convert the recorded audio from the time domain into the frequency domain, because it is easier to filter the signals in the frequency domain. For an L-length signal segment, we pad the signal with zeros so that the total signal length becomes N, where N is the smallest power of 2 greater than or equal to L. The extended signal is denoted as x(n), n = 0, 1, ..., N − 1. Then we convert the time-domain signal x(n) into the frequency-domain representation X(k) through the fast Fourier transform algorithm. X(k) = Σ_{n=0}^{N−1} x(n) · e^{−i2πkn/N}, k = 0, 1, ..., N − 1, is the frequency spectrum of the original signal in the form of complex numbers. The frequency resolution is defined as the frequency interval ∆f = f_s/N, where f_s is the sampling rate of the recorded audio. Then we split the complex frequency spectrum into two parts. The magnitude spectrum X_m(k) = |X(k)| represents the signal amplitude of the different frequency components k · ∆f, k = 0, 1, ..., N − 1. The phase spectrum X_p(k) = ∠X(k), in radians, is independent of the amplitude information and represents where the frequency components lie in time. Spectrum Filtering. The inverse filter is applied only to the amplitude spectrum; the phase spectrum remains unchanged. The effect of applying a filter is to change the shape of the original amplitude spectrum. According to the system response theory, the compensated amplitude spectrum is the product of the input amplitude spectrum and the amplitude response of the inverse filter. Hence, after modulating the signal with the inverse filter, the compensated amplitude spectrum is Y_m(k) = X_m(k) · H⁻¹(k). Note that the amplitude spectrum of the speaker output is also the product of the input amplitude spectrum and the speaker amplitude response. Therefore, the amplitude spectrum of the modulated replay audio will be S_m(k) = Y_m(k) · H(k) = C · X_m(k). We can see that the power distribution of the frequency components in the modulated replay audio will be the same as that in the genuine audio, making it harder for ASR systems to detect the replay attack. Modulated Signal Reconstruction. After modifying the amplitude spectrum to compensate for the energy loss in the subsequent playback phase, we need to reconstruct the signal in the frequency domain. The modulated signal has the compensated amplitude spectrum and retains the original phase spectrum. Therefore, the complex frequency spectrum is reconstructed from the amplitude Y_m(k) and the phase angle X_p(k). That means the frequency spectrum of the modulated signal should be Y(k) = Y_m(k) · e^{iX_p(k)}, according to the exponential form of complex numbers. After reconstructing the modulated signal in the frequency domain, the complex frequency spectrum Y(k) is converted back into the time domain by the inverse fast Fourier transform algorithm. To ensure that the length of the modulated audio is the same as that of the original audio, the last (N − L) data points in y(n) are discarded. Hence, the total signal length of the modulated audio is L. Then, the final modulated audio is saved in a digital format to complete the replay attack. COUNTERMEASURE: DUAL-DOMAIN DETECTION In this section, we propose a countermeasure called DualGuard against the modulated replay attack.
Due to the similarity of the amplitude spectrum between the modulated replay signals and the genuine signals, the defense is conducted not only in the frequency domain, but also in the time domain. Defense Overview In our scheme, the countermeasure contains two inseparable parts: a frequency-domain defense and a time-domain defense. A voice command must pass the defenses in both the time and frequency domains before it can be accepted by ASR systems. The frequency-domain defense has proved effective against classical replay attacks. Because of the frequency spectrum distortion caused by the replay process, we use the power spectrum distribution (timbre) to distinguish the classical replay audio. The area under the CDF curve (AUC) of the power spectrum distribution is extracted as the key frequency-domain feature. We find that the AUC value of the genuine audio is statistically larger than that of the replay audio. By utilizing the frequency-domain defense, we filter out the threat from classical replay attacks. The modulated replay audio has the same amplitude spectrum as the genuine audio. Hence, we need to detect the modulated replay audio in other domains. In the phase domain, there is no useful information in the phase spectrum, which records the starting points of each frequency component on the time axis. But in the time domain, we discover and formally prove the following theorem. Theorem. There are inevitably spurious oscillations (ringing artifacts) in the modulated replay audio. The amplitude of the ringing artifacts is restricted by the signal amplitude spectrum and the absolute phase shifts. In contrast, in the genuine audio and the classical replay audio, the waveform is statistically smooth. We define a new metric called the local extrema ratio to quantitatively describe the strength of the ringing artifacts. We utilize local extrema ratios at different granularities as the key time-domain feature and filter out modulated replay attacks using an SVM classifier. Time-domain Defense Because of the difficulty of detecting the modulated replay audio via frequency and phase features, we seek a defense in the time domain. By our observations and mathematical proof (see Appendix A), we find that there are small ringing artifacts in the time-domain signals when the modulated replay attack is performed. Although these time-domain artifacts correspond to high-frequency components, the power of the artifacts is too small to be detected in the frequency domain because their maximum amplitude is constrained by Equation (11). In the frequency domain, the ringing artifacts can easily be mistaken for ambient noise. Hence, we propose a time-domain defense method that utilizes the pattern of small ringing artifacts in the modulated replay audio. The ringing artifact pattern is a robust feature that cannot be further compensated by a higher-order filter. The ringing artifacts are caused by physical properties of the playback device, not by the modulation process itself. When we modulate the recorded audio, there are no ringing artifacts in the processed audio. The ringing artifacts only occur after replaying the processed audio, thus becoming an inevitable feature in the modulated replay audio. In order to describe the ringing artifacts in the time-domain signals, we take the local extrema ratio as the metric. We first give a definition of local extrema.
Definition: In a signal segment y, if a sampling point y_i is the maximum value or the minimum value in the (2r+1)-length window [y_{i−r}, ..., y_{i+r}], then y_i is a local extremum in the time-domain signal. Note that if the index of a window element is out of bounds, we pad the window with the nearest effective element. The local extrema ratio (LER) is defined as the ratio of the number of local extrema to the total signal length. Given an input signal segment, the local extrema ratio depends on the window parameter r. When the window size is small, the LER calculation is in fine granularity and reflects the small ringing artifacts in the time-domain signals. When the window size is large, the LER shows the overall change trend of the signal. The modulated replay signals and the genuine signals have different patterns of local extrema ratios at different granularities. We can detect the modulated replay attack by identifying the LER patterns for different parameters r ∈ [1, r_max]. Algorithm 1 shows the function for obtaining the local extrema patterns and detecting the modulated replay audio. In Figure 5(a), under the coarse granularity (larger window size), the number of local extrema does not differ much between modulated replay audio and genuine audio. However, in Figure 5(b), the situation is different under the fine granularity (smaller window size). Due to the ringing artifacts, small spurious oscillations occur in the modulated replay audio. The number of local extrema in modulated replay audio is significantly larger than that in genuine audio, which becomes a critical feature that helps us detect the modulated replay attack. A Support Vector Machine (SVM) classifier is trained to distinguish modulated replay audio by examining the local extrema pattern (LEP) at different granularities. The time-domain attack detection is shown in Algorithm 1. If the audio does not come from the modulated replay attack, it becomes the candidate audio for the frequency-domain check. Frequency-domain Defense The frequency-domain defense is used to counter the classic replay attack. It is based on the noticeably different timbre of voices produced by humans and by electronic speakers. In the replay model, each component frequency in the genuine audio is exactly the same as that in the replay audio, no matter whether it is the fundamental frequency or a harmonic. For example, if the fundamental frequency of the genuine audio is 500 Hz, the replay audio will also have a fundamental frequency of 500 Hz. However, even with the same component frequencies, the genuine human voice and the replay voice sound different to our perception. The main reason is that the power distributions of the frequency components, namely the timbre, are different. For humans, the voice is produced by the phonatory organs. The typical sound frequency for humans is within the range from 85 Hz to 4 kHz, where the low-frequency components are dominant. For electronic speakers, there is an acoustic defect in the low-frequency components due to the speaker structure, materials, and limited size. The power of the replay signals decays dramatically in the low-frequency range, especially under 500 Hz. Meanwhile, the human fundamental frequency range is 64-523 Hz for men and 160-1200 Hz for women. Hence, the electronic speakers attenuate the power in the human fundamental frequency range because of the speaker properties.
With respect to the power distribution, the power of the genuine audio is mainly concentrated in the low-frequency range, while the power of the replay audio is more spread over the speech frequency range. Our frequency-domain defense utilizes these timbre features to defeat the classic replay attack. Timbre is described by the power distribution of the different frequency components, so it is necessary to define a mathematical description of the timbre. When an ASR system captures a voice signal from the air with a sampling rate of f_s, we first obtain the amplitude spectrum of the signal through an N-point fast Fourier transform. The signal amplitude spectrum is denoted as K(n), n = 0, ..., N − 1, with the frequency resolution ∆f = f_s/N. The frequency value of the i-th component is i · ∆f, while its amplitude is K(i). Hence, the signal power spectrum is K²(n), and the power spectral density (PSD) of the frequency components is defined as D(n) = K²(n) / Σ_{i=0}^{N−1} K²(i). To distinguish the different power distributions, we measure the cumulative distribution function (CDF) of the power spectral density, A(n) = Σ_{i=0}^{n} D(i). A(n) is a monotonically increasing function with a range of [0, 1]. As shown in Figure 6, the power spectrum CDFs of genuine audios and replay audios are quite different. For genuine audios, the power is concentrated in the low-frequency range, so the CDF rises more quickly. For replay audios, the CDF grows more slowly due to the more spread-out power spectrum. We utilize this CDF characteristic to distinguish replay audios from genuine audios. We use the area under the CDF curve (AUC) to verify and filter out the classic replay audio. The AUC is calculated as Σ_n A(n)/N. If the AUC value is less than a specific threshold A_TH ∈ (0, 1), the audio is flagged as a classic replay attack. We show the frequency-domain attack detection in Algorithm 2. Security Analysis We discover and prove that there are inevitably either ringing artifacts in the time domain or spectrum distortion in the frequency domain, regardless of whether the replay signals are modulated. For the frequency-domain defense, the principle comes from the difference in the power spectrum distributions. It is known that human speech is not a single-frequency signal, but a signal with a fundamental frequency f and several harmonics nf, n ≥ 2. Within the human voice frequency range, the speaker response differs greatly between the low-frequency band and the high-frequency band, which means H(f) ≠ H(nf). As a result, the power ratio of the genuine audio, A(f)/A(nf), is different from that of the corresponding replay audio, (H(f) · A(f))/(H(nf) · A(nf)). The different power ratios cause the difference in the power spectrum distributions. For the time-domain defense, we can prove that there are inevitably spurious oscillations (ringing artifacts) in the modulated replay audio. The critical factor is the inevitable phase shifts that cannot be accurately measured (see details in Appendix A). Although the amplitude spectra are the same, the signal phase spectra can be different; the relationship between the amplitude spectrum and the time-domain signal is one-to-many. Moreover, we cannot compensate for the phase shifts due to the limited measurement accuracy. Even a small phase error can cause ringing artifacts in the time domain. That is why we need to check the signals in both the frequency domain and the time domain.
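For concreteness, the frequency-domain check just described (the AUC of the power-spectrum CDF) can be sketched as follows in Python. The decision threshold below is a placeholder; in practice it is selected from the measured AUC distributions of genuine and replay audios, and the sketch works on the one-sided spectrum for simplicity.

```python
# Minimal sketch of the AUC-based frequency-domain check; threshold is a placeholder.
import numpy as np

def auc_of_power_cdf(audio: np.ndarray) -> float:
    K = np.abs(np.fft.rfft(audio))      # amplitude spectrum K(n)
    D = K ** 2 / np.sum(K ** 2)         # power spectral density D(n)
    A = np.cumsum(D)                    # CDF A(n) of the power distribution
    return A.mean()                     # area under the CDF curve (AUC)

def is_classic_replay(audio: np.ndarray, threshold: float = 0.8) -> bool:
    """Flag the audio as a classic replay if its AUC falls below the threshold."""
    return auc_of_power_cdf(audio) < threshold
```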
Besides, a high local extrema ratio in the modulated replay audio can also result from several other factors, namely, measurement error, the FFT truncation effect, and time-domain splicing. First, the measurement involves exponential computation, where round-off errors can accumulate so that the amplitude estimation is inaccurate, eventually introducing parasitic vibrations in the modulated replay signals. Second, the real FFT operation works on a finite-length signal, which is equivalent to applying a window function to an infinite-length signal. The window function in the time domain corresponds to a sinc(x) function convolved in the frequency domain, causing the frequency spectrum to expand and overlap. Third, when splicing the reconstructed signals into new audio, there is no guarantee of continuity at the starting and ending splice points. A discontinuous splice point can lead to ringing artifacts due to the Gibbs phenomenon [77]. Moreover, ringing artifacts cannot be further compensated by a higher-order filter since they only occur after the replay process rather than after the modulation process. Iterative filtering schemes can reduce ringing artifacts in image restoration, where they are mainly caused by overshoot and oscillations in the step response of an image filter [63]; however, such schemes are not suitable for speech signals because here the ringing artifacts are introduced by hardware properties. Even if attackers manage to reduce the ringing artifacts to a certain extent, the time-domain defense can still detect modulated replay audio. This is because our method does not rely on an amplitude threshold for the ringing artifacts. Although the amplitude of the ringing artifacts may decrease, the local extrema cannot be eliminated. The time-domain defense uses local extrema as features so that even small ringing artifacts can be detected. EVALUATION In this section, we conduct experiments on a real testbed to evaluate the modulated replay attack and our defense. Experiment Setup We use a TASCAM DR-40 digital recorder for collecting the voice signals. The sampling rate of the digital recorder is set to 96 kHz by default. We conduct real experiments with a variety of common electronic devices, including an iPhone X, an iPad Pro, a Mi Phone 4, a Google Nexus 5, a Bose Soundlink Micro, and a Samsung UN65H6203 Smart TV. Figure 7 shows the testbed in our experiments. We aim to demonstrate that both our attack and our countermeasure scheme can be applied to various speaker devices. To generate modulated replay audios, we use MATLAB to estimate the amplitude response and design the inverse filter for the different speakers. Due to space constraints, we put the details in Appendix C. ASVspoof 2017 [29] and ASVspoof 2019 [61] are two popular databases for replay attacks. However, we cannot convert the replay attack samples in these two databases into modulated replay attacks, due to the lack of information about the replay devices. Instead, to conduct a fair comparison between modulated replay audio and classic replay audio, we collect an in-house dataset with 6 replay devices. For each of these replay devices, the dataset contains 222 modulated replay audios as well as 222 corresponding classic replay audios. All audio signals are collected in a quiet lab environment. We use 10-fold cross-validation accuracy as a metric since it reflects the overall performance of the system.
Moreover, we implement the prototype of our defense DualGuard in C++ and run it on a popular voice interactive platform, i.e., ReSpeaker Core v2. Effectiveness of Modulated Replay Attacks We conduct experiments with the modulated replay attack. The attack leverages the inverse filter to generate synthetic audio that has a frequency spectrum similar to that of the genuine audio. The modulated signals are generated in the MATLAB environment and stored in a lossless format. They are then transferred to replay devices for performing attacks. Figure 8 shows the amplitude spectrum of the signals during the modulated replay process in our experiments. Here, the results are collected using the iPhone device; we obtain similar results with the other devices. Figure 8(a) illustrates the genuine audio that is captured directly from a live human in a quiet room environment. The energy of the genuine audio is mainly concentrated in the low-frequency range. Figure 8(b) shows the spectrum of the direct replay audio, which is captured from the direct playback of the genuine audio. Due to the response properties of the speaker devices, the high-frequency components in the direct replay audio have a higher relative proportion compared with those in the genuine audio. This spectrum difference is a vital feature used by various classic replay detection methods. Figure 8(c) shows the spectrum of the modulated replay audio collected by the ASR system. We can see that the low-frequency energy is greatly enhanced to cope with the speaker effects. Thus, the spectrum of the modulated replay audio is very similar to that of the genuine audio in Figure 8(a). Moreover, we quantify the similarity between the modulated replay audio and the genuine audio using the L2 norm comparison [43], which has been widely used to compare the spectra of audio signals. It is defined as ∥K_1 − K_2∥_2^2, where K_1 and K_2 are two normalized spectrum distributions of audio and ∥·∥_2^2 is the squared Euclidean distance. The smaller the L2 norm is, the more similar the two audios are. We measure the similarity values on 660 pairs of audio samples; the average similarity between the modulated replay audio and the genuine audio is 1.768 × 10^−4. In contrast, the average similarity between the direct replay audio and the genuine audio is 15.71 × 10^−4, which is much larger. The results demonstrate that the modulated replay audio is much more similar to the genuine audio. Furthermore, we re-implement 8 popular detection methods that can be divided into three categories, namely, Cepstral Coefficients Features based defenses, High-frequency Features based defenses, and Low-frequency Features based defenses. We apply these defense methods to detect both direct replay attacks and modulated replay attacks on the 6 electronic devices, and the results in Table 1 show that our modulated replay attacks can bypass all these countermeasures. Bypassing Cepstral Coefficients Features Based Defense. The most popular way to detect replay attacks is based on cepstral coefficient features extracted from the signal amplitude spectrum. These cepstral coefficient features include CQCC [56], MFCC [68], LPCC [33], and MWPC [42]. Our experiments show that the accuracy of detecting direct replay attacks is always over 88%. However, Table 1 shows the accuracy drops significantly to 1.80%∼58.56% when detecting the modulated replay audio.
The results indicate that our modulated attack can bypass existing cepstral-coefficient based detection methods. Bypassing High-frequency Features Based Defense. As shown in Figure 8(a) and Figure 8(b), the high-frequency spectral features of the genuine audio and the replay audio are significantly different. Therefore, a number of methods [27,55,65] detect replay attacks using high-frequency features, including Sub-band Energy [27], HF-CQCC [65], and FM-AM [55]. Table 1 shows they can achieve high accuracy in detecting the direct replay attack, e.g., 96.43%. However, they fail to detect the modulated attack due to the frequency compensation. The highest accuracy in detecting the modulated replay attack is only 38.74%. Bypassing Low-frequency Features Based Defense. Besides detection based on high-frequency features, a recent study [8] provides an effective method, i.e., Sub-bass, to detect replay attacks based on low-frequency features. It defines an energy balance metric, which indicates the energy ratio of the sub-bass range to the low-frequency range. Our experiments show that it can achieve 99.1% accuracy in detecting direct replay attacks with this metric. However, the accuracy drops significantly to less than 8% when detecting modulated replay attacks. Among the 8 detection methods above, MWPC performs better than the other techniques. This is because MWPC can capture partial temporal information using the mel-scale Wavelet Packet Transform (WPT) [64], which handles the temporal signals on different scales. HF-CQCC can capture the high-frequency difference in the signals. Such partial temporal information and high-frequency differences provide more useful features for the detection of replay attacks. Thus, MWPC and HF-CQCC perform better than the other techniques. In addition, Table 1 also shows the experimental results of the modulated replay attack with the six loudspeaker devices. In theory, whatever frequency response a speaker has, we can always find the corresponding inverse filter to counteract the effect of the replay process. As a result, the modulated replay attack does not depend on any specific type of speaker. The experimental results in Table 1 validate our attack design. For any specific detection method, the modulated replay attack exhibits similar performance when leveraging different speaker devices. This property is critical for real-world replay attacks, because it demonstrates that the modulated replay attack is independent of the loudspeaker. An attacker can utilize any common speaker to perform the modulated replay attack against ASR systems. Effectiveness of Dual-Domain Detection Our defense, i.e., DualGuard, contains two parts: time-domain detection and frequency-domain detection. The time-domain detection mainly aims to identify modulated replay attacks, and the frequency-domain detection mainly aims to identify direct replay attacks. We show the experimental results for these two parts, respectively. Time-Domain Detection. We conduct experiments to evaluate the accuracy of DualGuard in detecting modulated replay attacks in the time domain. As the local extrema ratio (LER) is the key feature for detecting replay attacks in the time domain, we first measure the LER values of both modulated replay audios and genuine audios from the 6 different speaker devices. Figure 9 illustrates the change of the LER value from fine granularity (small window size) to coarse granularity (large window size).
We can see that the LER decreases as the window size increases. When the window size is small, the LER value of the modulated replay audio is statistically larger than that of the genuine audio, which is the main difference between these two types of audios. As mentioned in Section 4.2, the relatively high LER value results from the ringing artifacts in the modulated replay audio. The results demonstrate the feasibility of detecting the modulated replay attack in the time domain with the LER patterns. We conduct experiments to evaluate the detection accuracy in the time domain with Algorithm 1. As shown in Figure 9, there are no significant differences between the LERs of the genuine audio and the modulated replay audio when the window size reaches 20. Thus, we choose a 20-dimensional tuple {LER_1, LER_2, ..., LER_20} in our algorithm as the feature to detect the modulated replay attack. Here, LER_r denotes the LER value with window size r. The detection accuracy of DualGuard on modulated replay attacks is shown in Table 1. We can see that DualGuard can accurately identify modulated replay attacks in the time domain. The detection accuracy for modulated replay attacks always exceeds 97% with different speakers. We also calculate the false positive rate of our method in detecting modulated replay attacks; it always remains below 8%. The results demonstrate the generalization ability of DualGuard across different speakers. This generalization stems from the robust artifact properties of the time-domain signals (see Appendix A); our time-domain defense is independent of the speakers. The main contribution of our time-domain defense lies in the key feature extraction. For experiments comparing different classifiers, we refer the reader to Appendix D. In our defense, we choose SVM due to its high performance and easy deployment. Frequency-Domain Detection. We conduct experiments to evaluate the accuracy of DualGuard in detecting direct replay attacks in the frequency domain. To determine the decision threshold of Algorithm 2, we first obtain the area under the CDF curve (AUC) from the amplitude spectrum of the audios. Figure 10 shows the AUC distributions for both genuine audios and direct replay audios. We can see that the AUC values of genuine audios are concentrated and close to 1, which indicates that the low-frequency energy is dominant. In contrast, the AUC values of direct replay audios are more spread out and smaller, which is consistent with the more distributed spectrum of replay audios. As shown in Figure 10, the best decision threshold is 0.817 since it minimizes the classification errors between genuine audios and replay audios. Table 1 shows the detection accuracy of DualGuard on direct replay attacks using Algorithm 2 with a decision threshold of 0.817. The accuracy with different speakers always exceeds 89%. We also calculate the false positive rate of our method in detecting direct replay attacks; it always remains below 5%. Moreover, we conduct experiments with the ASVspoof 2017 and 2019 datasets to show that DualGuard can effectively detect classic replay attacks; DualGuard achieves 87.13% and 83.80% accuracy on these two datasets, respectively. Finally, to demonstrate the necessity of detecting replay attacks in both domains, we train another model only with frequency features on a mix of genuine audios, direct replay audios, and modulated replay audios.
Our experimental results show that the accuracy can only reach 63.36%, due to the great spectral similarity between genuine audios and modulated replay audios in the frequency domain. Therefore, the dual-domain detection is necessary to accurately detect both types of replay attacks. Robustness of Dual-Domain Detection We conduct experiments to show the robustness of our dual-domain detection under different sampling rates, different recording devices, different speaker devices, and different noisy environments. Impact from Genuine Audio Sampling Rate. We evaluate the impact of the sampling rate used by attackers to record the initial human voice. We first use the TASCAM DR-40 digital recorder with f_s = 96 kHz to capture the initial human voice, and also use an iPhone X with f_s = 48 kHz. For both sampling rates, the average detection accuracy of DualGuard on modulated replay attacks is 98.05%. That is because the sampling rate used by attackers only changes the spectral resolution in the modulation process. The waveform of the modulated replay audios is not changed, since the D/A converter converts the modulated signals into analog form before the replay process. Impact from ASR Sampling Rate. We conduct experiments on different recording devices with different sampling rates. In our experiments, there are three sampling-rate settings for our recording devices: (S1) TASCAM DR-40 with 96 kHz, (S2) TASCAM DR-40 with 48 kHz, and (S3) a mobile phone (Xiaomi 4) with 44.1 kHz. Figure 11(a) shows the experimental results. We can see that the detection accuracy generally increases with the sampling rate. We find that although changing the sampling rate has little effect on the frequency-domain detection, it significantly affects the time-domain detection due to the change of the sampling interval. Note that a smaller sampling interval means a finer detection granularity of the local extrema ratios, which increases the detection accuracy. Moreover, as shown in Figure 11(a), DualGuard still achieves around 85% detection accuracy in the worst case where the sampling rate is 44.1 kHz. We note that 44.1 kHz is the minimum sampling rate of common electronic devices [23]. Therefore, DualGuard can achieve good detection accuracy with the sampling rates of common devices. Impact from Different Recording Devices. In Figure 11(a), the detection accuracy does not change significantly when we use different recorders with the same sampling rate. The detection accuracy changes by less than 2% across different recording devices when the sampling rate is 48 kHz or 44.1 kHz. The results show that DualGuard can be applied to different everyday recording devices. Impact from Different Noisy Environments. To test the detection accuracy under different noisy environments, we introduce noise factors in our experiments. We test our detection method under three scenarios: (1) in a quiet environment, (2) in a noisy environment with a signal-to-noise ratio (SNR) of 60 dB, and (3) in a noisy environment with an SNR of 40 dB. The additive noise signal is produced by a loudspeaker that plays a pre-prepared Gaussian white noise signal, simulating real-world noise. The noise is mixed with the test signals at the specified SNR. Figure 11(b) illustrates the detection accuracy under the various noise conditions. We can see that the impact of noise is limited. In particular, the detection accuracy remains unchanged when the SNR is 60 dB.
When the SNR drops to 40 dB, the detection accuracy decreases by 3.2% on average. The impact of noise is mainly reflected in the time-domain defense; general noise has little effect on the frequency-domain part. As the noise power increases, the amplitude of small spurious peaks (burrs) in the noise also increases. As a result, noise can lead to imprecise detection of the local extrema pattern in the test signals. However, our experimental results indicate that DualGuard still works well at general ambient noise levels. Overhead of Dual-Domain Detection We implement DualGuard in C++ and build a system prototype on ReSpeaker Core v2, a popular voice interactive platform with an on-board quad-core 1.5 GHz ARM Cortex-A7 and 1 GB of RAM. Our experimental results show that the embedded program takes 5.5 ms on average to process a 32 ms signal segment, with a CPU usage of 24.2%. The peak memory usage of the program is 12.05 MB. The results demonstrate the feasibility of applying our dual-domain detection system in the real world. RELATED WORK In this section, we review related research on attacks targeting ASR systems, techniques for loudspeaker frequency response compensation, and defense systems against replay attacks. Attacks on Speaker Dependent ASRs. A speaker dependent ASR system is designed to accept voice commands only from specific users [66]. It verifies the speaker's identity by matching the individual characteristics of the human voice. There are four main spoofing attacks against speaker dependent ASRs. First, an attacker can physically approach a victim's system and alter their voice to impersonate the victim [22]. Second, the attacker can launch a simple replay attack by playing back pre-recorded speech of the victim to the ASR system [60,62]. Third, speech synthesis attacks generate artificial speech to spoof the ASR systems [12,15,34]. Fourth, speech conversion attacks aim to achieve a speech-to-speech conversion, so that the generated speech has the same timbre and prosody as the victim's speech [30,67]. Attacks on Speaker Independent ASRs. A speaker independent system is designed to accept commands from any person without identity verification. Compared to the speaker dependent system, it is more vulnerable to attacks [3,17,26,44]. Recently, researchers have found more surreptitious attacks that humans cannot easily perceive or interpret. The dolphin attack is hard to notice since the malicious audio is modulated into the ultrasonic range [47,53,71]. Voice commands can also be modulated onto laser light to launch audio injection attacks [54]. Also, the malicious audio can be perturbed into an unintelligible form in either the time domain or the frequency domain [1]. To attack the machine learning module in ASRs, recent research shows that attackers can produce noise-like [11,32,58,76] or song-like [70] voice commands that cannot be interpreted by humans. Psychoacoustic models can also be applied to generate adversarial audio below the human perception threshold [49]. By fooling the natural language processing (NLP) module after ASRs, skill squatting attacks mislead the system into launching malicious applications [18,31,40,74,75]. Loudspeaker Frequency Response Compensation. In the field of room acoustics, loudspeaker frequency response compensation is a technique used to improve sound reproduction [13]. The basic method is to design an intelligent filter to flatten the frequency response of the loudspeakers [10].
The frequency response compensation can also be achieved by an advanced filter with a generic Hammerstein loudspeaker model [16]. For a multichannel loudspeaker system, a minimax approximation method has been proposed to flatten the spectral response around the crossover frequency [37]. Also, a polynomial-based MIMO formulation has been proposed to solve the multi-speaker compensation problem [9]. Defenses against Replay Attacks. In the ASVspoof Challenge [29], several replay detection methods were proposed that exploit frequency-based features, such as the Linear Prediction Cepstral Coefficient (LPCC) [42], Mel Frequency Cepstral Coefficient (MFCC) [68], Constant Q Cepstral Coefficients (CQCC) [56], High Frequency Cepstral Coefficients (HFCC) [41], and Modified Group Delay Cepstral Coefficient (MGDCC) [35]. Besides, high-frequency sub-band features can be used to detect live human voice via linear prediction (LP) analysis [65]. The sub-bass (low-frequency range) energy is also an effective feature for detecting replay signals, though this method can be bypassed by altering the speaker enclosure or by modulating the signals with our inverse filter [8]. Frequency modulation features [21,28,55] can also be leveraged due to the degraded amplitude components of replay noise. Researchers have also proposed detecting replay attacks using physical properties. Gong et al. detect the body-surface vibration via a wearable device to guarantee that the voice comes from a real user [19]. 2MA [7] verifies the voice commands by sound localization using two microphones. Yan et al. propose a spoofing detection method based on the voiceprint difference between the authentic user and loudspeakers [69]. All these methods require special equipment or specific scenarios. VoiceLive [73] detects live human voice by capturing the time-difference-of-arrival (TDoA) dynamics of phoneme sound locations. VoiceGesture [72] reuses a smartphone as a Doppler radar and verifies the voice by capturing the articulatory gestures of the user when speaking a passphrase. However, these two methods work well only when there is a short distance between the recorder and the user's mouth. CONCLUSION In this paper, we propose a new modulated replay attack against ASR systems. This attack can bypass all existing replay detection methods that rely on the frequency-domain differences between electronic speakers and humans. We design an inverse filter to compensate for the frequency distortion so that the modulated replay signals have almost the same frequency features as human voices. To defeat this new attack, we propose a dual-domain defense that checks the audio signal's features in both the frequency domain and the time domain. Experiments show that our defense can effectively defeat both modulated replay attacks and classical replay attacks. A MATHEMATICAL PROOF OF RINGING ARTIFACTS IN MODULATED REPLAY AUDIO Theorem A.1. Uncertainty Principle: It is hard to accurately determine the entire frequency response of a loudspeaker. Proof. The frequency response of a loudspeaker consists of the amplitude response and the phase response. The measurement of the amplitude response is described in Section 3.4. However, it is difficult to accurately measure the phase response. For an electronic circuit system, the phase response can be measured by observing the electric signals x_out(t) and x_in(t) with an oscilloscope. But in a loudspeaker system, we cannot measure the phase response directly because the output signal x_out(t) is a sound wave.
Other equipment (such as a receiver that converts the sound wave into an electric signal) is required to complete the measurement. However, the measuring system itself can introduce additional phase differences. There are three main influencing factors. (1) Time of flight: the propagation time adds phase differences, so it is important to know the accurate delay time t = L/v_0, where L is the direct distance between the speaker and the sensor and the sound speed is v_0 ≈ 344 m/s (at 20 °C). (2) Time incoherence: most available loudspeakers are not time coherent, which introduces phase errors into the measurement. (3) Phase response of the receiving sensor: the phase response of the receiving sensor is typically unknown, which also introduces phase shifts. As a result, the accuracy of the phase response measurement cannot be guaranteed, and hence the entire frequency response cannot be determined accurately. We can also show that even small measurement errors in the phase response cause ringing artifacts (see Theorem A.3). □ Theorem A.2. Compared to the genuine signal x(t), there are phase shifts for each frequency component in the modulated replay signal x_mr(t). Proof. In the modulated replay attack, the inverse filter only needs to compensate the amplitude spectrum because the features (e.g., CQCC, MFCC, LPCC) in the existing defenses derive only from the amplitude spectrum. However, a real-world loudspeaker has a non-zero phase response, though it cannot be accurately measured (see Theorem A.1). Suppose the genuine audio x(t) is a digital signal. Through the fast Fourier transform, x(t) can be decomposed into N frequency components with the frequency set {f_1, f_2, ..., f_N}. The frequency spectrum of x(t) is denoted as {A_n, φ_n}, where {A_n} is the amplitude spectrum and {φ_n} is the phase spectrum. So x(t) can be represented as x(t) = Σ_n A_n · sin(2π f_n t + φ_n). Assume that the frequency response of the loudspeaker is H = {G_n, ψ_n}, where {G_n} is the amplitude response and {ψ_n} is the phase response. By measuring the input and output test signals, the attacker can obtain the estimated frequency response Ĥ = {Ĝ_n, 0}. The inverse filter is then designed based on Ĥ, denoted as I = Ĥ⁻¹ = {Ĝ_n⁻¹, 0}. As a result, the generated modulated audio would be x_m(t) = Σ_n (A_n/Ĝ_n) · sin(2π f_n t + φ_n). If the loudspeaker were ideal, without phase shift effects, and the amplitude estimation were sufficiently accurate, the estimated replay output of the modulated audio would be x̂_mr(t) = Σ_n (A_n · G_n/Ĝ_n) · sin(2π f_n t + φ_n) ≈ x(t), which is approximately equal to the genuine audio. However, if the modulated audio x_m(t) passes through the real loudspeaker system H, the real modulated replay audio x_mr(t) would be x_mr(t) = Σ_n (A_n · G_n/Ĝ_n) · sin(2π f_n t + φ_n + ψ_n) ≈ Σ_n A_n · sin(2π f_n t + φ_n + ψ_n) ≠ x(t). Because x_mr(t) has almost the same amplitude spectrum as the genuine audio x(t), it can bypass the existing defense systems. However, compared to the genuine signal x(t), there are phase shifts for each frequency component in the modulated replay signal x_mr(t). □ Theorem A.3. The phase shifts cause spurious oscillations (ringing artifacts) in the resulting audio. Proof. Suppose there is a small phase shift dφ in the N-th frequency component of the signal x(t), while the other frequency components remain unchanged. The new signal would be x′(t) = x(t) + o_N(t), with o_N(t) = A_N · [sin(2π f_N t + φ_N + dφ) − sin(2π f_N t + φ_N)] ≈ C · cos(2π f_N t + φ_N). Because dφ is a very small shift value, C is a small constant that satisfies |C| < |A_N · dφ|. x(t) is an audio signal that is statistically smooth in the time domain.
Hence, the new signal x′(t) contains small ringing artifacts because of the additional oscillation signal o_N(t) at the frequency f_N. The maximum amplitude of the spurious oscillations is limited by the value |C|. Assume that the phase shifts of a loudspeaker system are denoted as ψ = {ψ_n} for all frequency components. The modulated replay signal would then be x_mr(t) ≈ Σ_n A_n · sin(2π f_n t + φ_n + ψ_n) = x(t) + o(t), and the total spurious oscillation o(t) can be written as o(t) = Σ_n A_n · [sin(2π f_n t + φ_n + ψ_n) − sin(2π f_n t + φ_n)]. The maximum amplitude A_o of the spurious oscillations is constrained by the condition A_o ≤ Σ_n |A_n · ψ_n|, i.e., it is restricted by the signal amplitude spectrum and the absolute phase shifts. As a result, the phase shifts of the loudspeakers lead to ringing artifacts in the modulated replay audio. □ B PARAMETERS IN DETECTION METHODS We list the parameters of the different replay detection methods here for a better understanding of the modulated replay attack. (1) Constant Q Cepstral Coefficients (CQCC) based method: the Constant-Q Transform (CQT) is applied with a maximum frequency of F_max = f_s/2 = 48 kHz; the minimum frequency is set to F_min = F_max/2^12 ≈ 11.7 Hz (12 is the number of octaves). In the LPCC feature, the frame length is set to 1280 and the offset is 0; the threshold of the silence power is 10^−4; the prediction order of the LPC coefficients is set to 14. In the Sub-bass feature, the energy balance metric is the energy ratio of the sub-bass range to the low-frequency range; the threshold is set to 0.228 according to the study [8]. C INVERSE FILTER IMPLEMENTATION The speaker response estimation process contains two steps: discrete amplitude response measurement and continuous amplitude response fitting. In the discrete amplitude response measurement, we measure the speaker input/output response coefficients with test signals of the same amplitude of 1, which are generated using the wavwrite tool and stored in a lossless format. The test audio is then transferred to the replay devices and played at medium volume on the loudspeakers, since, according to our experiments, the response function is not directly related to the input amplitude. After the spectrum analysis, we obtain a rough polygonal response curve across 68 discrete points. In the finer-grained amplitude response fitting, we first calculate the spectral resolution of the modulated signal, ∆f = f_s/N, where f_s is the signal sampling rate and N is the number of FFT points, i.e., the smallest power of 2 greater than or equal to the signal length L, N = 2^⌈log₂ L⌉. The finer-grained amplitude response curve is obtained by cubic spline fitting, and the estimated response used in the inverse filter generation is sampled at the signal spectral resolution ∆f. The inverse filter is designed using the finer-grained speaker response H(k). In order to avoid divide-by-zero errors in our experiments, the inverse filter transfer function is calculated as 1/(H(k) + eps), where eps is a small value from 0.001 to 0.002. Figure 12 shows the amplitude response curves of the different speaker devices and their inverse filters. For mobile devices, the response curves are high-pass filters due to the limited size of the speakers; therefore, the inverse filters should be low-pass filters. For the Bose Soundlink Micro, which has a tweeter and a woofer, there are two obvious enhancement stages in the amplitude response; however, the transfer function still cannot be considered a pass-through filter. The frequency response of the Samsung Smart TV fluctuates with frequency due to its two speakers that create stereo audio. We can use the designed inverse filters to compensate for the speaker amplitude response, mitigating the decay of the frequency components.
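As a quick numerical sanity check of the ringing-artifact bound derived in Appendix A, the following Python sketch applies small, uncompensated phase shifts to each harmonic of a toy multi-tone signal and verifies that the induced oscillation stays below Σ_n |A_n · ψ_n|. The amplitudes, phases, and phase shifts are arbitrary illustrative values, not measured loudspeaker data.

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.1, 1 / fs)

# Toy signal: three harmonics with amplitudes A_n and phases phi_n (illustrative values).
freqs = np.array([200.0, 400.0, 600.0])
A     = np.array([1.0, 0.5, 0.25])
phi   = np.array([0.0, 0.3, 0.7])
psi   = np.array([0.02, 0.05, 0.08])          # small uncompensated phase shifts (radians)

x    = sum(a * np.sin(2 * np.pi * f * t + p)     for a, f, p in zip(A, freqs, phi))
x_mr = sum(a * np.sin(2 * np.pi * f * t + p + s) for a, f, p, s in zip(A, freqs, phi, psi))

ringing = x_mr - x                             # spurious oscillation o(t)
bound = np.sum(np.abs(A * psi))                # Appendix A bound on its amplitude

print("max |o(t)| =", np.abs(ringing).max())   # observed amplitude of the artifacts
print("bound      =", bound)                   # always >= max |o(t)|
```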
D CLASSIFIERS IN TIME-DOMAIN DEFENSE In the time-domain defense, the local extrema ratio (LER) is a robust feature that describes the ringing artifacts in modulated replay audios. Therefore, the choice of classifier has little impact on the defense performance. To verify this hypothesis, we conduct experiments to evaluate the effects of different classifiers on the feature classification. We classify the LER features using five common classifiers, including Support Vector Machine (SVM), Decision Tree (DT), Naive Bayes (NB), Gaussian Mixture Model (GMM), and K-Star. The 10-fold cross-validation accuracy is used as the evaluation standard. The performance of the different classifiers is shown in Figure 13. We can see that SVM, Decision Tree, and K-Star achieve better performance than the other classifiers. The Gaussian Mixture Model obtains the worst accuracy since the distribution of the LER features does not follow a normal distribution. Overall, we choose the SVM model in our system due to its easy deployment and high performance.
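A minimal sketch of this comparison (in Python, with scikit-learn) is given below. The "dataset" here is synthetic: smooth tones stand in for genuine audio and the same tones with a faint high-frequency ripple stand in for modulated replay audio with ringing; only the classifiers readily available in scikit-learn (SVM, Decision Tree, Naive Bayes) are included, with GMM and K-Star omitted.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def local_extrema_ratio(y, r):
    """Fraction of samples that are the max or min of their (2r+1)-sample window."""
    padded = np.pad(y, r, mode="edge")                    # pad with the nearest element
    win = np.lib.stride_tricks.sliding_window_view(padded, 2 * r + 1)
    centre = win[:, r]
    return np.mean((centre == win.max(axis=1)) | (centre == win.min(axis=1)))

def ler_pattern(y, r_max=20):
    """The 20-dimensional LER feature tuple {LER_1, ..., LER_r_max}."""
    return np.array([local_extrema_ratio(y, r) for r in range(1, r_max + 1)])

# Synthetic stand-ins: "genuine" = smooth tone + tiny noise; "modulated replay"
# = the same plus a faint 15 kHz ripple imitating ringing artifacts.
t = np.arange(2048) / 48_000
def toy_signal(ringing):
    x = np.sin(2 * np.pi * rng.uniform(150, 300) * t) + 0.0005 * rng.standard_normal(len(t))
    return x + (0.01 * np.sin(2 * np.pi * 15_000 * t) if ringing else 0.0)

X = np.stack([ler_pattern(toy_signal(ringing=(i % 2 == 1))) for i in range(200)])
labels = np.array([i % 2 for i in range(200)])

for name, clf in [("SVM", SVC()), ("DT", DecisionTreeClassifier()), ("NB", GaussianNB())]:
    print(name, "10-fold CV accuracy:", cross_val_score(clf, X, labels, cv=10).mean())
```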
Residence time and collision statistics for exponential flights: the rod problem revisited Many random transport phenomena, such as radiation propagation, chemical/biological species migration, or electron motion, can be described in terms of particles performing exponential flights. For such processes, we sketch a general approach (based on the Feynman-Kac formalism) that is amenable to explicit expressions for the moments of the number of collisions and the residence time that the walker spends in a given volume as a function of the particle equilibrium distribution. We then illustrate the proposed method in the case of the so-called rod problem (a 1d system), and discuss the relevance of the obtained results in the context of Monte Carlo estimators. I. INTRODUCTION The so-called Pearson random walk describes the evolution of particles starting from a point-source and performing straight-line displacements until collision events, where either the direction of propagation changes at random with probability p (scattering), or the trajectory is terminated (absorption) [1,2]. When the traversed medium is homogeneous, so that the scattering centers are uniform, the inter-collision distances (flights) are exponentially distributed. Exponential flights are key to understanding the dynamics of many transport processes, encompassing areas as diverse as radiation transfer, electron motion in semiconductors, gas dynamics, and search strategies [3][4][5][6][7][8]. In most such applications, one is typically interested in assessing the particle density Ψ_V (or some functional defined on Ψ_V) averaged over a d-dimensional volume V in the phase space. In Reactor Physics, for instance, Ψ_V might represent the number of particles escaping from radiation shielding [4]. The particle density of exponential flights, in turn, is intimately connected to the statistical properties of the collisions n_V falling in the region V, and the residence time t_V spent within V [8][9][10][11]. Even under simplifying hypotheses, namely, that scattering and absorption probabilities do not depend on particle energy, so that we can safely define an average speed v (one-speed approximation), and that scattering is isotropic, the interplay between Ψ_V, n_V and t_V turns out to be a deceptively simple problem, and has attracted renewed interest in recent years [9][10][11][12][13][14][15]. For instance, when the typical size R of the volume V is much larger than the mean free path λ_t between collisions, namely R ≫ λ_t (the so-called diffusion limit), the normalized distribution of collision number P(n_V) and the normalized probability density of residence times Q(t_V) converge to each other [10], as illustrated in Fig. 1 (left). This is in general not true when R is comparable to λ_t, i.e., when finite speed effects and boundaries come into play, and particles spend only a limited number of collisions in V before wandering away, as shown in Fig. 1 (right). Aside from its theoretical interest for understanding the dynamics of exponential flights in bounded geometries, the study of P(n_V) and Q(t_V) is also motivated by their prominence in Monte Carlo methods. In view of the intrinsic stochastic nature of exponential flights, one is naturally led to resort to Monte Carlo simulation, which can guide the development of analytical solutions and, in most realistic applications, provide answers that are not accessible by analysis alone [16,17].
In plain Monte Carlo methods, the volume-averaged particle density Ψ_V for one-speed transport can be estimated by simulating particle trajectories in the phase space and either counting the collisions n_V in V, or measuring the length ℓ_V of the particle tracks within V. In the former case, we have the so-called collision estimator Ψ̂_V^coll, which is proportional to n_V [17]. As speed is assumedly constant, we can equivalently compute the track-length estimator Ψ̂_V^track by measuring the residence time t_V = ℓ_V/v that particles spend in V. It is well known that the two estimators described above are unbiased with respect to Ψ_V, which amounts to saying that ⟨Ψ̂_V^coll⟩ = ⟨Ψ̂_V^track⟩ = Ψ_V, where ⟨·⟩ denotes averaging with respect to particle trajectories, and the limit is attained for an infinite number of realizations [17]. This in particular implies that the two estimators are related by ⟨n_V⟩ = ⟨t_V⟩/τ_t, where τ_t = λ_t/v is the average flight time. Such a property non-trivially holds for any kind of boundaries imposed on V and stems from the memoryless (Markovian) nature of the underlying exponential flight process [16,17]. Fig. 1 suggests that t_V and n_V, while preserving the same average, will generally have different higher-order moments, and in particular different variances. Hence, there might be an advantage in using one estimator or the other for determining the desired particle density Ψ_V. In the following, we address the issue of characterizing the distribution of collisions n_V and residence times t_V in a volume V for exponential flights. This paper is structured as follows. In Sec. II, we first recall some preliminary background and sketch a general approach for the moments of the distributions, based on the Feynman-Kac formalism. Then, in Sec. III, we exemplify the proposed methodology by explicitly evaluating those moments for a 1d domain, the so-called rod problem. Perspectives are finally discussed in Sec. IV. II. METHODOLOGY The trajectory z_t = {r_t, ω_t} of exponential flights in the phase space is defined by the stochastic evolution of position and direction, starting from the point-source z_0 = {r_0, ω_0} at time t = 0. Due to the exponential nature of the displacement lengths, the stochastic process z_t is Markovian: knowledge of the position-direction pair at a given time suffices to determine the system evolution [37]. For the sake of simplicity, we assume that scattering is isotropic. The propagator Ψ(r, ω, t|r_0, ω_0) defines the probability density for the walker being at a point {r, ω} in the phase space at a time t, having started from the initial condition. The propagator of exponential flights satisfies a probability balance, the forward Chapman-Kolmogorov equation ∂_t Ψ = L Ψ, where L is the forward transport operator L Ψ = −v · ∇_r Ψ − Ψ/τ_s + (1/τ_s) ∫ (dω′/Ω_d) Ψ(r, ω′, t|r_0, ω_0). Here we have set v = ωv, τ_s = λ_s/v, λ_s being the scattering mean free path, and the integral over directions is normalized to Ω_d = 2π^{d/2}/Γ(d/2), i.e., the surface of the unit sphere. We then introduce the collision density Ψ(r, ω|r_0, ω_0), which intuitively represents the equilibrium distribution of the particle ensemble. Remark that the propagator Ψ(r, ω, t|r_0, ω_0) depends on the boundary conditions imposed on ∂V. The absence of boundary conditions corresponds to defining a fictitious ('transparent') volume V, where particles can indefinitely cross ∂V back and forth. On the contrary, the use of leakage boundary conditions leads to the formulation of first-passage problems [18][19][20][21], where the walker is lost upon crossing ∂V.
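As a concrete illustration of the process z_t just described, the following minimal sketch (in Python, with illustrative parameter values) generates a single exponential-flight history in d = 3: flight lengths are exponentially distributed with mean free path λ_t, the direction is re-drawn isotropically at each scattering event (probability p), and the history terminates at an absorbing collision.

```python
import numpy as np

rng = np.random.default_rng(1)

def isotropic_direction():
    """Uniform direction on the unit sphere (d = 3)."""
    mu = rng.uniform(-1.0, 1.0)                 # cosine of the polar angle
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

def exponential_flight_history(lam_t=1.0, p=0.9, r0=np.zeros(3)):
    """Collision sites of one walker: exponential flights, isotropic scattering with prob. p."""
    r, omega = r0.copy(), isotropic_direction()
    sites = []
    while True:
        r = r + omega * rng.exponential(lam_t)  # straight flight to the next collision
        sites.append(r.copy())
        if rng.random() > p:                    # absorption terminates the trajectory
            return np.array(sites)
        omega = isotropic_direction()           # isotropic re-emission (scattering)

history = exponential_flight_history()
print(f"{len(history)} collisions, last site at {history[-1]}")
```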
The collision number n_V and the sojourn time t_V of the walker inside V depend on the realizations of the trajectories z_t, and as such are random quantities, whose behavior can be fully characterized in terms of their respective moments n_V^m(r_0, ω_0) and t_V^m(r_0, ω_0). Remark that P(n_V|r_0, ω_0) and Q(t_V|r_0, ω_0) depend on the initial conditions. Knowledge of all moments suffices to describe the associated distributions. In the following, we derive explicit expressions that allow evaluating the moments n_V^m(r_0, ω_0) and t_V^m(r_0, ω_0) in terms of the equilibrium distribution Ψ(r, ω|r_0, ω_0).

A. Residence times

In a series of seminal works based on the Feynman path-integral formalism, Kac [22][23][24][25] has worked out a general method for deriving the residence time distribution when the underlying stochastic process is a Brownian motion W_t, and later showed that his results hold more generally for Markov processes [38]. For a review (focused on Brownian motion), see, e.g., [27]. When a trajectory z_t is observed up to a time t, the associated residence time is t_V(t) = ∫_0^t χ_V(z_t') dt' (Eq. (5)), χ_V being the marker function of the domain V, which is equal to 1 when z ∈ V and vanishes elsewhere. When V has leakage boundary conditions, t_V(t) for an infinite observation time corresponds to the first-passage time to the boundary ∂V. More generally, the definition in Eq. (5) allows for multiple exits and re-entry crossings of ∂V [8]. The key ingredient of Kac's approach is the stochastic integral F(t, s|z_0) = ⟨exp[−s t_V(t)]⟩ (Eq. (6)), where the expectation is taken with respect to the propagator Ψ(z, t|z_0), i.e., the probability density of performing a trajectory from z_0 at t = 0 to z at time t. The existence and well-posedness of Eq. (6) is discussed in, e.g., [22,25]. By slightly adapting the treatment in [25], it can be shown that F(t, s|z_0) satisfies the equation ∂F/∂t = L* F − s χ_V(z_0) F (Eq. (7)), where L* is the backward transport operator, i.e., the adjoint of L. Kac has shown that F(t, s|z_0) can be interpreted as the Laplace transform (the transformed variable being s) of Q(t_V|z_0). The standard approach would therefore imply first solving Eq. (7) for F(t, s|z_0), and then obtaining Q(t_V|z_0) by performing an inverse Laplace transform. Eqs. (7) and (6) are known as the Feynman-Kac formulae [27]. Once F(t, s|z_0) is known, the moments of the residence time can be obtained from the derivatives of F with respect to s evaluated at s = 0 (Eq. (9)). Eqs. (7) and (9) yield a recursion property for the moments t_V^m(z_0, t), with the conditions t_V^m(z_0, 0) = 0 and t_V^0(z_0, t) = 1. In most applications, the observation time is assumed to be infinite, i.e., t → +∞, which leads to a simplified stationary equation (Eq. (11)) for t_V(z_0) = lim_{t→+∞} t_V(z_0, t). Eq. (11) is proposed in [8] to generalize a result by [11] and derive an elegant recursion formula for the moments of residence (and first-passage) times of exponential flights averaged over initial conditions. In the context of Brownian motion, the relevance of Eq. (11) is discussed at length in [28]. When one is interested only in the moments of the distribution, and the observation time is infinite, the Feynman-Kac formalism may be rather cumbersome (Eq. (11) would still require inverting the backward operator L*), and can be altogether avoided by resorting to the so-called Kac moment formula [25]. This approach has been successfully applied to the study of the residence time of Brownian particles in [29]. For a review, see, e.g., [30].
The m-th moment of the residence time is obtained from a time-ordered expression involving repeated convolution products of the propagator (Eqs. (12) and (13)). Finally, by interchanging the order of integration in time, and extending the integration limit to infinity for each convolution product [29], we obtain the formula for the moments of the residence time, Eq. (14), expressed in terms of the collision density Ψ(r, ω|r_0, ω_0) defined in Eq. (15). Eq. (14) thus allows expressing the moments of the residence time as a function of the particle equilibrium distribution.

B. Collision number

In a previous work [10], we have explicitly derived the moments of the collision number n_V for a broad class of renewal processes, when the point source emits isotropically. For exponential flights, it is sufficient to remark that the process r_n, i.e., the direction-averaged position of the walker, is Markovian at each collision event. The probability of performing n_V collisions in the volume V is related to the propagator by P(n_V|r_0) = ∫ dr Ψ(r, n_V|r_0) − ∫ dr Ψ(r, n_V + 1|r_0). (16) We then introduce the direction-averaged collision density Ψ(r|r_0) (Eq. (17)). The derivation of the moments n_V^m(r_0) closely follows that of t_V^m(z_0) [10]. Here we just recall that the moments of n_V are given by Eq. (18), where the coefficients are the Stirling numbers of the second kind [31], and the remaining factors are defined as k-fold convolutions of the collision density Ψ(r|r_0) with itself [10]. Now, the moments n_V^m(r_0, ω_0) for a directed source δ(r − r_0) δ(ω − ω_0) can be evaluated as follows. First, we compute the density π(r'|r_0, ω_0) of the walkers entering their first collision at r'. Each first-collision point will re-emit isotropically after the collision, i.e., the distribution of the outgoing ω is uniform. Then, the moments n_V^m(r_0, ω_0) are obtained by convolving Eq. (18) for an isotropic source at r' with the first-collision source p π(r'|r_0, ω_0). In terms of collision number probabilities, we have P(n_V|r_0, ω_0) = p ∫ dr' χ[r'] P(n_V − 1|r') π(r'|r_0, ω_0) + p ∫ dr' χ̄[r'] P(n_V|r') π(r'|r_0, ω_0), where the complementary marker function χ̄[r'] vanishes for r' ∈ V and is equal to one elsewhere. This leads to n_V^m(r_0, ω_0) = p Σ_{k=0}^{m−1} (m choose k) ∫ dr' χ[r'] n_V^k(r') π(r'|r_0, ω_0) + p ∫ dr' n_V^m(r') π(r'|r_0, ω_0). (22)

III. THE ROD PROBLEM

The approach presented in the previous Section allows explicitly evaluating the moments n_V^m(r_0, ω_0) and t_V^m(r_0, ω_0). When the equilibrium distribution is known, this amounts to solving the convolution integrals in Eqs. (22) and (14), respectively. However, analytical expressions for Ψ(r, ω|r_0, ω_0) or Ψ(r|r_0) (subject to the appropriate boundary conditions) are known only in a few cases [9,10], so that one must generally resort to numerical integration. A well-known and long-studied example where calculations can be carried out analytically is the so-called rod model, where particles can move along a straight line [1,2,32]. This corresponds to exponential flights in 1d, with only the forward and backward directions allowed. Though the rod model is somewhat inadequate to address realistic radiation transport phenomena, we shall discuss it here for two main reasons. First, it allows illustrating the application of the above formulas for n_V^m(r_0, ω_0) and t_V^m(r_0, ω_0), and provides some hints on their use as Monte Carlo estimators. Second, the rod model, despite being admittedly oversimplified, is nonetheless widely used in biology (often called the velocity jump process), gas dynamics (Lorentz gas), finance and neutronics, as it captures the essential features of the corresponding physical system [32][33][34][35][36].
We define ω f and ω b the forward and backward directions, respectively. Similarly, we denote by S f and S b the forward and backward components of the source, located at x 0 . Furthermore, we denote by x the abscissa of the rod, positive when oriented as ω f . We set the mean free path λ t = 1, and we take v = 1. Scattering is isotropic. The volume V is assumed to be the interval [−R, R]. With this choice of parameters and notations, Eq. (1) reduces to the following set of stationary firstorder differential equations when the source is S f , and when the source is S b . Two relevant examples will be considered here: i) leakage boundary conditions (a first-passage problem) without absorption, and ii) transparent boundaries with absorption. In the former case, leakages at x = ±R impose Ψ(−R, ω f |x 0 , ω f ) = 0, Ψ(R, ω b |x 0 , ω f ) = 0, Ψ(R, ω b |x 0 , ω b ) = 0, and Ψ(−R, ω f |x 0 , ω b ) = 0, which corresponds to an homogeneous medium surrounded by vacuum. The source is therefore x 0 ∈ V . In the latter, boundary conditions are imposed at infinity, which corre-sponds to an infinite homogeneous medium, the boundaries of V being transparent and not affecting particle trajectories. One-dimensional exponential flights are recurrent walks (i.e., they almost surely re-visit their initial position) [9,10], so that it is necessary to impose leakages and/or set p < 1 in order to prevent n m V (x 0 , ω 0 ) and t m V (x 0 , ω 0 ) from diverging. For the case of purely scattering media, i.e., p = 1, and leakage boundaries, the rod problem equations (27) and (28) are straightforwardly solved by direct integra-tion, and give rise to first-order discontinuous polynomials, the discontinuity being located at x 0 , i.e., at the source. Once the four solutions Ψ(x, ω f |x 0 , ω f ), Ψ(x, ω b |x 0 , ω f ), Ψ(x, ω b |x 0 , ω b ), and Ψ(x, ω f |x 0 , ω b ) have been obtained, the moments n m V (x 0 ) and t m V (x 0 ) are computed by performing the convolution integrals in Eqs. (22) and (14), respectively. Remark that the isotropic source corresponds to assuming S f = S b and integrating with respect to the initial direction. For the mean first-passage time, we have with n 1 V (x 0 ) = t 1 V (x 0 ) (v = 1, so that τ t = 1). For the second moment, we have and The terms in these formulas look inhomogeneous (this is due to setting λ t = 1), but expressions are indeed dimensionless. The surfaces are discontinuous, since |x 0 | ≤ R. Observe that when R is large we have t 2 V (x 0 ) n 2 V (x 0 ). When R → +∞, the moments diverge, as expected from 1d exponential flights being recurrent walks. Observe that the first and second moment of the first-passage time satisfy the recursion property derived in [8], namely for m ≥ 1, where {·} Σ and {·} V denote averaging x 0 over the surface Σ (of V ) or the volume V , respectively. Here d = 1 and m = 2, and it is easy to verify that Remark that we have an overall factor 1/2 with respect to the surface averages in [8], because trajectories are here allowed starting from Σ in the outward direction, whereas in [8] they are not. In Fig. 2 we display the mean collision number n 1 V (x 0 ) and the mean first-passage time t 1 V (x 0 ) for leakage boundary conditions. The two surfaces, as a function of x 0 and R, coincide, as expected from the considerations exposed above. This goes along with the collision and track length Monte Carlo estimators being unbiased with respect to each other. The second moments n 2 V (x 0 ) and t 2 V (x 0 ) are displayed in Fig. 3. 
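Before turning to the comparison of the second moments, a minimal Monte Carlo sketch of this purely scattering, leakage-bounded rod may be useful; the function names and parameter values are illustrative, and the tallies follow the collision and track-length scoring described in Sec. I.

import random

def leakage_rod_history(x0, R, lam_t=1.0, v=1.0, rng=random):
    """One history of the purely scattering rod (p = 1) with leakage boundaries.

    The walker starts at x0 inside V = [-R, R] with a random direction, performs
    exponential flights of mean lam_t, scatters isotropically at each collision,
    and is lost as soon as it crosses x = +R or x = -R.  Returns the collision
    number n_V and the residence time t_V (track length inside V divided by v).
    """
    x, n_V, path = x0, 0, 0.0
    omega = rng.choice((-1.0, 1.0))
    while True:
        step = rng.expovariate(1.0 / lam_t)
        x_new = x + omega * step
        if x_new > R or x_new < -R:
            boundary = R if omega > 0 else -R
            path += abs(boundary - x)       # last flight, truncated at the boundary
            return n_V, path / v            # the walker leaks out of V
        path += step                        # the whole flight lies inside V
        x = x_new
        n_V += 1                            # collision inside V
        omega = rng.choice((-1.0, 1.0))     # isotropic scattering (p = 1)

def leakage_moments(x0=0.0, R=2.0, histories=100_000):
    n1 = t1 = n2 = t2 = 0.0
    for _ in range(histories):
        n, t = leakage_rod_history(x0, R)
        n1 += n
        t1 += t
        n2 += n * n
        t2 += t * t
    h = float(histories)
    return n1 / h, t1 / h, n2 / h, t2 / h

With lam_t = v = 1 the first two returned values should agree (the unbiasedness property), while the third is expected to exceed the fourth for moderate R, consistently with the inequality between the second moments discussed next.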
From Fig. 3, it is immediately apparent that n_V^2(x_0) ≥ t_V^2(x_0) (actually, equality is attained only for R ≫ 1): this means that for this example the use of a track-length estimator is to be preferred, as it would lead to a smaller variance. All analytical results have been validated by comparison with Monte Carlo simulations. For p < 1 and transparent boundaries, the solutions of the rod problem (27) and (28) are given by combinations of exponential functions, rather than linear polynomials. In this case, the expressions for the moments are rather cumbersome and will not be reported here. Instead, we plot the moments as a function of the initial condition x_0, the domain size R and the scattering rate p. In Fig. 4 we display the mean collision number n_V^1(x_0) and the mean residence time t_V^1(x_0) for transparent boundaries and p = 0.5. The two surfaces, as a function of x_0 and R, coincide, and this relation holds for any value of p. The second moments n_V^2(x_0) and t_V^2(x_0) are displayed in Figs. 5 (p = 0.9), 6 (p = 0.5), and 7 (p = 0.1). In this case, it is not possible to establish a simple inequality between the two surfaces, independent of p. As the scattering rate varies, the surfaces change, and there exist values of p for which n_V^2(x_0) is smaller than t_V^2(x_0). This means that in the presence of absorption the collision estimator may lead to a smaller variance. In Fig. 8 we display the difference n_V^2(x_0) − t_V^2(x_0) when x_0 = 0, for various values of p: when R is large, t_V^2(x_0) becomes larger than n_V^2(x_0), and this behavior is enhanced for small values of p, i.e., large absorption rates. When R is large, the dependence on the angular variable ω gets progressively weaker, so that the integrals (20) and (14) coincide. Under these assumptions, calculations show that n_V^1(x_0) ≃ 1/(1 − p) and n_V^2(x_0) ≃ (1 + p)/(1 − p)^2, independent of the initial condition. Then, from the equality of the Kac convolution integrals, the difference n_V^2(x_0) − t_V^2(x_0) for large R converges to the limit 1/(p − 1), which implies a smaller variance for the collision estimator. We have also verified that t_V^1(x_0) and t_V^2(x_0) satisfy the surface- and volume-averaged recursion property in [8], which generalizes Eq. (32) to residence times. Again, all analytical results have been validated by comparison with Monte Carlo simulations.

IV. CONCLUSIONS

Motivated by their relevance for stochastic transport phenomena as well as for Monte Carlo methods, in this paper we have examined the moments of the collision number n_V and of the residence time t_V of exponential flights in a volume V. We have presented a general approach that, based on the Kac moment formula, allows explicitly evaluating such quantities in terms of repeated convolutions of the particle equilibrium distribution. To exemplify the proposed formalism, we have in particular analyzed a 1d system, the so-called rod problem, where closed-form expressions can be found. We have therefore explicitly computed the moments of n_V and t_V for various boundary conditions, focusing in particular on the first and second moment. Finally, the relevance of these findings in the context of Monte Carlo collision and track-length estimators has been discussed. Results show that the averages of n_V and t_V coincide, whereas the second moments (hence the variances) depend on boundary and initial conditions, and on the scattering probability.
Residence time has in general a smaller variance, but the opposite is true when absorption dominates over scattering. By virtue of the increasing power of Monte Carlo methods in solving realistic three-dimensional transport problems, one might argue that such a simple system as the rod problem is of limited interest. On the contrary, we are persuaded that this analysis is useful, in that it allows focusing on the essential features of the physical system at hand. Indeed, on the one hand it sheds light on the deep connections between sojourn times and collision number for exponential flights, and on the other hand it gives some hints on the behavior of the intrinsic variance of Monte Carlo collision and track-length estimators. Extending the proposed approach to higher-dimensional and more complex systems is highly desirable, and investigations to this aim are ongoing.
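As a closing numerical aside, the large-R limits quoted in Sec. III for transparent boundaries can be checked directly: when V covers the whole rod, every collision falls inside V, so n_V reduces to the total number of collisions before absorption. A minimal sketch follows; the function name and parameters are illustrative choices, not part of the original analysis.

import random

def infinite_medium_collision_moments(p, histories=100_000, rng=random):
    """First two moments of the collision number for R -> infinity.

    With transparent boundaries and V covering the whole rod, n_V is simply the
    total number of collisions before absorption (each collision is followed by
    scattering with probability p), so its first two moments should approach
    1/(1 - p) and (1 + p)/(1 - p)**2.
    """
    m1 = m2 = 0.0
    for _ in range(histories):
        n = 1                          # the first flight always ends in a collision
        while rng.random() < p:        # scattered: another flight, another collision
            n += 1
        m1 += n
        m2 += n * n
    return m1 / histories, m2 / histories

# Illustrative check: for p = 0.5 the two values should approach 2 and 6.
# print(infinite_medium_collision_moments(0.5))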
DUBs Activating the Hedgehog Signaling Pathway: A Promising Therapeutic Target in Cancer The Hedgehog (HH) pathway governs cell proliferation and patterning during embryonic development and is involved in regeneration, homeostasis and stem cell maintenance in adult tissues. The activity of this signaling is finely modulated at multiple levels and its dysregulation contributes to the onset of several human cancers. Ubiquitylation is a coordinated post-translational modification that controls a wide range of cellular functions and signaling transduction pathways. It is mediated by a sequential enzymatic network, in which ubiquitin ligases (E3) and deubiquitylase (DUBs) proteins are the main actors. The dynamic balance of the activity of these enzymes dictates the abundance and the fate of cellular proteins, thus affecting both physiological and pathological processes. Several E3 ligases regulating the stability and activity of the key components of the HH pathway have been identified. Further, DUBs have emerged as novel players in HH signaling transduction, resulting as attractive and promising drug targets. Here, we review the HH-associated DUBs, discussing the consequences of deubiquitylation on the maintenance of the HH pathway activity and its implication in tumorigenesis. We also report the recent progress in the development of selective inhibitors for the DUBs here reviewed, with potential applications for the treatment of HH-related tumors. The HH pathway is a mitogen and morphogen signaling, conserved from Drosophila to mammals. It plays a crucial role in organogenesis and central nervous system (CNS) development [1,2]. In post-embryonic stages, HH signaling regulates tissue homeostasis and repair, modulating the specification of the adult stem cells [3,4]. Several studies have highlighted similarities and divergences between Drosophila and mammals HH signal transduction ( Figure 1A,B). Both in flies and in vertebrates the HH pathway activation is finely orchestrated by two membrane receptors: the multi-pass transmembrane protein Patched (Ptc/PTCH) and the heptahelical transmembrane co-receptor Smoothened (Smo/SMO). In Drosophila, in absence of the Hh ligand, Ptc keeps off the signaling by directly affecting Smo activity and preventing its accumulation on the plasma membrane. In this state, Costal-2 (Cos2; Costa-FlyBase), a kinesin family protein, Fused (Fu), a serine-threonine kinase and the Suppressor of fused [Su(fu)] inhibit the bifunctional transcription factor Cubitus interrupts In the cytoplasm, Cos2, Fu and Sufu assemble in complex with Ci-FL protein, favoring its phosphorylation by PKA, CK1, and GSK3. This event induces the Ci-FL ubiquitylation by SCF Slimb E3 ligase thus leading both to proteasome degradation and cleavage into truncated repressor form (Ci R ). Ci R blocks the transcription of Hh target genes. On the contrary, in the presence of Hh ligand, Ptc releases the inhibitory effect exerted on Smo which is activated by PKA and CK1 phosphorylation on the C-terminal domain, and then bound by Cos2 and Fu. These processes culminate in the Ci activation, promoting Hh transcription. (B) The Hedgehog signaling pathway in vertebrates. When the pathway is turned off, PTCH prevents the accumulation of SMO in the primary cilium. SUFU restrains GLI transcription factors in the cytoplasm where PKA, CK1α, and GSK3β kinases promote their phosphorylation. 
This process attracts the SCF βTrCP E3 ligase that determines the processing of GLI2 and GLI3 (GLI2/3 R ) in their repressor forms and the proteasome-mediated degradation of GLI1. In presence of HH ligand, PTCH inhibition is relieved. SMO is accumulated in the primary cilium and activated by GRK2 and CK1α phosphorylation. GLI activator forms (GLIs A ) translocate into the nucleus and induce the transcription of HH target genes. In mammals, three ligands belonging to the HH family are secreted: Desert hedgehog (DHH), Indian hedgehog (IHH) and Sonic hedgehog (SHH). The proteins, encoded by three paralogous mammalian genes, share high similarity in the affinity with HH-binding proteins. SHH is mostly expressed in brain cells and implicated in central nervous system (CNS) development, while IHH This event induces the Ci-FL ubiquitylation by SCF Slimb E3 ligase thus leading both to proteasome degradation and cleavage into truncated repressor form (Ci R ). Ci R blocks the transcription of Hh target genes. On the contrary, in the presence of Hh ligand, Ptc releases the inhibitory effect exerted on Smo which is activated by PKA and CK1 phosphorylation on the C-terminal domain, and then bound by Cos2 and Fu. These processes culminate in the Ci activation, promoting Hh transcription. (B) The Hedgehog signaling pathway in vertebrates. When the pathway is turned off, PTCH prevents the accumulation of SMO in the primary cilium. SUFU restrains GLI transcription factors in the cytoplasm where PKA, CK1α, and GSK3β kinases promote their phosphorylation. This process attracts the SCF βTrCP E3 ligase that determines the processing of GLI2 and GLI3 (GLI2/3 R ) in their repressor forms and the proteasome-mediated degradation of GLI1. In presence of HH ligand, PTCH inhibition is relieved. SMO is accumulated in the primary cilium and activated by GRK2 and CK1α phosphorylation. GLI activator forms (GLIs A ) translocate into the nucleus and induce the transcription of HH target genes. In mammals, three ligands belonging to the HH family are secreted: Desert hedgehog (DHH), Indian hedgehog (IHH) and Sonic hedgehog (SHH). The proteins, encoded by three paralogous mammalian genes, share high similarity in the affinity with HH-binding proteins. SHH is mostly expressed in brain cells and implicated in central nervous system (CNS) development, while IHH modulates chondrogenesis, and DHH regulates spermatogenesis and nerve-Schwann cell interactions [7]. Of note, HH signaling also regulates the expression of the stemness genes Nanog and Oct4, thus participating in the formation or maintenance of cancer stem cells (CSCs) responsible of tumor initiation, relapse and drug resistance [48][49][50]. For all these reasons, the HH pathway is emerged as an attractive druggable target for anti-cancer therapy. A various number of SMO antagonists, able to block the pathway at upstream level, have been identified and patented. Some of them, vismodegib and sonidegib, and recently glasdegib, have been approved by the Food and Drug Administration (FDA) for the treatment of BCC and Acute Myeloid Leukemia (AML), respectively [34]. Many others, such as GANT61 and GlaB, have been designed targeting GLI1, the downstream effector of HH signaling, and have shown efficacy in preclinical study [34,51,52]. The major issue in employment of HH-inhibitors is the recurrence of drug-resistance mutations or alternative mechanisms of activation. 
Consequently, multi-target therapy is emerging as a promising strategy for the treatment of HH-dependent cancers. The best approach envisioned so far is the development of further Hyperactivation of HH signaling can occurs through either ligand-independent or ligand-dependent mechanisms. Tumorigenesis is ligand-independent when the pathway is constitutively activated in the absence of ligand via mutations in HH signaling components. Loss-of-function mutations in PTCH or SUFU or gain-of-function mutations in SMO, as well as GLI1 overexpression or GLI2 amplification have been identified in BCC, a common human skin cancer, and in MB, a highly malignant pediatric brain tumor [35][36][37][38][39]. Depending on the type of HH ligand release, two mechanisms of ligand-dependent pathway hyperactivation have been described in cancers, generating a tumor-stromal crosstalk [40]. Ligand-dependent autocrine/juxtacrine secretion occurs when the HH ligand is profusely released and caught by the same tumor cells, thus activating the pathway. Tumors that arise from this condition may display HH ligand overexpression or high levels of PTCH1 and GLI1 [41][42][43]. Alternatively, a paracrine secretion of HH ligand by tumor cells can induce the activation of the HH pathway in stromal cells of tumor microenvironment. As consequence, the stroma secretes paracrine growth signals to induce tumor growth [44]. For instance, in prostate cancer specimens, the expression of HH was detected in the tumor epithelium, while GLI1 expression was found in the tumor stroma cells, suggesting their paracrine crosstalk [45]. Moreover, this mechanism of HH signaling activation can work in a reverse paracrine manner in which cancer cells take the HH ligand released by stromal cells. For example, HH ligand released by bone marrow, nodal and splenic stroma can activate the HH pathway and maintain the survival of B and plasma cells in hematological malignancies [46]. Interestingly, HH-producing microenvironment is required for GLI activation in gliomas [47]. Of note, HH signaling also regulates the expression of the stemness genes Nanog and Oct4, thus participating in the formation or maintenance of cancer stem cells (CSCs) responsible of tumor initiation, relapse and drug resistance [48][49][50]. For all these reasons, the HH pathway is emerged as an attractive druggable target for anti-cancer therapy. A various number of SMO antagonists, able to block the pathway at upstream level, have been identified and patented. Some of them, vismodegib and sonidegib, and recently glasdegib, have been approved by the Food and Drug Administration (FDA) for the treatment of BCC and Acute Myeloid Leukemia (AML), respectively [34]. Many others, such as GANT61 and GlaB, have been designed targeting GLI1, the downstream effector of HH signaling, and have shown efficacy in preclinical study [34,51,52]. The major issue in employment of HH-inhibitors is the recurrence of drug-resistance mutations or alternative mechanisms of activation. Consequently, multi-target therapy is emerging as a promising strategy for the treatment of HH-dependent cancers. The best approach envisioned so far is the development of further inhibitors, or the identification of additional regulators of the HH pathway that could be targeted in tumorigenesis. Ubiquitylation Process Ubiquitylation dictates the fate and function of most cellular proteins increasing the complexity of the proteome. 
This modification is a dynamic and tightly regulated post-translational event with many distinct outcomes affecting protein stability, localization, interactions, and activity. Ubiquitin (Ub) is a small globular protein consisting of 76 amino acids encoded in mammals by four different genes (UBB, UBC, RPS27, and UBA52) that ensure high cellular Ub levels [53]. Ubiquitylation is a multi-step process orchestrated by an enzymatic cascade that relies on Ub and three different enzymes: Ub-activating (E1), Ub-conjugating (E2), and Ub-ligating (E3) [54]. During the catalytic reactions, Ub is activated in an ATP-dependent way by an E1 enzyme, subsequently transferred to the active cysteine (Cys) residue of an E2 enzyme via a trans-(thio) esterification reaction, and finally attached with an isopeptide bond to a substrate by an E3 enzyme ( Figure 3A). In humans, two E1s, around 30 E2s and over 600 E3s have been identified [55,56]. The latter are the major determinants and provide specificity for substrate recognition. Based on their functional domains and on the mechanism of catalysis, E3s are divided into three main families: the Really Interesting New Gene (RING), the Homologous to the E6-associated protein Carboxyl-Terminus (HECT) types, and RING-between-RING (RBR), which can be considered a RING-HECT hybrid [57,58]. Each class of E3 ligases can create Ub linkages of different length and architecture. The transfer of the Ub moiety to substrate occurs through the formation of the covalent bond between α-carboxyl group of the terminal glycine (Gly) residue of Ub and, commonly, ε-amino group of an internal lysine (Lys) residue of the substrate. Of note, for a subset of substrates the attachment of Ub may interest their N-terminal residue, a process known as N-terminal ubiquitylation [59], or serine and threonine residues, further expanding the complexity and the biological relevance of this process. In this regard, Ub modifications of a target protein occur in various forms: attachment of a single Ub moiety on a single substrate residue (monoubiquitylation), a single Ub on multiple residues (multi-ubiquitylation), or additional Ub molecules to initial Ub yielding an ubiquitin chain (poly-ubiquitylation). Typically, mono-and multiubiquitylation regulate endocytosis, signal transduction, DNA repair, and often result in changes in the cellular localization and protein activity [60][61][62]. By contrast, polyubiquitylation is the most abundant modification that controls protein homeostasis. Indeed, the polyubiquitylated target substrates are recognized by the 26S proteasome, a multiprotein complex, that degrades the proteins into small peptides and releases the Ub for cyclic utilization [63]. Besides regulating protein degradation, polyubiquitylation brings different functional consequences depending on Ub chain linkage-type [64]. Ub has seven Lys residues (K6, K11, K27, K29, K33, K48, and K63) that may serve as polyubiquitylation points. Depending upon the Lys used, length of the chains and linkage type, distinctive forms of Ub chains may be achieved to drive the fate of target proteins [65]. Lys48-linkage targets protein for proteasome-dependent degradation, whereas Lys63-linkage is associated to regulative processes, including trafficking, protein localization, protein-protein interaction; the biological significance of other Ub modifications is still largely unclear [66]. 
Further complexity is provided by Ub modifications (i.e., phosphorylation, acetylation, sumoylation) and by the linkage of Ub to other Ub-like proteins (i.e., NEDD8, SUMO), creating a multitude of distinct signals. The combination of all these parameters has been referred as the "Ub code" [65]. The Ub code governs the fate of the targeted substrates by modulating their interactions with many other proteins that incorporate Ub-binding domains and determine their accessibility to deubiquitylating enzymes (DUBs), a family of protease conserved from yeast to humans [67]. Cancers 2020, 12, x 6 of 29 Deubiquitylating Enzymes: Functions and Classification Like other important post-translational modifications, ubiquitylation is a dynamic and reversible process counteracted by DUBs activity [65]. DUBs are proteases that hydrolyze isopeptide or peptide bond removing Ub conjugates from substrates and disassembling anchored Ub chains ( Figure 3B) [65,68]. DUBs may remove Ub moieties from the distal end or through the cleavage within chains in two distinct ways: i) via direct interaction with specific substrates; ii) through selective recognition for particular Ub chain architecture. Both chain length and linkage type may drive the choice of the target proteins. Importantly, linkage selectivity may occur within the catalytic domain or through the cooperation with Ub-binding domains within DUBs or their interaction partners [68]. Given their crucial role in opposing E3 ligases function, DUBs control protein homeostasis and activities, and are implicated in the regulation of various physiological and pathological processes, such as development, metabolism, immune response and tumorigenesis. Currently, 99 cellular DUBs have been identified and are classified into six main families depending on distinct catalytic domains: the largest group ubiquitin-specific proteases (USPs), ubiquitin C-terminal hydrolases (UCHs), ovarian tumor proteases (OTUs), JAD/PAD/MPN-domain containing metalloenzymes (JAMMs), Machado-Joseph disease domain proteases (MJDs or Josephins) and motif interacting with Ub-containing novel DUB family (MINDYs) [69,70]. Unlike of the JAMM family, classified as a zinc-dependent metalloproteinase, the other DUBs classes are cysteine proteases. Available data indicate that each family may display linkage or substrate preferences. For instance, OTU family exhibits linkage type specificity, whereas USP group members show differences in catalytic rate constants [68,71,72]. Studies aimed at defining the abundance of individual DUBs suggest that those with constitutive functions show high copy number, while DUBs with peculiar roles are the rarer forms [70]. Different approaches used to determine the intracellular localization of the DUBs allowed highlighting that subsets of these proteases show particular association with subcellular compartments. Although many DUBs are nuclear, several USP members localize to defined structure including plasma membrane, microtubules, endosome, and endoplasmic reticulum (ER) [73]. To date, a growing body of evidence indicated that DUBs can act as oncogenes or tumor suppressors emerging as a promising class of therapeutic targets. For these reasons, many efforts are devoted to the development of highly selective DUBs inhibitors for anti-cancer therapies. DUBs Acting on SMO SMO is the main upstream signal transducer of the HH pathway in both insects and vertebrates. 
SMO is classified as an atypical G protein-coupled receptor (GPCR), since it possesses stereotypical GPCR functional domains: seven transmembrane domains (TM), an intracellular C-terminal tail, an amino-terminal cysteine rich domain (CRD), three extracellular and three intracellular loops (ECL and ICL) [74,75]. The molecular mechanisms that induce SMO activity in response to the activation of the HH pathway represent a crucial question in the understanding of HH signal transduction. In Drosophila, activated Smo accumulates in the plasma membrane [76,77], while in vertebrates it translocates into the primary cilium, a small protruding organelle in which all the key components of HH signaling are enriched [78,79]. Post-translational modifications regulate Smo activity. At present, the positive role of phosphorylation on Smo subcellular trafficking and activation is well established: in Drosophila protein kinase A (PKA) and casein kinase 1 (CK1)-mediated phosphorylation promotes Smo cell surface localization [80][81][82][83], whereas in vertebrates GRK2 and CK1α-dependent phosphorylation of SMO C-tail has been found to be pivotal for its ciliary accumulation [83]. In the last years, the role of ubiquitylation as negative modulator of Smo, due to the involvement in its endocytosis, trafficking and degradation has increasingly emerged [26,84]. Ubiquitin-specific protease 8 (USP8) is a multi-domain deubiquitylating enzyme with pleiotropic functions. Besides its canonical role in protein trafficking and receptor tyrosine kinase degradation, USP8 controls other biological processes, such as endosomal sorting, mitochondrial quality control, ciliogenesis and apoptosis [85]. Indeed USP8 was found to deubiquitylate the E3-ubiquitin ligase Parkin, involved in autophagy of dysfunctional mitochondria, the HIF1α protein, important for endosome trafficking-mediated ciliogenesis, and c-FLIP a master anti-apoptotic player [85]. Recently, the involvement of USP8 in the regulation of Hh signaling, through the stabilization of Smo, has been described. Two independent studies have demonstrated that the absence of Hh ligand induces both the polyand monoubiquitylation of Smo, leading to its endocytosis and degradation both by the lysosome-and proteasome-mediated pathway, in order to keep Hh signaling off [26,84]. Conversely, upon ligand stimulation, Smo is deubiquitylated and hence accumulated on the cell surface, where it becomes activated [84]. By using an in vivo RNAi screen that targeted Drosophila DUBs, Xia and colleagues identified USP8 as a deubiquitylase that prevents Smo ubiquitylation and is required for Hh-induced cell surface accumulation of Smo, thus increasing Hh signaling activity [84]. Ubiquitin-specific protease 8 (USP8) is a multi-domain deubiquitylating enzyme with pleiotropic functions. Besides its canonical role in protein trafficking and receptor tyrosine kinase degradation, USP8 controls other biological processes, such as endosomal sorting, mitochondrial quality control, ciliogenesis and apoptosis [85]. Indeed USP8 was found to deubiquitylate the E3-ubiquitin ligase Parkin, involved in autophagy of dysfunctional mitochondria, the HIF1α protein, important for endosome trafficking-mediated ciliogenesis, and c-FLIP a master anti-apoptotic player [85]. Recently, the involvement of USP8 in the regulation of Hh signaling, through the stabilization of Smo, has been described. 
Two independent studies have demonstrated that the absence of Hh ligand induces both the poly-and monoubiquitylation of Smo, leading to its endocytosis and degradation both by the lysosome-and proteasome-mediated pathway, in order to keep Hh signaling off [26,84]. Conversely, upon ligand stimulation, Smo is deubiquitylated and hence accumulated on the cell surface, where it becomes activated [84]. By using an in vivo RNAi screen that targeted Drosophila DUBs, Xia and colleagues identified USP8 as a deubiquitylase that prevents Smo ubiquitylation and is required for Hh-induced cell surface accumulation of Smo, thus increasing Hh signaling activity [84]. Figure 4A,B). Parallelly, the sumoylation of Smo at K851 induced by Hh, recruits USP8 to inhibit Smo ubiquitylation and degradation, leading to its cell surface trafficking and amplifying the Hh pathway activity, both in Drosophila and mammals [86]. These data stand USP8 as a positive regulator in the HH pathway, able to prevent SMO localization to early endosomes, promoting its stability [84]. UCHL5/UCH37 A similar role to USP8 has been described by Zhou et al. for the deubiquitylase UCHL5 able to increase the protein stability and the cell membrane accumulation of Smo [87]. UCHL5 (also known as UCH37 in mammals) is a deubiquitylase involved in the regulation of several substrates (i.e., type I TGF-β receptor, E2 promoter binding factor 1) [88,89] and is formed by an N-terminal UCH and a C-terminal extension domains ( Figure 4B) [90]. In Drosophila, the UCH region of UCHL5 binds Smo C-tail [87]. Through its C-terminal fragment, UCHL5 recruits Rpn3, a proteasome subunit that increases UCHL5 deubiquitylating activity and forms a trimetric complex with Smo, thus reducing its ubiquitylation. Moreover, UCHL5 inhibits the interaction of Smo with the hepatocyte growth factor-regulated tyrosine kinase substrate (Hrs), known to promote Smo ubiquitylation [91]. Interestingly, ubiquitylation assays performed in knockdown conditions of UCHL5 and USP8 demonstrated that this two DUBs cooperate to deubiquitylate and stabilize Smo [87]. The activation of the Hh pathway does not affect the expression levels of UCHL5, but increases the affinity between UCHL5 and Smo, stabilizing the receptor with its consequent localization at the cell membrane [87]. Importantly, this mechanism is conserved in mammals through its homolog UCH37 [87]. Many evidence show that UCH37 is upregulated in a wide spectrum of tumors, suggesting its potential oncogenic role in tumorigenesis [92][93][94]. Although the negative role of Smo ubiquitylation in the control of Hh activity is well established, only recently the E3 ligases involved in this process have been identified in Drosophila, and include Uba1, Cul4-DDB1, Smurf, and Herc4 ( Figure 4A) [26][27][28]84,95,96]. In particular, recent findings displayed that the HECT E3 ligase Herc4 binds Smo and mediates its mono-and polyubiquitylation at multiple Lys residues, thus promoting its lysosome and proteasome degradation. The interaction between Smo and Herc4 is inhibited by Hh that prevents Herc4-mediated Smo ubiquitylation in a manner independent of PKA-primed phosphorylation [95]. Importantly, Herc4 interacts with USP8 and UCHL5 and their overexpression almost abolishes Herc4-mediated Smo ubiquitylation, by blocking the association between Herc4 and Smo [95]. In mammals, HERC4 binds SMO and induces its degradation. 
In human NSCLC, HERC4 knockdown activates HH signaling and promotes NSCLC cell proliferation thus standing as a tumor suppressor [29]. Multiple E3 ligases and DUBs are involved in the fine regulation of SMO stability and trafficking, and the perturbation of their function could alter the HH pathway activity. In particular, given the positive role of DUBs in controlling HH signaling, they emerged as a potential drug target for HH-related tumors. DUBs Acting on GLI Factors GLI zinc finger transcription factors are the main effectors of HH signaling. Both SMO-dependent and independent HH pathway activation culminate with the nuclear translocation of GLIs, promoting the expression of HH target genes. GLIs function is widely ruled by post-translational modifications. In particular, GLI ubiquitylation is orchestrated by several E3 ligases belonging both to the RING (such as SCF βTrCP and Cullin3-HIB/Roadkill/SPOP [17,97,98]) and the HECT (Itch) families [24,99], and the non-canonical E3 ligase PCAF [25,100]. This modification leads to proteolytic cleavage of GLI2 and GLI3 factors [97,98] or massive degradation especially for GLI1 protein [17,23,24]. USP7 Ubiquitin-specific protease 7 (USP7, also called Herpes virus-associated protease, HAUSP) is the first identified deubiquitylase isolated as a partner of the herpesvirus protein [101]. USP7 is a cysteine peptidase primarily located in the nucleus where it controls the stability of multiple proteins involved in the Zhou and colleagues described USP7 as positive modulator of HH signaling in flies and vertebrates. Indeed, Usp7 in Drosophila and its homolog HAUSP in mammals antagonize multiple E3 regulation of DNA damage response, transcription, epigenetic control of gene expression, immune response, and viral infection. Indeed, among the many substrates of USP7 are included the tumor suppressor proteins p53 and PTEN, the oncoproteins C-Myc and N-Myc, the transcription factors Foxp3 and FOXO family members, the DNA methyltransferase 1 (DNMT1), the checkpoint kinase 1 (CHK1) and viral proteins, such as EBNA1 and ICP0 [102]. In mouse Usp7 knockout is lethal [103,104], while in human its mutations and deletions have been recently identified in children suffering from neurodevelopmental disorders [105]. ligases function to maintain the HH pathway activity [106]. In particular, upon Hh treatment Usp7 interacts with Ci through multiple P/AxxS motifs and increases its protein stability [106]. Usp7 localizes in both cytoplasm and nucleus and counteracts respectively SCF Slimb and Hib-Cul3-mediated Ci degradation ( Figure 5A) [106]. In mouse Usp7 knockout is lethal [103,104], while in human its mutations and deletions have been recently identified in children suffering from neurodevelopmental disorders [105]. ligases Similarly, USP7 binds all GLI factors in mammals, and these interactions are favored by HH and hindered by SUFU [106]. USP7 stands as positive regulator of HH signaling that stabilizes GLIs protein levels by antagonizing either the Itch-dependent degradation of GLI1 [24,99], and the SPOP/CUL3-dependent degradation of GLI2/GLI3 ( Figure 5A) [107,108]. Usp7 knockout in mouse cause embryonic lethality at embryonic days (E) 6.5-7.5 [103], while in human USP7 shows an oncogenic role in neoplastic diseases such as NSCLC, human prostate and liver cancers [109,110]. Zhan and collaborators also investigated the effects of USP7 modulation on human MB, the most common pediatric tumor of the cerebellum [111]. 
About 30% of all MBs arises from HH signaling aberrant activation (HH-MBs) [112]. USP7 depletion inhibits the proliferation rate, the migration capability and the invasiveness of human HH-MB Daoy cells due to the decrease of GLIs protein levels and of HH target genes transcription [113]. The treatment of Daoy cells with the USP7 inhibitors, P5091 and P22077, blocks their proliferation and metastasis [113] standing USP7 as potential druggable target in SHH-MBs. USP48 Ubiquitin-specific protease 48 (USP48) contains an ubiquitin C-terminal hydrolase (UCH) domain, required for its catalytic activity, and an ubiquitin-specific proteases (DUSP) domain mostly involved in protein-protein interaction ( Figure 5B) [114]. Several substrates of USP48 have been recently identified, such as the tumor necrosis factor receptor-associated factor 2 (TRAF2) related to JNK pathway, the histone H2A and RelA, a member of the avian reticuloendotheliosis/NF-κB transcription factors family [115][116][117]. Moreover USP48 is a novel binding partner of Mdm2, promoting its stability with a deubiquitinase activity-independent mechanism [118]. USP48 is expressed in almost all human tissues [119] and is upregulated in malignant melanoma [120]. Zhou and co-authors recently highlighted the USP48 involvement in HH signaling regulation and its role as promoter of glioblastoma cell proliferation and tumorigenesis [121]. USP48 and GLI1 co-localize in the nucleus, interacting through the N-terminal sequence of GLI1 and the C-terminal DUSP domain of USP48 [121]. This interaction protects GLI1 from proteasome-dependent degradation thus increasing its protein stability ( Figure 5A). The specific function of USP48 on GLI1 promotes the proliferation and the colony formation of glioma cells in vitro. Moreover, its depletion abrogates the tumor formation and extends the survival rate of orthotopic glioblastoma mouse models in vivo [121]. Zhou and colleagues sustained a positive feedback loop by which HH signaling activates USP48 through the binding of GLI1 to Usp48 promoter. Of note, USP48 and GLI1 expression levels directly correlate in human glioblastoma specimens, and they are linked to tumor malignancy grade. This evidence underlies the relevance of USP48-GLI1 regulatory axis for glioma cell proliferation and glioblastoma tumorigenesis [121]. USP21 The ubiquitin specific peptidase 21 (USP21) is the only centrosome and microtubule-associated DUB and localizes at the basal bodies in ciliated cells [73]. USP21 activity leads to the stabilization of many substrates, such as the pluripotency factor Nanog and the Mitogen-activated protein kinase kinase 2 (MEK2), a member of MAPK signaling cascade, thus sustaining stemness and cell proliferation, respectively [122,123]. Heride et al. described that USP21 positively regulates HH signaling either acting on the formation of primary cilium or altering GLI1 transcription activity [124], without excluding the interplay between these two mechanisms ( Figure 5A). The authors demonstrated that USP21 and GLI1 form a complex and, together with PKA, colocalize at the centrosome in U2OS cells. Indeed, USP21 recruits GLI1 close to active PKA thus stimulating GLI1 phosphorylation [124,125]. Both depletion and overexpression of USP21 can hinder HH signaling, highlighting its regulatory role in the modulation of this pathway [124]. 
USP37 Ubiquitin specific peptidase 37 (USP37) mainly localizes in the cytoplasm [126] and it has been initially described as a potent regulator of cell cycle at the G1/S transition, due to its ability to stabilize cyclin A. [127,128]. Moreover, USP37 is involved in the regulation of the stemness marker Nanog, of the EMT transcription factor Snail and of the oncoprotein C-Myc [129,130]. Qin et al., described the interplay among USP37 expression, the HH pathway, and EMT in breast cancer stem cells (BCSCs) [131]. In particular, they observed that genetic depletion of USP37 in these cells induces the reduction of HH key components at protein level (such as SMO and GLI1) as well as of stem cell markers (i.e., ALDH1 and OCT4) [131]. In contrast, the activation of HH signaling induced by the agonist purmorphamine (PM) results in enhanced USP37 gene expression that in turn stabilizes GLI1 ( Figure 5A), and impacts on EMT in BCSBs [131]. These findings confirm the role of HH signaling in the maintenance of stem cells and EMT [132,133] and the implication of DUBs deregulations on these oncogenic processes [134,135]. Indeed, USP37 downregulation attenuates cell invasion and EMT markers expression by suppressing the HH pathway [131]. Moreover, in vivo xenograft mouse model of breast cancer showed that tumors resulting from USP37 silenced cells are more sensitive to cisplatin, and have impaired HH target and stemness genes expression, together with lower proliferation ability compared to control group [131]. Overall these data indicate the relevance of USP37 in the regulation of breast cancer progression via the activation of the HH pathway. OTUB2 Ubiquitin thioesterase otubain-2 (OTUB2) is a deubiquitylating cysteine protease belonging to the ovarian tumor (OTU) superfamily of DUBs. Virus can encode DUBs to alter Ub-mediated host cell processes [136,137], and OTUB2 has been reported for its inhibitory activity on virus-triggered signaling through the deubiquitylation of TRAF3 and TRAF6 [138]. Further, OTUB2 affects DNA damage-dependent ubiquitylation, by protecting the polycomb molecule L3MBTL1 from RNF8-dependent degradation in an early phase of the DNA double-strand response (DDR) [139]. Recently, Li and co-workers described a new role for OTUB2 in the regulation of GLI2 stability ( Figure 5A) [140]. In particular, the authors demonstrated the interaction between the two proteins and elucidated their interplay. The over-expression of OTUB2, but not of its catalytically inactive mutant C51A, protects GLI2 from proteasome-dependent degradation thus stabilizing and extending its half-life in U2OS cells [140]. Since HH signaling plays a relevant role in osteogenic differentiation during embryogenesis [141], Li et al. investigated the effects of OTUB2 genetic depletion in mesenchymal stem cells (MSCs). They observed that HH stimulation promotes the expression of key drivers of osteoblast differentiation and bone formation, an effect that is inhibited in OTUB2 knockdown condition. These findings outline OTUB2 as an agonist of HH signaling demonstrating its ability to stabilize GLI2 protein levels [140]. HH-Related DUBs: Inhibitors and Therapeutic Applications Since the relevant role of DUBs in tumorigenesis, in the last decade many efforts have been devoted to the identification of selective DUBs inhibitors, demonstrating their therapeutic potential as anti-cancer agents [142][143][144]. 
DUBs that regulate key components of the HH pathway, such as USP7, USP8 and UCHL5/UCH37, are promising targets for the treatment of HH-dependent tumors. Their specific inhibitors with related chemical structures are summarized in Tables 1 and 2, respectively. USP7 is one of the most studied and best characterized DUB for its implication in different human diseases and in a wide spectrum of human cancers [145]. The first USP7 inhibitor was HBX 41,108, a cyanindenopyzazine derivative. HBX 41,108 acts on USP7 [146] through an uncompetitive reversible mechanism, binding this DUB after the formation of the enzyme/substrate complex. Although this molecule has shown selectivity towards USP7 in HCT116 human colon cancer cells, its weak activity against other related proteases limits the use for further pre-clinical studies [146]. Few years later, the same research team identified structurally distinct USP7 antagonists; among them, HBX 19,818 exhibits an excellent selectivity for this DUB [147]. Reverdy and colleagues demonstrated that HBX 19,818 covalently binds the Cys223 located in the USP7 active site impairing cell proliferation and promoting apoptosis and cell cycle arrest in HCT116 cells. Of note, the viability of three cancer cell lines with different p53 status is equally impaired by HBX 19,818 treatment, strongly suggesting that p53, one of the major USP7 target, is not required for the cellular response to USP7 inhibition. These findings suggest the existence of other USP7 substrates important for the proliferation of colon cancer cells [147]. To date, HBX 19,818 antitumor activity in vivo has not been yet described. P5091 is one of the most well studied first-generation USP7 inhibitors, whose structure has been used as scaffold for chemistry optimization to develop new antagonists [150,155]. P5091 shows potent selective activity against USP7, inhibiting its ability to cleave high molecular weight poly-Ub chains in a dose-dependent manner. Chauhan and colleagues provided pre-clinical data on the anti-cancer efficacy of P5091 in multiple myeloma xenograft models. Interestingly, P5091 treatment impairs tumor growth by inducing apoptosis also in cells resistant to conventional and bortezomib therapies. All these evidence strongly support the clinical investigation of USP7 inhibitors, alone or in combination, as a valid therapeutic strategy for the treatment of multiple myeloma [150]. In addition, the potential therapeutic application of P5091 has also been reported for the treatment of various malignancies (Table 1) [156][157][158][159][160][161][162]. Notably, Zhan and colleagues showed that both P22077 and P5091 block proliferation and migration of MB cells, by reducing GLI proteins levels and inhibiting HH signaling [113]. Following advances in understanding the crystal structures of USP7, USP7-ligand complexes and its functional domains, several non-covalently binding USP7 inhibitors have been identified, including the 4-hydroxypiperidines XL188 [163,164], FT671 and compound 4 [165,166], the 2-aminopyridine GNE6640, GNE776 and the thiazole derivatives C7 and C19 [167,168]. Although these molecules show good potency and selectivity against USP7, further in vivo studies will be required to evaluate their therapeutic relevance in cancer treatment. Noteworthy, USP7 inhibitors have also been identified from natural sources, such as spongiacidin C, a pyrrole alkaloid obtained from the marine sponge Stylissa massa [169]. 
Despite biochemical assays show a good selectivity for USP7 for these compounds, their efficacy in cells remains to be determined. Finally, in the last year, two new USP7 antagonists have been identified. XL177A, an analogue of XL188, is a small molecule that irreversibly inhibits USP7 with sub-nanomolar potency and selectivity, and whose effectiveness seems to be associated with p53 mutational status in multiple cancer lineages [170]. On the contrary, compound 41 is a reversible, highly potent, selective, and orally bioavailable USP7 inhibitor. In in vivo xenograft models of multiple myeloma, this molecule impairs the tumor growth of both p53 wild-type and mutant tumor cell lines, confirming that USP7 inhibition can suppress tumor growth affecting different pathways [171]. Currently only two molecules have been described as specific USP8 antagonists due to its pleiotropic function [85]. Colombo and colleagues identified the compound 9-ethyloxyimino-9H-indeno [1,2-b]pyrazine-2,3-dicarbonitrile as the first specific USP8 inhibitor [172]. Subsequently, the effectiveness of this molecule has been reported to markedly decrease the in vitro and in vivo tumor growth of both gefitinib-sensitive and -resistant NSCLC cells [173]. The second USP8 inhibitor, Ubv.8.2, is an engineered ubiquitin variant identified to be a highly specific and potent inhibitor of this enzyme, showing the ability to occlude its Ub-binding site [174]. The only evidence of a potential anti-cancer activity for this molecule has been reported by MacLeod and co-workers, who demonstrated that the lentiviral expression of Ubv.8.2 leads to cell viability reduction in glioblastoma stem cell lines [175]. The potential application of DUBs inhibitors for the treatment of HH-related tumors includes also the exploitation of small molecules specific for those DUBs associated with proteasome, such as UCHL5 and USP14 [176]. These enzymes broadly act on substrates addressed to degradation machinery and represent the most investigated druggable DUBs. Indeed, their inhibition might have considerable effects on tumor cells, resulting in a less toxic strategy than targeting directly the proteasome complex [142]. Conclusions The HH pathway is involved in the tumorigenesis of several malignancies and has emerged as a valid therapeutic target for anti-cancer therapy. At present, the main strategies to impair HH signaling are focused on inhibitors acting either on SMO or on GLIs, or through multi-targeting approaches working on both upstream and downstream levels [52,199,200]. A number of SMO antagonists have entered in clinical phases but only two of them, vismodegib and sonidegib, have been approved by the FDA for the treatment of BCC. Nevertheless, the response to SMO antagonists has been variable in other HH-dependent tumors such as MB, showing relapse due to lack of efficacy on SMO drug-resistant mutations and SMO-independent HH activation [201][202][203]. These limitations arose the need to develop alternative approaches. Even if GLI inhibitors have shown promising results in preclinical studies, few of them have entered in clinical studies, and only the Arsenic Trioxide (ATO) has been approved by FDA for the treatment of AML [33,34,51,200]. Currently, ATO is in several clinical trials for both solid tumors and hematological malignancies, but there are only preclinical studies for some HH-driven cancers such as MB. 
These results highlight that further efforts need to be spent on the development of more effective anticancer strategies for the treatment of HH-dependent tumors. In the last years, the possibility of hitting a tumorigenic pathway at multiple regulatory levels has emerged as a valid therapeutic frontier in the field of oncological research. The genetic and molecular heterogeneity of HH-driven malignancies stimulates the identification of novel molecular players of this pathway as potential druggable targets. In particular, ubiquitylation deeply rules HH signaling, and its pharmacological inhibition is an attractive tool to hinder this pathway at a further crucial level of regulation. In this regard, DUBs are emerging as interesting therapeutic targets in various HH-related tumors, given their positive role in the control of the main performers of HH signaling. In addition to promoting the activity of SMO and GLIs, as here reviewed, DUBs affect HH signaling by regulating ciliogenesis and the ciliary recruitment of HH regulatory proteins, as described for USP14 [177]. Moreover, multiple components of the HH pathway can be stabilized by the activity of DUBs. USP8, here presented for the function exerted on Smo, also regulates Itch, a HECT E3 ligase involved in GLI1 ubiquitylation [204,205]. Notably, USP17, FAM/USP9X, and YOD1 have also been identified as modulators of Itch activity, enhancing its stability [206][207][208]. Recently, the involvement of the βTrCP-bound deubiquitylase enzyme USP47 has been described in HH signaling. The interaction of the positive HH regulator ERAP1 with USP47 induces the degradation of βTrCP, thus protecting GLIs from βTrCP-dependent degradation and stimulating HH activity [209]. (Table 2 lists the proteasome-associated DUB inhibitors b-AP15 [178] and VLX1570 [189], targeting UCHL5/USP14, and IU1 [195], targeting USP14.)
Increasing findings in this field of study highlight the interest in the development of more efficient
Skein theory for the D_{2n} planar algebras

We give a combinatorial description of the "D_{2n} planar algebra" by generators and relations. We explain how the generator interacts with the Temperley-Lieb braiding. This shows the previously known braiding on the even part extends to a 'braiding up to sign' on the entire planar algebra. We give a direct proof that our relations are consistent (using this 'braiding up to sign'), give a complete description of the associated tensor category and principal graph, and show that the planar algebra is positive definite. These facts allow us to identify our combinatorial construction with the standard invariant of the subfactor D_{2n}.

1 Introduction

Start with a category with tensor products and a good theory of duals (technically a spherical tensor category [1], or slightly more generally a spherical 2-category), such as the category of representations of a quantum group, or the category of bimodules coming from a subfactor. Fix your favorite object in this tensor category. Then the Hom-spaces between arbitrary tensor products of the chosen object and its dual fit together into a structure called a planar algebra (a notion due to Jones [3]) or the roughly equivalent structure called a spider (a notion due to Kuperberg [4]). Encountering such an object should tempt you to participate in:

The Kuperberg Program. Give a presentation by generators and relations for every interesting planar algebra. Generally it's easy to guess some generators, and not too hard to determine that certain relations hold. You should then aim to prove that the combinatorial planar algebra given by these generators and relations agrees with your original planar algebra. Ideally, you also understand other properties of the original category (for example positivity, being spherical, or being braided) in terms of the presentation.

The difficulty with this approach is often in proving combinatorially that your relations are self-consistent, without appealing to the original planar algebra. Going further, you could try to find explicit 'diagrammatic' bases for all the original Hom spaces, as well as the combinatorial details of 6j symbols or 'recombination' rules. This program has been fulfilled completely for the A_n subfactors (equivalently, for the representation theory of U_q(sl_2) at a root of unity), for all the subfactors coming from Hopf algebras [5,6,7,8], and for the representation categories of the rank 2 quantum groups [4,9]. Some progress has been made on the representation categories of U_q(sl_n) for n ≥ 4 [10,11,12]. Other examples of planar algebras which have been described or constructed by generators and relations include the BMW and Hecke algebras [3,13], the Haagerup subfactor [14], and the Bisch-Haagerup subfactors [15,16].

In this paper we apply the Kuperberg program to the subfactor planar algebras corresponding to D_{2n}. The D_{2n} subfactors are one of the two infinite families (the other being A_n) of subfactors of index less than 4. Also with index less than 4 there are two sporadic examples, the E_6 and E_8 subfactors. See [17,18,19,20] for the story of this classification. The reader familiar with quantum groups should be warned that although D_{2n} is related to the Dynkin diagram D_{2n}, it is not in any way related to the quantum group U_q(so_{4n}). To get from U_q(so_{4n}) to the D_{2n} diagram you look at its roots.
To get from the D_{2n} subfactor to the D_{2n} diagram you look at its fusion graph. The fusion graph of a quantum group is closely related to its fundamental alcove, not to its roots. Nonetheless the D_{2n} subfactor is related to quantum groups! First, it is a quantum subgroup of U_q(sl_2) in the sense of [20]. To make matters even more confusing, the D_{2n} subfactor is related via level-rank duality to the quantum group U_q(so_{2n-2}); see [21] for details.

The D_{2n} subfactors were first constructed in [22], using an automorphism of the subfactor A_{4n-3}. (This 'orbifold method' was studied further in [23,24].) Since then, several papers have offered alternative constructions: via planar algebras, in [25], and as a module category over an algebra object in A_{4n-3}, in [20]. In this paper we'll give an explicit description of the associated D_{2n} planar algebra, and via the results of [3,26] or of [27] this gives an indirect construction of the subfactor itself. Our goal in this paper is to understand as much as possible about the D_{2n} planar algebra on the level of planar algebras; that is, without appealing to subfactors, or any structure beyond the combinatorics of diagrams. We also hope that our treatment of the planar algebra for D_{2n} by generators and relations nicely illustrates the goals of the Kuperberg program, although more complicated examples will require different methods.

Our main object of study is a planar algebra PA(S) defined by generators and relations.

Definition 1.1 Fix q = exp(iπ/(4n-2)). Let PA(S) be the planar algebra generated by a single "box" S with 4n-4 strands, modulo the following relations (described here in words; see also the discussion at the start of Section 3): (1) a closed loop equals [2]_q = 2cos(π/(4n-2)) times the empty diagram; (2) the rotation relation, under which rotating S by one 'click' multiplies it by a factor of ±i; (3) the capping relation, under which attaching a cap to two adjacent boundary points of S gives zero; and (4) the two-S relation, under which two adjacent, unconnected S boxes can be replaced by the Jones-Wenzl idempotent f^(4n-4).

This paper uses direct calculations on diagrams to establish the following theorem:

The Main Theorem PA(S) is the D_{2n} subfactor planar algebra; that is, (1) the space of closed diagrams is 1-dimensional, (2) PA(S) is spherical, (3) the principal graph of PA(S) is the Dynkin diagram D_{2n}, and (4) PA(S) is unitary, that is, it has a star structure for which S* = S, and the associated inner product is positive definite.

Many of the terms appearing in this statement will be given definitions later, although a reader already acquainted with the theory of subfactors should not find anything unfamiliar. In this paper our approach is to start with the generators and relations for PA(S) and to prove the Main Theorem from scratch. The first part of the Main Theorem in fact comes in two subparts: first, that the relations given in Definition 1.1 are consistent (that is, PA(S)_0 ≠ 0), and second, that every closed diagram can be evaluated as a multiple of the empty diagram using the relations. These statements appear as Corollary 3.4 and as Theorem 3.6. Corollary 3.5 proves that PA(S) is spherical. Our main tool in showing all of this is a 'braiding up to sign' on the entire planar algebra D_{2n}; the details are in Theorem 3.2. It is well known that the even part of D_{2n} is braided (for example [20]), but we extend that braiding to the whole planar algebra with the caveat that if you pull S over a strand it becomes -S. In a second paper [21], we will give results about the knot and link invariants which can be constructed using this planar algebra. From these, we can derive a number of new identities between classical knot polynomials. In Section 4.1, we will describe the structure of the tensor category of projections, essentially rephrasing the concepts of fusion algebras in planar algebra language.
Some easy diagrammatic calculations then establish the third part of the main theorem. Section 4.2 exhibits an orthogonal basis for the planar algebra, and the final part of the main theorem becomes an easy consequence. Finally, Appendix A describes a family of related planar algebras, and sketches the corresponding results. In addition to our direct approach, one could also prove the main theorem in the following indirect way. First take one of the known constructions of the subfactor D 2n . By [3] the standard invariant of D 2n gives a planar algebra. Using the techniques in [25] and [30], find the generator and some of the relations for this planar algebra. At this point you'll have reconstructed our list of generators and relations for PA(S). However, even at this point you will only know that the D 2n planar algebra is a quotient of PA(S). To prove that D 2n = PA(S) you would still need many of the techniques from this paper. In particular, using all the above results only allows you to skip Section 3.3 and parts of Section 4.2 (since positive definiteness would follow from non-degeneracy of the inner product and positivity for D 2n ). We'd like to thank Stephen Bigelow, Vaughan Jones, and Kevin Walker for interesting conversations. During our work on this paper, Scott Morrison was at Microsoft Station Q, Emily Peters was supported in part by NSF Grant DMS0401734 and Noah Snyder was supported in part by RTG grant DMS-0354321. Background In this section we remind the reader what a planar algebra is, and recall a few facts about the simplest planar algebra, Temperley-Lieb. What is a planar algebra? A planar algebra is a gadget specifying how to combine elements in planar ways, rather as a "linear" algebra is a gadget in which one can combine elements, given a linear ordering. For example, Planar algebras were introduced in [3] to study subfactors, and have since found more general use. In the simplest version, a planar algebra P associates a vector space P k to each natural number k (thought of as a disc in the plane with k points on its boundary) and associates a linear map P(T ) : P k 1 ⊗ P k 2 ⊗ · · · ⊗ P kr → P k 0 to each 'spaghetti and meatballs' or 'planar tangle' diagram T with internal discs with k 1 , k 2 , . . . , k r points and k 0 points on the external disc. For example, gives a map from V 7 ⊗V 5 ⊗V 5 → V 7 . Such maps (the 'planar operations') must satisfy certain properties: radial spaghetti induces the identity map, and composition of the maps P(T ) is compatible with the obvious composition of spaghetti and meatballs diagrams by gluing some inside another. When we glue, we match up base points; each disk's base point is specified by a bullet. The reason for these bullets in the definition is that they allow us to keep track of pictures which are not rotationally invariant. For example, in Definition 1.1 we have used the marked points to indicate the way the generator S behaves under rotation. Nevertheless we use the following conventions to avoid always drawing a bullet. Instead of using marked points we will often instead use a "rectangular" picture in which some of the strings go up, some go down, but none come out of the sides of generators. This leaves a gap on the left side of every picture and the convention is that the marked points always lie in this gap. Further, if we neglect to draw a bounding disk, the reader should imagine a rectangle around the picture (and therefore put the marked point on the left). 
For example, the following pictures represent the same element of a planar algebra: S rotated by one 'click'. S , S There are some special planar tangles which induce operations with familiar names. First, each even vector space P 2k becomes an associative algebra using the 'multiplication' tangle: Second, there is an involution : P 2k → P 2k given by the 'dualising' tangle: Third, for each k there is a trace tr : P 2k → P 0 : If P 0 is one-dimensional, this map really is a trace, and we can use it (along with multiplication) to build a bilinear form on P 2k in the usual way. A subfactor planar algebra is the best kind of planar algebra; it has additional properties which make it a nice place to work. First and foremost, P 0 must be one-dimensional. In particular, a closed circle is equal to a multiple of the empty diagram, and the square of this multiple is called the index of the planar algebra. Note that this implies that the zero-ary planar operations, namely the 'vegetarian' diagrams without any meatballs, induce the Temperley-Lieb diagrams (see below, §2.2) as elements of the subfactor planar algebra. There is thus a map T L 2k → P 2k , although it need be neither surjective nor injective. Second, subfactor planar algebras have the property that only spaces for discs with an even number of boundary points are nonzero. Third, subfactor planar algebras must be spherical, that is, for each element T ∈ P 2 , we have an identity in P 0 : = . Fourth, there must be an anti-linear adjoint operation * : P k → P k such that the sesquilinear form given by x, y = tr(y * x) is positive definite. Further, * on P should be compatible with the horizontal reflection operation * on planar tangles. In particular, this means that the adjoint operation on Temperley-Lieb is reflection in a horizontal line. Finally note that we use "star" to indicate the adjoint, and "bar" to indicate the dual. We apologise to confused readers for this notation. One useful way to generalize the definition of a planar algebra is by introducing a 'spaghetti label set' and a 'region label set,' and perhaps insist that only certain labels can appear next to each other. When talking about subfactor planar algebras, only two simple cases of this are required: a 'standard' subfactor planar algebra has just two region labels, shaded and unshaded, which must alternate across spaghetti, while an 'unshaded' subfactor planar algebra has no interesting labels at all. From this point onwards, we'll be using the unshaded variety of planar algebra, essentially just for simplicity of exposition. The reader can easily reconstruct the shaded version of everything we say; checkerboard shade the regions, ensuring that the marked point of an S box is always in an unshaded region. This necessitates replacing relation 2 in Definition 1.1, so that instead the "2 click" rotation of the S box is −1 times the original unrotated box. The one point at which reintroducing the shading becomes subtle is when we discuss braidings in §3.2. The Temperley-Lieb (planar) algebra We work over the field C(q) of rational functions in a formal variable q . It is often notationally convenient to use quantum numbers. Now let's recall some facts about the Temperley-Lieb algebra. Definition 2.2 A Temperley-Lieb picture is a non-crossing matching of 2n points around the boundary of a disc, with a chosen first point. In practice, Temperley-Lieb pictures are often drawn with the points on two lines, and the chosen first point is the one on the top left. 
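The quantum numbers mentioned above are the standard ones; since no displayed definition appears in the text, we fix the usual convention here (this normalisation is an assumption on our part, but it is the one every formula below is consistent with):

% Standard quantum-integer convention (assumed, not quoted from the source):
\[
  [m]_q \;=\; \frac{q^{m}-q^{-m}}{q-q^{-1}},
  \qquad\text{so}\qquad
  [1]_q = 1, \quad [2]_q = q + q^{-1}.
\]
% In particular, at the special value q = e^{i\pi/(4n-2)} used throughout,
% the loop value is
\[
  \delta \;=\; [2]_q \;=\; 2\cos\!\Big(\frac{\pi}{4n-2}\Big),
\]
% which is the coefficient a closed circle contributes in Definition 2.3 below.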
Definition 2.3 The vector space TL_{2n} has as a basis the Temperley-Lieb pictures for 2n points. These assemble into a planar algebra by gluing diagrams into planar tangles, and removing each closed circle formed in exchange for a coefficient of δ = [2]_q = q + q^{-1}. Temperley-Lieb is a subfactor planar algebra (with the adjoint operation being horizontal reflection) except that the sesquilinear form need not be positive definite (see §2.3). Some important elements of the Temperley-Lieb algebra are the identity (so-called because it is the identity for the multiplication given by vertical stacking), the Jones projections in TL_{2n}, and the crossing in TL_4. Recall that the crossing satisfies Reidemeister relations 2 and 3, but not Reidemeister 1. Instead the positive twist factor is iq^{3/2}.

Temperley-Lieb when f^(4n-3) = 0

At any 'special value' q = e^{iπ/(k+2)} (equivalently δ = q + q^{-1} = 2cos(π/(k+2))), the Temperley-Lieb planar algebra is degenerate, with radical generated by the Jones-Wenzl projection f^(k+1). We therefore pass to a quotient, by imposing the relation f^(k+1) = 0. In the physics literature k would be called the level. We're interested in the case k = 4n-4, so q = e^{iπ/(4n-2)} and δ = 2cos(π/(4n-2)). For this value of q, the relation we impose is f^(4n-3) = 0. We record several facts about this quotient of Temperley-Lieb which we'll need later. (In the following diagrams, we're just drawing 3 or 4 parallel strands where we really mean 4n-5 or 4n-4 respectively; make sure you read the labels of the boxes.) [Lemma 2.4 is a diagrammatic identity among Jones-Wenzl idempotents; the picture is not reproduced here.]

Remark. Any relation in Temperley-Lieb also holds if superimposed on top of, or behind, another Temperley-Lieb diagram; this is just the statement that Temperley-Lieb is braided. We'll need to use all these variations of the identity in the above lemma later.

[Lemma 2.5 gives diagrammatic identities for a single strand crossing under 4n-5 parallel strands, together with an easy consequence; the pictures are not reproduced here. The twisted strand indicates just a single strand, while the 3 parallel strands actually represent 4n-5 strands.] The first two equalities hold in Temperley-Lieb at any value of q. The third equality simply specialises to the relevant value. Note that the crossings in the above lemma are all undercrossings for the single strand. Changing each of these to an overcrossing for that strand, we have the same identities, with q replaced by q^{-1}, and i replaced by -i.

First consequences of the relations

Recall from the introduction that we are considering the planar algebra PA(S) generated by a single box S with 4n-4 strands, with q = exp(πi/(4n-2)), modulo the relations of Definition 1.1.

Remark. Relation (1) fixes the index [2]^2_q of the planar algebra as a 'special value' as in §2.3, of the form [2]_q = 2cos(π/(k+2)). Note that usually at special values one imposes a further relation, that the corresponding Jones-Wenzl idempotent f^(k) is zero, in order that the planar algebra be positive definite. As it turns out, we don't need to impose this relation by hand; it will follow, in Theorem 3.1, from the other relations. According to the philosophy of [25,30], any planar algebra is generated by boxes which satisfy "annular relations" like (2) and (3), while particularly nice planar algebras require in addition only "quadratic relations" which involve two boxes. Our quadratic relation (4), in which the two S boxes are not connected, is unusually strong and makes many of our subsequent arguments possible. Notice that this relation also implies relations with a pair of S boxes connected by an arbitrary number of strands. We record for future use some easy consequences of the relations of Definition 1.1.
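Two standard Temperley-Lieb identities are cited repeatedly below as (2.1) and (2.2). For reference, and in our own notation (the displayed forms are assumed rather than quoted; e_m denotes the m-th Temperley-Lieb cup-cap generator, and f^(m) is viewed inside TL_{m+1} where needed), they are Wenzl's recursion and the partial trace relation:

% Wenzl's relation (cited below as (2.1)); standard form, assumed:
\[
  f^{(m+1)} \;=\; f^{(m)} \;-\; \frac{[m]_q}{[m+1]_q}\; f^{(m)}\, e_m\, f^{(m)}.
\]
% The partial trace relation (cited below as (2.2)); closing the last strand of a
% Jones-Wenzl idempotent back onto itself gives
\[
  \operatorname{tr}_{m}\!\big(f^{(m)}\big) \;=\; \frac{[m+1]_q}{[m]_q}\; f^{(m-1)} .
\]

With these in hand, the easy consequences announced above are the ones recorded next.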
· · · · · · (Here 2n − 2 strands connect the two S boxes on the left hand side.) More generally, if T, T ′ ∈ PA(S) m for m ≥ 4n − 4, and 4n − 5 consecutive cappings of T and T ′ are equal, then T = T ′ . Proof (1) This follows from taking a partial trace (that is, connecting top right strings to bottom right strings) of the diagrams of the two-S relation (4), and applying the partial trace relation from Equation (2.2). (2) This is a straightforward application of the rotation relation (2) and the capping relation (3). We can then use the two-S relation on the middle two S boxes of the second picture, and apply the partial trace relation (Equation (2.2)) to the resulting f (4n−4) . We thus see ... (4) Thanks to Stephen Bigelow for pointing out this fact. On the one hand, f (4n−3) is a weighted sum of Temperley-Lieb pictures, with the weight of 1 being 1: On the other hand, f (4n−3) = 0. Therefore If P ∈ T L 4n−3 and P = 1, then P has a cap somewhere along the boundary, so it follows from our hypotheses that P T = P T ′ , and therefore A partial braiding Recall the definition of a crossing given in §2.2. This still defines an element of PA(S) and, away from S boxes, diagrams related by a framed three-dimensional isotopy are equal in the planar algebra. However, one needs to be careful manipulating these crossings and S boxes at the same time. Theorem 3.2 You can isotope a strand above an S box, but isotoping a strand below an S box introduces a factor of −1. Thus these two pictures are equal. (2) This is essentially identical to the previous argument, except that the factor picked up by resolving the crossings of the second picture is hence the minus sign in the relation. Remark. Upon reading this paper, one might hope that all subfactor planar algebras are braided, or partially braided. Unfortunately this is far from being the case. For the representation theory of the annular Temperley-Lieb category for [2] q > 2, set out in [32] and in the language of planar algebras in [25], implies that one cannot pull strands across lowest weight generators, even up to a multiple. To see this, resolve all crossings in either of the equations in Theorem 3.2; such an identity would give a linear dependence between "annular consequences" of the generator. For the other [2] q < 2 examples, namely E 6 and E 8 , [33] shows that Equation (1) holds, but not Equation (2), even up to a coefficient. The [2] q = 2 cases remain interesting. Proof When a diagram has more than one S , use the above relations to move one of the S 's next to another one, then apply relation (4) of Definition 1.1 to replace the two S 's with a Jones-Wenzl idempotent. Resolve all the crossings and proceed inductively. Corollary 3.4 Every closed diagram is a multiple of the empty diagram. Proof By the previous corollary, a closed diagram can be written in terms of closed diagrams with at most one S . If a closed diagram has exactly one S , it must be zero, because the S must have a cap attached to it somewhere. If a closed diagram has no S 's, it can be rewritten as a multiple of the empty diagram using Relation 1, which allows us to remove closed loops. Corollary 3.5 The planar algebra PA(S) is spherical. Proof A braiding always suffices to show that a planar algebra is spherical; even though there are signs when a strand passes underneath an S , we can check that PA(S) is spherical simply by passing strands above everything else in the diagram. 
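For readers chasing signs through the proofs in this section, it may help to have the crossing in algebraic form. The source defines it by a picture; under the usual Kauffman-bracket conventions the following reconstruction (ours, not the source's) is the one consistent with the loop value [2]_q and the positive twist factor iq^{3/2} quoted in Section 2. Which smoothing carries A versus A^{-1} depends on the handedness of the crossing.

% Kauffman-style resolution of the crossing, reconstructed under stated assumptions:
\[
  \text{(crossing)} \;=\; A\cdot(\text{one planar smoothing}) \;+\; A^{-1}\cdot(\text{the other smoothing}),
  \qquad A = i\,q^{1/2}.
\]
% Consistency checks: a closed loop carries the value
\[
  -A^{2}-A^{-2} \;=\; q+q^{-1} \;=\; [2]_q,
\]
% and undoing a positive kink (Reidemeister 1) produces the twist factor
\[
  -A^{3} \;=\; i\,q^{3/2}.
\]
% Swapping over- and under-crossings exchanges q with q^{-1} and i with -i, as in Lemma 2.5.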
The planar algebra PA(S) is non-zero In this section, we prove the following reassuring result. Theorem 3.6 In the planar algebra PA(S) described in Definition 1.1, the empty diagram is not equal to zero. The proof is fairly straightforward. We describe an algorithm for evaluating any closed diagram in PA(S), producing a number. Trivially, the algorithm evaluates the empty diagram to 1. We show that modifying a closed diagram by one of the generating relations does not change the result of the evaluation algorithm. The algorithm we'll use actually allows quite a few choices along the way, and the hard work will all be in showing that the answer we get doesn't depend on these choices. 3 After that, checking that using a relation does not change the result will be easy. Figure 1). Further, for each S box do the following. Starting at the marked point walk clockwise around the box counting the number of strands you pass before you reach the point where the arc attaches. This gives two numbers; multiply the new picture by −i raised to the sum of these two numbers. Restart the algorithm on the result. (2) If there is exactly one S box in the diagram, evaluate as 0. Theorem 3.8 The algorithm is well-defined, and doesn't depend on any of the choices made. Proof We'll prove this in five stages. i If two applications of the algorithm use the same pairing of S boxes, and the same arcs, but replace the pairs in different orders, we get the same answer. ii If we apply the algorithm to a diagram with exactly two S boxes, then we can isotope the arc connecting them without affecting the answer. iii Isotoping any arc does not change the answer. iv Changing the point at which an arc attaches to an S box does not change the answer. v Two applications of the algorithm which use different pairings of the S boxes give the same answers. Stage ii This follows easily from the fact that Temperley-Lieb is braided, and the final statement in Lemma 2.5. Stage iii In order to isotope an arbitrary arc, we make use of Stage i to arrange that this arc corresponds to the final pair of S boxes chosen. Stage ii then allows us to move the arc. Stage iv Changing the point of attachment of an arc by one step clockwise results in a Temperley-Lieb diagram at Step 3 which differs just by a factor of i, according to the first part of Lemma 2.5. See the second part of Figure 1, which illustrates exactly this situation. This exactly cancels with the factor of −i put in by hand by the algorithm. Furthermore, moving the point of attachment across the marked point does not change the diagram, but does multiply it by a factor of (−i) 4n−4 = 1. Stage v We induct on the number of S boxes in the diagram. If there are fewer than 3 S boxes, there is no choice in the pairing. If there are exactly 3 S boxes, the evaluation is automatically 0. Otherwise, consider two possible first choices of a pair of S boxes. Suppose one choice involves boxes which we'll call A and B , while the other involves boxes C and D. There are two cases depending on whether the sets {A, B} and {C, D} are disjoint, or have one common element, say D = A. If the sets are disjoint, we (making use of the inductive hypothesis), continue the algorithm which first removes A and B by next removing C and D, and continue the algorithm which first removes C and D by next removing A and B . The argument given in Stage i shows that the final results are the same. Alternatively, if the sets overlap, say with A = D, we choose some fourth S box, say E . 
After removing A and B , we remove C and E , while after removing A and C we remove B and E , and in each case we then finish the algorithm making the same choices in either application. The resulting Temperley-Lieb diagrams which we finally evaluate in Step 3 differ exactly by the two sides of the identity in Lemma 2.4. (More accurately, in the case that the arcs connecting these pairs of S boxes cross strands in the original diagram, the resulting Temperley-Lieb diagrams differ by the two sides of that equation sandwiched between some fixed Temperley-Lieb diagram; see the remark following Lemma 2.4.) Proof of Theorem 3. 6 We just need to check that modifying a closed diagram by one of the relations from Definition 1.1 does not change the answer. Relation (1) Make some set of choices for running the algorithm, choosing arcs that avoid the disc in which the relation is being applied. The set of choices is trivially valid both before and after applying the relation. Once we reach Step 3 of the algorithm, the Temperley-Lieb diagrams differ only by the relation, which we know holds in Temperley-Lieb! Relation (2) Run the algorithm, choosing the S we want to rotate as one of the first pair of S boxes, using the same arc both before and after rotating the S . The algorithm gives answers differing just by a factor of −i, agreeing with the relation. See Figure 2. Relation (3) If there's exactly one S box, the algorithm gives zero anyway. If there's at least two S boxes, choose the S with a cap on it as a member of one of the pairs. Once we reach Step 3, the S with a cap on it will have been replaced with an f (4n−4) with a cap on one end, which gives 0 in Temperley-Lieb. Relation (4) When running the algorithm, on the diagram with more S boxes, ensure that the pair of S boxes affected by the relation are chosen as a pair in Step 1, with an arc compatible with the desired application of the relation. The planar algebra PA(S) is D 2n This planar algebra is called D 2n because it is the unique subfactor planar algebra with principal graph D 2n . To prove this we will need two key facts: first that its principal graph is D 2n , and second that it is a subfactor planar algebra. In §4.1, we describe a tensor category associated to any planar algebra, and using that define the principal graph. We then check that the principal graph for PA(S) is indeed the Dynkin diagram D 2n . In §4.2, we exhibit an explicit basis for the planar algebra. This makes checking positivity straightforward. The tensor category of projections of a planar algebra In this section we describe a tensor category associated to a planar algebra, whose objects are the 'projections'. This is essentially parallel to the construction of the tensor category of bimodules over a subfactor [29]. The tensor category described here is in fact isomorphic to that one, although we won't need to make use of this fact. We describe the category independently here, to emphasize that it can be constructed directly from the planar algebra, without reference to the associated subfactor. Definition 4.1 Given a planar algebra P we construct a tensor category C P as follows. • An object of C P is a projection in one of the 2n-box algebras P 2n . • Given two projections π 1 ∈ P 2n and π 2 ∈ P 2m we define Hom (π 1 , π 2 ) to be the space π 2 P n→m π 1 (P n→m is a convenient way of denoting P n+m , drawn with n strands going down and m going up.) • The tensor product π 1 ⊗ π 2 is the disjoint union of the two projections in P 2n+2m . 
• The trivial object 1 is the empty picture (which is a projection in P 0 ). • The dual π of a projection π is given by rotating it 180 degrees. This category comes with a special object X ∈ P 2 which is the single strand. Note that X = X . We would like to be able to take direct sums in this category. If π 1 and π 2 are orthogonal projections in the same box space P n (i.e. if π 1 π 2 = 0 = π 2 π 1 ), then their direct sum is just π 1 +π 2 . However, if they are not orthogonal the situation is a bit more difficult. One solution to this problem is to replace the projections with isomorphic projections which are orthogonal. However, this construction only makes sense on equivalence classes, so we use another construction. If C is a tensor category then Mat (C) has an obvious tensor product (on objects, formally distribute, and on morphisms, use the usual tensor product of matrices and the tensor product for C on matrix entries). If C is spherical then so is Mat (C) (where the dual on objects is just the dual of each summand and on morphisms the dual transposes the matrix and dualizes each matrix entry). Definition 4.5 A planar algebra is called semisimple if every projection is a direct sum of minimal projections and for any pair of non-isomorphic minimal projections π 1 and π 2 , we have that Hom (π 1 , π 2 ) = 0. Our definitions here are particularly simple because we work in the context of unshaded planar algebras. A slight variation works for a shaded planar algebra as well. Theorem 4.7 The planar algebra D 2n is semi-simple, with minimal projections f (k) for k = 0, . . . , 2n − 3 along with P and Q defined by The principal graph is the Dynkin diagram D 2n . Proof Observe that f (2n−2) · S = S (as the identity has weight 1 in f (2n−2) and all non-identity Temperley-Lieb pictures have product 0 with S ) and S 2 = f (2n−2) . We see that P and Q are projections. Let M = {f (1) , . . . f (2n−3) , P, Q}. By Lemmas 4.8 and 4.9 (below) every projection in M is minimal. Lemma 4.10 says there are no nonzero morphisms between different elements of M. By Lemmas 4.11, 4.12, and 4.13, we see that for each Y ∈ M, the projection Y ⊗ f (1) is isomorphic to a direct sum of projections in M. Thus, because every projection is a summand of 1 ∈ P n for some n, every minimal projection is in M. Finally from Lemmas 4.11, 4.12, and 4.13 we read off that the principal graph for our planar algebra is the Dynkin diagram D 2n . Remark. Since S * = S , all the projections are self-adjoint. The projections f (k) are all self-dual. The projections P and Q are self-dual when n is odd, and when n is even, P = Q and Q = P . These facts follow immediately from the definitions, and the rotation relation (2). Remark. The minimality of the empty diagram, f (0) , is exactly the fact that any closed diagram evaluates to a multiple of the empty diagram; that is, dim PA(S) 0 = 1. Proof The space Hom f (i) , f (i) consists of all diagrams obtained by filling in the empty ellipse in the following diagram. We want to show that any such diagram which is non-zero is equal to a multiple of the diagram gotten by inserting the identity into the empty ellipse. By Corollary 3.3, we need only consider diagrams with 0 or 1 S boxes. First consider inserting any Temperley-Lieb diagram. Since any cap applied to a Jones-Wenzl is zero, the ellipse must contain no cups or caps, hence it is a multiple of the identity. Now consider any diagram with exactly one S . 
Since S has 4n − 4 strands, and 2i ≤ 4n − 6, any such diagram must cap off the S , hence it vanishes. Proof The two proofs are identical, so we do the P case. The space Hom (P, P ) consists of all ways of filling in the following diagram. We want to show that any such diagram which is non-zero is equal to a multiple of the diagram with the identity inserted. Again we use Corollary ??. First consider any Temperley-Lieb diagram drawn there. Since any cap applied to P is zero, the diagram must have no cups or caps, hence it is a multiple of the identity. Now consider any diagram with exactly one S . Since S has 4n − 4 strands, any such diagram which does not cap off S must be (up to rotation) the following diagram. Since P S = P , this diagram is a multiple of the diagram with the identity inserted. Proof Suppose A and B are distinct Jones-Wenzl projections. Any morphism between them with exactly one S must cap off the S , and so is 0. Any morphism between them in Temperley-Lieb must cap off either A or B and so is zero. If A is a Jones-Wenzl projection, while B is P or Q, exactly the same argument holds. If A = P and B = Q, we see that the morphism space is spanned by Temperley-Lieb diagrams and the diagram with a single S box. Changing basis, the morphism space is spanned by non-identity Temperley-Lieb diagrams, along with P and Q. Nonidentity Temperley-Lieb diagrams are all zero as morphisms, because they result in attaching a cap to both P and Q. The elements P and Q are themselves zero as morphisms from P to Q, because P Q = QP = 0. Proof This is a well known result about Temperley-Lieb. The explicit isomorphisms are The fact that these are inverses to each other is exactly Wenzl's relation (2.1). Proof The explicit isomorphisms are: The fact that these are inverses to each other follows from Wenzl's relation and the fact that P and Q absorb Jones-Wenzl idempotents (ie, f (2n−3) · P = P and f (2n−3) · Q = Q). Proof We claim that the maps ... : and .. : f (2n−3) → P ⊗ f (1) are isomorphisms inverse to each other. To check this, we need to verify ... The first equality is straightforward: capping P = 1 2 (f (2n−2) + S) on the right side kills its S component, and then the equality follows from the partial trace relation and the observation that [2]q . The second will take a bit more work to establish, but it's not hard. We first observe that P · f (2n−3) = P , then expand both P s as 1 2 (f (2n−2) + S). Thus ... , and applying Wenzl's relation to the first term and the two-S relation to the fourth term yields The first two of these follows from the partial trace relation (Equation (2.2)) and f (2n−3) · S = S , and the third follows from f (2n−2) · S = S . Therefore, we conclude ... ... , which is what we wanted to show. The tensor product decompositions We do not prove the formulas that follow, and they are not essential to this paper. Nevertheless, we include the full tensor product table of D 2n for the sake of making this description of D 2n as complete as possible. Partial tensor product tables appears in [19, §3.5] and [20, §7]. Using the methods of this paper, one could prove that these tensor product formulas hold by producing explicit bases for all the appropriate Hom spaces in the tensor category of projections. However, this method would not show that these formulas are the only extension of the data encoded in the principal graph. 
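For completeness: the displayed definitions of the projections P and Q in Theorem 4.7 do not appear above in text form, but the proof of Lemma 4.13 uses the expression P = ½(f^(2n-2) + S) explicitly, so the intended definitions are presumably the following (our reconstruction):

% Reconstructed definitions of the two extra minimal projections:
\[
  P \;=\; \tfrac12\big(f^{(2n-2)} + S\big),
  \qquad
  Q \;=\; \tfrac12\big(f^{(2n-2)} - S\big).
\]
% Consistency check, using f^{(2n-2)} S = S and S^2 = f^{(2n-2)} from the proof of Theorem 4.7:
% P^2 = (1/4)(f^{(2n-2)} + 2S + f^{(2n-2)}) = P, similarly Q^2 = Q, and PQ = QP = 0,
% so P and Q are orthogonal projections with P + Q = f^{(2n-2)}, as required.

With these expressions in hand, we return to the tensor product decompositions.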
Much of this is proved in [19], except for the formula for f (j) ⊗ f (k) when 2n − 2 ≤ j + k ≤ 4n − 4 in Equation (4.1) and the formula for P ⊗ f (2k+1) and Q ⊗ f (2k+1) in Equation (4.2). Nonetheless, the methods of [19] readily extend to give the remaining formulas. With the same exceptions, along with Equations (4.3) and (4.4), these are proved in [20], by quite different methods. Further, [19] proves there is no associative tensor product extending the tensor product data encoded in the principal graphs D 2n+1≥5 with an odd number of vertices. Theorem 4.14 The tensor product structure is commutative, and described by the following isomorphisms. and when n is even and when n is odd A basis for the planar algebra In this section we present an explicit basis for the planar algebra, and use this to show that the generators and relations presentation from Definition 1.1 really does result in a positive definite planar algebra. Each vector space PA(S) m of the planar algebra also appears as a Hom space of the corresponding tensor category of projections, specifically as Hom (1, X ⊗m ). We'll use a standard approach for describing bases for semisimple tensor categories, based on tree-diagrams. For each triple of self-adjoint minimal projections p, q, r, we need to fix an orthogonal basis for Hom (1, p ⊗ q ⊗ r). Call these bases {v λ } λ∈B(p,q,r) . If we take the adjoint of v λ ∈ Hom (1, p ⊗ q ⊗ r), we get v * λ ∈ Hom (p ⊗ q ⊗ r, 1). In fact, we'll only need to do this when one of the the three projections p, q and r is just X . In these cases, we've already implicitly described the Hom spaces in Lemmas 4.11, 4.12 and 4.13. We can now interpret certain planar trivalent graphs as notations for elements of the planar algebra. The graphs have oriented edges labelled by projections, but where we allow reversing the orientation and replacing the projection with its dual. 5 The graphs have vertices labelled by elements of the sets B(p, q, r) described above (where p, q and r are the projections on the edges leaving the vertex). If ♯B(p, q, r) = 1 we may leave off the label at that vertex. To produce an element of the planar algebra from such a graph, we simply replace each edge labelled by a projection p in PA(S) 2m with m parallel strands, with the projection p drawn across them, and each trivalent vertex labelled by λ with the element v λ ∈ Hom (1, p ⊗ q ⊗ r). As a first example, Definition 4. 15 We call the norm of the element v λ , with λ ∈ B(p, q, r), the theta-symbol: θ(p, q, r; λ) := Definition 4.16 Fix a list of minimal projections (p i ) 0≤i≤k+1 , called the boundary. A tree diagram for this boundary is a trivalent graph of the form: It is labelled by • another list of minimal projections (q i ) 1≤i≤k−1 such that q i is a summand of q i−1 ⊗ p i for each 1 ≤ i ≤ k , or equivalently that B(q i , q i−1 , p i ) = ∅ (here we make the identifications q 0 = p 0 and q k = p k+1 ), • and for each 1 ≤ i ≤ k , a choice of orthogonal basis vector v λ i , with λ i ∈ B(q i , q i−1 , p i ). Theorem 4.17 The k -strand identity can be written as a sum of tree diagrams. (We'll assume there are no multiple edges in the principal graph for the exposition here; otherwise, we need to remember labels at vertices.) Let Γ k−1 be the set of length k − 1 paths on the principal graph starting at X . (Thus if γ ∈ Γ k−1 , γ 0 = X and the endpoint of the path is γ k−1 .) Then Proof We induct on k . 
When k = 1, the result is trivially true; the only path in Γ 0 is the constant path, with γ(0) = X , and there's no coefficient. To prove the result for k + 1, we replace the first k strands on the left, obtaining and then use the identity = γ k adjacent to γ k−1 tr(γ k ) θ(γ k−1 , γ k , X) (which certainly holds with some coefficients, by the definition of the principal graph, and with these particular coefficients by multiplying in turn both sides by each of the terms on the right) to obtain the desired result. Remark. Actually, the tree diagrams with boundary (p i ) give an orthogonal basis for the invariant space Hom (1, i p i ), but we won't prove that here. We'd need to exhibit explicit bases for all the triple invariant spaces in order to check positivity, and a slightly stronger version of Theorem 4.17. Proof To see that the tree diagrams are all orthogonal is just part of the standard machinery of semisimple tensor categories -make repeated use of the formulas = tr(p) = δ p=q δ µ=λ * θ(p, r, s; λ) tr(p) where λ ∈ B(p, r, s) and µ ∈ B(s, r, q). (In fact, this proves that the tree diagrams are orthogonal for arbitrary boundaries). The norm of a tree diagram is a ratio of theta symbols and traces of projections. The trace of f (k) is [k + 1] q , and tr(P ) = tr(Q) = [2n−2]q 2 , and these quantities are all positive at our value of q . Further, the theta symbols with one edge labelled by X are all easy to calculate (recall the relevant one-dimensional bases for the Hom spaces were described in Lemmas 4.11,4.12 and 4.13), and in fact are just equal to traces of these same projections: To see that the tree diagrams span, we make use of Theorem 4.17, and Lemma 4.10. Take an arbitrary open diagram D with k boundary points, and write it as D · 1 k . Apply Theorem 4.17 to 1 k , and observe that all terms indexed by paths not ending at f (0) are zero, by Lemma 4.10 (here we think of D as having an extra boundary point labeled by f (0) , so we get a map from f (0) to the endpoint of the path). In the remaining terms, we have the disjoint union (after erasing the innermost edge labeled by f (0) ) of a closed diagram and a tree diagram. Since all closed diagrams can be evaluated, by Corollary 3.4, we see we have rewritten an arbitrary diagram as a linear combination of tree diagrams. Therefore, PA(S) is indeed the subfactor planar algebra with principal graph D 2n . A A brief note on T n , a related planar algebra In this section we briefly describe modifications of the skein relations for D 2n which give rise to the planar algebras T n . The planar algebras T n have appeared previously in [20,35,36]. They are unshaded subfactor planar algebras in the sense we've described in 2.1, but they are not shaded subfactor planar algebras (the more usual sense). The most direct construction of the T n planar algebra is to interpret the single strand as f (2n−2) in the Temperley-Lieb planar algebra A 2n , allowing arbitrary Temperley-Lieb diagrams with (2n − 2)m boundary points in the m-boxes. (Another way to say this, in the langauge of tensor categories with a distinguished tensor generator, is to take the even subcategory of A 2n , thought of as generated by f (2n−2) .) This certainly ensures that T n exists; below we give a presentation by generators and relations. 
We consider a skein theory with a k-strand generator, where k = 2n+1 (allowing in this appendix boxes with odd numbers of boundary points), at the special value q = e^{iπ/(k+2)}, and relations analogous to those of Definition 1.1: (1) a closed loop is equal to 2cos(π/(k+2)), ... where Z_- = Z_+ = (-1)^{(k+1)/2}. (Recall that in the D_{2n} case discussed in the body of the paper we had Z_± = ±1.) These relations allow us to repeat the arguments showing that closed diagrams can be evaluated, and that the planar algebra is spherical. When k ≡ 3 (mod 4) and Z_± = +1, the planar algebra T_n is braided. When k ≡ 1 (mod 4) and Z_± = -1, one can replace the usual crossing in Temperley-Lieb with minus itself; this is still a braiding on Temperley-Lieb. One then has instead Z_± = +1, and so the entire planar algebra is then honestly braided. Notice that T_n is related to A_{2n} in two different ways. First, T_n contains A_{2n} as a sub-planar algebra (simply because any planar algebra at a special value of [2]_q contains the corresponding Temperley-Lieb planar algebra). Second, T_n is actually the even part of A_{2n}, with an unusual choice of generator (see above). The first gives a candidate braiding; as we've seen, it's only an 'almost braiding' when k ≡ 1 (mod 4). The second automatically gives an honest braiding, and in the k ≡ 1 (mod 4) case it's the negative of the first one. Following through the consistency argument of §3.3, mutatis mutandis, we see that these relations do not collapse the planar algebra to zero. Further, along the lines of §4.1, we can show that the tensor category of projections is semisimple, with {f^(0), f^(1), ..., f^((k-1)/2)} forming a complete orthogonal set of minimal projections. The element S in the planar algebra gives rise to isomorphisms f^(i) ≅ f^(k-i) for i = 0, ..., (k-1)/2. Further, the principal graph is T_{(k-1)/2}, the tadpole graph.
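In summary, the numerical data specifying T_n in this presentation are, restating the values quoted above in one display:

% Special values for the T_n presentation sketched above (k = 2n+1 as in the text):
\[
  k = 2n+1, \qquad
  q = e^{i\pi/(k+2)}, \qquad
  \delta = [2]_q = 2\cos\!\Big(\frac{\pi}{k+2}\Big), \qquad
  Z_- = Z_+ = (-1)^{\frac{k+1}{2}} ,
\]
% whereas the D_{2n} planar algebra of the main text has a (4n-4)-strand generator
% at q = e^{i\pi/(4n-2)} with Z_\pm = \pm 1.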
Endobronchial use of gastrointestinal retrieval net for an aspirated dental crown

Introduction/aim: Flexible fiberoptic bronchoscopy is generally the first line procedure for airway foreign body removal. However, removal may be challenging when surface and/or other characteristics make grasping the object difficult. We present a case in which we used a gastrointestinal retrieval net to successfully extract a dental crown, a type of foreign body with difficult-to-grasp surface characteristics.

Methods: A 72-year-old male aspirated a dental crown during an attempted molar crown fitting. Semi-emergent flexible fiberoptic bronchoscopy was undertaken using an Olympus bronchoscope with a 2.8 mm working channel. Attempts at retrieval using standard forceps and a four-wire airway retrieval basket were unsuccessful. The retrieval net (RescueNet, Boston Scientific) is a device used in gastrointestinal procedures to retrieve foreign objects, food boluses and tissue fragments. The device's external catheter is 2.5 mm in diameter and is passed through the working channel of an endoscope. The handle operates in a similar manner to conventional biopsy forceps and deploys a one-sided fishnet mesh basket with an adjustable string collar that can be manipulated to enclose a target.

Results: The dental crown was easily removed with the retrieval net on the second attempt. Upon review of the literature, endobronchial usage of retrieval nets was found to be rare.

Conclusion: Clinicians should be aware that gastrointestinal retrieval nets are an option for the retrieval of airway foreign bodies.

Case presentation

A 72-year-old male was urgently referred by his dentist after an attempted molar crown fitting during which the dental crown was dropped into the oropharynx and aspirated. On assessment, the patient was in no respiratory distress, but a monophonic right-sided wheeze was evident. Chest x-ray (Fig. 1A) showed the dental crown lodged in the right bronchus intermedius. Semi-emergent flexible fiberoptic bronchoscopy was undertaken using an Olympus videobronchoscope (BF-1TH190) with a 2.8 mm working channel. In anticipation of a technically challenging extraction, biopsy forceps, an airway four-wire (Dormia) retrieval basket and a gastrointestinal retrieval net were made available. The retrieval net (RescueNet Retrieval Net, Boston Scientific) is a device used in gastrointestinal procedures to retrieve foreign objects, food boluses and tissue fragments such as polyps. The device's external catheter is 2.5 mm in diameter and is passed through the working channel of an endoscope. The handle operates in a similar manner to conventional biopsy forceps and deploys a one-sided fishnet-style mesh basket with an adjustable string collar that can be manipulated to enclose a target (Fig. 1B). As anticipated, attempts at removal with toothed biopsy forceps and a four-wire airway retrieval basket were unsuccessful, as the smooth metallic surface of the dental crown precluded a secure grip with either device. The dental crown was successfully removed with the retrieval net on the second attempt (Fig. 1C); the size of the object necessitated the en bloc removal of the bronchoscope, retrieval net and crown. The same dental crown was successfully fitted at the patient's next dental visit and the patient's respiratory status continued to be unremarkable.
Review of the literature Flexible fiberoptic bronchoscopy is generally considered to be the first line procedure for airway foreign body removal and is successful in majority of cases [1,2]. However, this can present technical challenges due to the object's location, size, rotation, surface, its organic or inorganic nature and propensity to fragment. Various tools have been advocated, including simple suction, forceps, wire baskets, snares and cryobiopsy [3]. The best option is likely to vary from case to case and may be operator dependent. Where flexible bronchoscopy fails, rigid bronchoscopy may be considered and in rare cases, surgical thoracotomy may be warranted [4]. Publications detailing the bronchoscopic use of retrieval nets are rare [5]. Retrieval nets have characteristics that make them particularly suitable for situations such as extraction of small smooth objects. They may be considered when difficulties are encountered in securely grasping the foreign body with tools such as forceps or a basket, or the target object may be fragmented by manipulation. A particular advantage over Dormia airway wire baskets is the presence of a net to retain the encapsulated object. This case also highlights the importance of pre-procedural planning and ensuring the availability of suitable instrumentation. If the retrieval net had not been available, the procedure would have been abandoned and another procedure would have to be undertaken, at considerable expense and inconvenience. Retrieval nets may be available at centers where gastrointestinal procedures are performed, independent of whether interventional pulmonology is present at that site. Medical and dental practitioners should be aware of the potential, albeit rare, for foreign body aspiration during dental procedures. Risk factors for aspiration in adults include an altered level of consciousness, cerebrovascular disease, and dental procedures [3,6]. During dental procedures an additional risk factor for aspiration is performing the procedure in a supine position [7]. Prompt intervention may prevent complications such as post obstructive pneumonia and hemoptysis [6]. Mortality is rare but has been reported [8]. Conclusion Foreign body aspiration can occur during dental procedures. Pulmonologists should consider using gastrointestinal retrieval nets to facilitate the extraction of airway foreign bodies during flexible bronchoscopy. Retrieval nets may be available in facilities where interventional pulmonology is unavailable. Funding and conflicts of interest Nil. Prior presentation Nil. Declaration of competing interest I confirm that the co-authors have no conflict of interest or sources of funding to declare. Fig. 1. A, Chest x-ray demonstrating the dental crown lodged in the distal bronchus intermedius. B, Retrieval net device: objects may be captured in the white mesh net. The handle controls the extension and retraction of the blue collar string. C, Dental crown following endoscopic removal with a retrieval net. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)
WASTAGE OF SCHOOL MATERIAL RESOURCES AND SECONDARY SCHOOL SYSTEM EFFECTIVENESS: EVIDENCE FROM A SURVEY IN NIGERIA Article History Received: 16 April 2020 Revised: 18 May 2020 Accepted: 22 June 2020 Published: 10 July 2020 This study was undertaken to assess the causal effects of the wastage of school material resources and school system effectiveness in secondary schools in Cross River State of Nigeria. A survey was conducted using a sample of 1,480 respondents (271 principals, 396 vice principals, and 813 teachers. Data were collected through two instruments Wastage of School Material Resources Questionnaire (WSMRQ) designed by the researchers, validated by three experts, and with Cronbach alpha reliability of .895; School System Effectiveness Scale (SSES) designed and validated psychometrically by Bassey, Owan, and Eze (2019) with Cronbach reliability values of .982, .983, and .930 for the three sub-scales, and a reliability coefficient of .941 for the overall instrument. Descriptive statistics and structural equation modelling were employed in analysing collected data. Findings of the study revealed that school material resources are wasted in different and numerous ways. Some of these ways are significant while others are not. Teaching equipment that are wasted in schools include manuals, desks, workbooks, charts, projectors, playgrounds, drawing books, chalks, textbooks, laboratories, chalkboard/whiteboards, markers, handbooks, and computers. It was discovered also that wastage of school farm resources, buildings, and teaching equipment have a direct significant but negative effect on the effectiveness of schools, accounting for 17% of the total variance in the effectiveness of the school. It was concluded, that increment in the wastage of school material resources will cause the effectiveness of schools to decline. The implication of the finding of this study is discussed for global best practices. INTRODUCTION The attainment of full inclusion, rapid economic development, religious unity, and political stability in any country will be a mirage if schools that are supposed to inculcate good virtues in the life of students are failing in their responsibilities. This calls for effective school systems where school leaders are able to integrate and unify available human and material resources towards reaching set goals. Some key characteristics of effective secondary Indices of effective schools, revealed that the effectiveness of many secondary schools in Nigeria seems to be very poor. As such, it requires urgent attention. This is manifested first in the poor service delivery of teachers and secondly, the poor quality of products supplied to both the tertiary education level and the society in general (Bassey et al., 2019). A lot of problems seem to be bedevilling the effectiveness prospect of many secondary schools including the persistent poor performance of secondary school students in normed-and criterion-referenced examinations. Many secondary school leavers are still struggling to pass the Unified Tertiary Matriculation Examination (UTME) as well as aptitude test conducted by universities (Robert & Owan, 2019). For instance, it was revealed that out of 10,423,187 secondary school leavers who sat for the UTME in 2018, only 4.46% scored from 200 and above; 13.51% scored between 180 -199; 21.59% scored between 160 and 179; 29.71% ; scored 140 -159; 30.73% scored between 120 -139 marks (Robert & Owan, 2019). 
This high rate of failure in UTME has been attributed to the high rate of examination malpractice among students while in secondary schools . The prevailing issues of cultism among students, indiscipline and high level of truancy among teachers and students, poor students' attitudes towards academic activities, sales of wares by teachers when they are supposed to be teaching, poor human relations between staff in schools are some compelling reasons of ineffectiveness. Naivety and lackadaisical attitudes of many secondary school principals towards school administration, among others, are other pointers that many secondary schools are ineffective (Madukwe, Owan, & Nwannunu, 2019). Many stakeholders have been raising questions regarding the quality of schools these days, and are pointing accusing fingers consequently, calling for immediate attention. Efforts has been made by government and other stakeholders to curb the menace of secondary school system ineffectiveness which is prevalence in the provision of school facilities, such as laboratory, library, classroom, staff rooms, and library facilities; frequent payment of staff salaries; provision of retraining opportunities and supply of ICT gadgets. Such remedy of the school system also includes recruitment of new personnel; participative school management with community collaboration; and improved parental support in children education (Aina, Olanipekun, & Garuba, 2015;Arop, Owan, & Ekpang, 2018;. As a matter of fact, teachers are no longer witnessing delays in receiving their monthly salaries in Cross River State (Owan, 2018). All these measures were anticipated to boost the overall effectiveness of schools through teachers and students, even though the reality still appears to be a nightmare. It was based on this failure that this study was undertaken to shift the paradigm from the highly suspected correlates of school effectiveness to an area that little or no attention appears to have been paid, which could also affect the effectiveness level of schools. This emerging area presumed to affect the quality of schools is the wastage of school material resources. Wastage of school material resources was considered in this study because these resources offer support to the school and could be used to facilitate effective teaching and learning. Wastages is an unprofitable and uneconomical use of time and resources (Adamu, 2000;Oyetakin, 2011;Samuel, 2004). "Wastage in respect to education refers to human and material resources spent or 'wasted' on students who have to repeat a grade or who drop out of school before completing a cycle" (Ngome & Kikechi, 2015). Wastage denotes the school's inability or inefficiency of a school to make use of available opportunities and resources in the development of students' cognitive, affective, and psychomotor attributes, that are needed for a productive living and life-long learning (UNESCO, 1998). It is also wastage when students cannot pass examinations and other qualifying tests they have registered for after attaining a certain level of education (Akindele, 2015;Charles, 2013;Muhammad & Muhammad, 2011). The dropout and repetition rates in schools are usually considered as two components of educational wastage (Ngome & Kikechi, 2015). Wastage of school material resources could be seen as the complete or partial destruction, over-utilisation, and under-utilisation of materials available in the school. 
Thus, when materials are provided or supplied to schools, and such materials are left to be destroyed by either man, animals, insect, or other biological processes, without serving the need for which they were provided, it is seen as school material resource wastage in the context of this study. Imagine a school where buildings are dilapidated without any efforts made to refurbish them or a school where desk are left in an open atmosphere for both rain, sun, and other natural processes to impede on them. It will lead to destruction and such materials may no longer be available or in good shape for effective utilization. School wastage and stagnation could be affected by various factors such as poverty (inability to pay school fees), school distance, transport communications, teachers' quality, school regime, social environment, school size, school type, violence, and many others (Achoka, 2007;Lyngkhoi, 2017). Materials that are not optimally utilised in line with provisional prescriptions, specifications, and guidelines are also considered as wastage. This is because in cases where lots of facilities are provided beyond the enrolment figures or beyond the number of available users in schools, a large portion of such resources are left unutilized and may rot consequently. This type of wastage is borne out of the over-supply of materials or the under-utilisation of materials that have been provided. Another instance of school material wastage can be seen in a hypothetical example where textbooks are provided by the Government for distribution to students, and for selfish reasons, the principals of some secondary schools refused to distribute same to the target recipients. These books were then stocked in the school library only to be attacked by white ants and termites. In another case, wastage can also be seen in situations where resources are unevenly distributed across classes, sections, schools, or regions. Such that some classes, schools or areas have more than their needs, while other classes, schools, or areas have little or nothing. Over-utilisation of materials constitutes a waste if available resources are used beyond their carrying capacity or above the degree, they ought to be utilised. Over-utilisation of school material resources occurs due to undersupply of school facilities such that the number of available users exceeds the carrying capacity of facilities. For instance, many school desks could be easily destroyed if the number of students sitting on them is more than the expected number. Imagine 10 students sitting in seats designed for six students. All these forms of material resource wastages could affect the quality of service delivery by teachers, students' academic performance, and the overall effectiveness of schools. This is because many schools could lose their staff, students or both to other wellequipped institutions due to the lack or poor management of required facilities (Dike, 2005). Poor maintenance of school facilities also promotes poor academic performance among learners (Danestry, 2004). There is a nexus between availability of school facilities, students learning and academic excellence in schools (Danestry, 2004). Thus, the optimal use of meagre resources allocated to education and the minimization of wastages can only be guaranteed to ensure efficiency in the educational sector (Akindele, 2005). More so, poor maintenance of facilities in schools can lead to health and sanitary condition problems. 
For instance, broken toilets that are not repaired in schools would encourage indiscriminate defecation, that would in turn, give rise to epidemics, infection and other contagious diseases. Thus, putting the lives of the students, members of staff, the immediate neighbourhood and the nation at large at risk (Oladipo & Oni, 2010). It was based on this background that this study was undertaken to assess school material resource wastage and how it affects the effectiveness of the school system. THEORETICAL FRAMEWORK This study adopts the school effectiveness model (Bassey et al., 2019). This model prescribes that school effectiveness can be traced to two key determinant components -the effectiveness of teachers and the effectiveness of students (Bassey et al., 2019) as shown in Figure 1. In the model Figure 1, teachers' effectiveness was measured based on indices such as "physical appearance/dressing, subject mastery, lesson preparation, punctuality, instructional delivery, classroom management techniques, students' engagement in learning, understanding of learners' individual differences, monitoring of students' progress, students reinforcement, keeping of students' records, relationships with students, getting students feedback during lessons, lesson evaluation techniques, and academic performance of students" (Bassey et al., 2019). The authors theorised that students effectiveness is concerned with proxies such as "punctuality to classes, time management, students classroom behaviour, class attendance frequency, communication skills, note-taking, attitudes towards assignment, study rate, adherence to school rules and regulation, attitudes towards co-curricular activities, relationship with other students, level of creativity, and examination results" (Bassey et al., 2019). The model predicted that the effectiveness of both teachers and students manifested by the variables listed above will create an effective school system that will be able to implement planned policies, raise high expectations, achieve quality school leadership, maintain cohesion among staff, and build good relationship with the host community. Other benefits of effective school system are provision of good school climate, improving teachers' dedication and students' academic performance, increased graduates enrolment into tertiary institutions, provision of environmental safety, and the attainment of set goals (Bassey et al., 2019). The implication of this model to the present study is based on the moderating roles teachers' and students' effectiveness will play in linking the wastage of school material resources to school system effectiveness. According to Bassey et al. (2019) teachers' and students' effectiveness are two important measures of school system effectiveness that cannot be left out. Therefore, this study intends to modify this model slightly, by introducing three independent variables -wastage of school farm resources, buildings, and teaching equipment. The modified model of Bassey et al. (2019) based on the variables of this study, is hypothesised in Figure 2. The model in Figure 2 was hypothesised by the researchers to indicate that wastage of school farm resources, buildings, and teaching facilities have a direct relationship to teachers' effectiveness, students' effectiveness, and school system effectiveness respectively. 
Teachers and students' effectiveness respectively, were hypothesised to moderate the relationship between wastage of school farm resources, buildings, and teaching facilities to school system effectiveness respectively. Thus, wastage of school material resources was assessed in this study in terms of wastage of school farm resources, wastage of school buildings and wastage of teaching equipment. Figure-2. A hypothesised causal model of wastage of school material resources and system effectiveness with teachers' and students' effectiveness as moderating variables. Wastage of School Farm Resources Wastage of school farm resources refers to the inability of the school management to effectively maintain and efficiently utilize agricultural outputs and related resources gathered from farming activities in the school. Farming activities in schools is a good source of generating internal revenues to supplement external funding from the Government and other stakeholders. Therefore, the way school leaders manage the output from these farms could go a long way to affect the funding patterns in schools. To Oladele and Akinsorotan (2007) school farms form part of the methods adopted by school heads to generate revenue internally. The harvest of agricultural products coupled with students' engagement in practical agricultural activities both in terms of crop production and animal husbandry, could also generate revenues for the school. Thus, if these materials are not commercialized for income purposes, the funds they would have generated will be missing. Farm products could be wasted if school leaders divert the monies recovered from the sales of agricultural materials produced in the school into personal projects that do not benefit the school. The consumption of school farm products by staff without any financial returns to the school account is also a waste. Another way in which farm products can be wasted is if they are allowed to be destroyed, stolen or depreciate in value before they are sold. It had been stated that many school principals rated the school farm as a very important source which provides students with practical experience in agriculture, and promotes agricultural skills in students through agricultural experiments and other opportunities to carryout agricultural demonstration in school plots, among others (Agili, 2014). Unfortunately, many secondary school heads fail to utilize available school farms as a result of nonavailability of seeds, fertilizers, feeds and other operating devices; inadequate training offered to teachers on the use of modern and sophisticated farming implements for practical and instructional purposes; the unserious attitude on the part of the administrators (Ikeoji, Agwubike, & Disi, 2007). The stated problems have contributed in no small measure to the wastage of school farm resources which leaves the school only in the hands of Parents Teaching Association and other stakeholders for survival and funding. Empirically, a study found that agricultural orientation is given to students through the school farm, especially those with poor or no agricultural background (Chukwudum & Ogbuehi, 2013). Furthermore, it was reported that school farms are used to stir learners' interest and love for agriculture, and help schools to generate revenue internally (Chukwudum & Ogbuehi, 2013). 
Thus, the funds generated from agricultural proceeds could be used to improve other sectors of the school such as the provision of new facilities, employment of part-time teachers to supplement the available full-time teachers, building new blocks, and so on. Wastage of School Buildings It is widely believed that school building to a larger extent has a significant influence on the effectiveness of a school system (McGowen, 2007). By school building we mean physical structures, classrooms, laboratories, libraries, toilets, staff rooms, walls, roofs, drains, doors, windows, floors and also fix furniture. One of the most important facilities necessary to aid rapid school progress and ensure instructional flow is the school buildings (Douglas & Ransom, 2006). Generally, the maintenance culture of Nigerians is very low as many individuals believe that school buildings and other facilities are government properties. Thus, it does not affect them whether they are maintained or vandalized. It is no surprise because there is a popular saying in the country that "government property is no man's property", hence, the high rate of negligence by even students who should be the direct beneficiaries of school buildings and other material resources. These acts of negligence seem to have affected schools' administration as many secondary schools today, lack classrooms, adequate buildings, and other shortcomings that may be attributed to students' failure to protect material resources provided by the government and other stakeholders. Consequently, it commonplace to see many streams of classes merged into one single room with a class size of over 50 students. This increases the teacher-student ratio above the recommended ratio of 1:35 students by the federal ministry of education (Federal Republic of Nigeria, 2013). Empirically, Munyi and Orodho (2015) investigated the causes of wastage of school building in public secondary schools from a Kenyan perspective. Findings of the study indicated that many schools lacked enough resources to maintain school buildings, and as a result, there is also increased dropout, repetition as well as low completion and transition rates. This finding implies that there is ineffectiveness of many schools. Wastage of Teaching Equipment Teaching equipment refers to physical and observable resources that are used to facilitate teaching, learning, or both. Teaching facilities include chalk/whiteboards, chalks/markers, textbooks, globes, charts, specimens, map /atlas, workbooks, drawing books, School field (for teaching during co-curricular activities), and so on. This variable was considered because these materials could be wasted through stealing, careless use, malicious destruction, and under-utilization. The wastage of these teaching materials may affect the quality of instruction that will be passed from teachers to students, and the quality of teaching goes a long way to affect teachers' effectiveness and students' performance. It was shown in a study that students' overall academic performance is affected by the relative and composite effects of three critical factors -the condition, effective management and adequacy of educational resources; the combined effects of these three factors outweighed the composite interaction of family background, school attendance, socio-economic status and behaviour on students' overall performance (Morgan, 2000). 
It is warned that for educational objectives to be attained, teaching resources are indispensable and should be given a central priority (Abdulkareem, 2011). The only way to achieve this is through principals' proficient leadership and management capabilities; as well as the timely and adequate provision of school facilities (Abdulkareem, 2011). Using both quantitative and qualitative techniques, a study established that a significant correlation between independent variables such as principal's proficiency, creativity, educational objectives, and the management of school facilities (Uko, 2015). Similarly, Ogbuanya, Nweke, and Ugwokem (2017) discovered that planning, organizing, controlling and coordinating are needed in the management of material resources for effective teaching and learning. The results of the null hypotheses tested revealed that there was no significant difference in mean responses of the respondents on the planning, organizing, controlling and coordinating strategies for proper management of material resources for effective teaching of electrical/electronic technology education. The Present Study Having explored and reviewed literature related to the various areas in this study, it was discovered that there appears to be scanty literature on wastage of school material resources. Previous studies on educational wastage have focused more on areas such non-employment of school leavers, class repetition rate, premature withdrawals, brain drain, school drop-out rate, misguided education, high rate of failures, and stagnation (Ajayi & Mbah, 2008;Babalola, 2014;Durosaro, 2012;Murithi, 2006;Ngome & Kikechi, 2015;Orwasa & John, 2017;Oyetakin, 2011;Rajesh & Prohlad, 2014;Shiba, 2010;UNESCO, 1998;Yusuf, 2014). These studies have uniqueness since they were carried out in different locations, but with similarity in focus. The present study takes a shift from the conventional model to explore wastage in terms of school material resources. This study was designed to extend the works in the literature into an area that currently appears unattended to. Thus, this study was anticipated to break new grounds and contributes to the literature by filling existing gaps (which is the paucity of research literature on wastage of school material resources). METHODS This study adopted the descriptive survey research design which is aimed at observing and describing events, facts, and phenomena as they occur in the population. This design does not warrant the manipulation of independent variables. The design was considered appropriate to the study because the researcher made use of data obtained through questionnaires to describe the school materials that are wasted and how they are wasted. The population of this study comprised all the principals (N = 271), vice-principals (N =396) and teachers (6,233) distributed across 271 public secondary schools. Proportionate stratified random sampling technique was adopted in selecting 100% of the principals and vice-principals, and 13% of the available teachers. Thus, all the 271 principals, 396 vice principals were all included in the study, while 813 teachers were randomly selected. This resulted in an overall sample of 1480 respondents. The instruments used for data collection were two sets of questionnaires - Wastage of School Material Resources Questionnaire (WSMRQ) and School System Effectiveness Scale (SSES). The former was designed by the researchers, validated by three experts in educational management, with Cronbach alpha reliability of .895. 
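The .895 figure quoted for the WSMRQ is a Cronbach alpha internal-consistency coefficient. As a minimal sketch of how such a coefficient is obtained from a respondents-by-items score matrix, the Python fragment below applies the standard Cronbach formula to simulated four-point Likert responses; the simulated data, random seed, and effect sizes are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]                         # number of items in the (sub)scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: 1,480 simulated respondents answering 39 items
# on the 4-point scale (1 = Strongly Disagree ... 4 = Strongly Agree).
rng = np.random.default_rng(0)
base = rng.normal(size=(1480, 1))                     # shared "attitude" factor
noise = rng.normal(scale=0.8, size=(1480, 39))
scores = np.clip(np.round(2.5 + base + noise), 1, 4)  # correlated Likert responses

print(f"alpha = {cronbach_alpha(scores):.3f}")
```

With responses this strongly correlated across items, the computed alpha comes out high, which is the pattern the reported .895 and .941 coefficients reflect.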
The instrument has three sub-scales measuring wastage of school farm resources (14 items), school buildings (10 items), and teaching equipment (15 items). In total, the instrument has a total of 39 items. All the items in the questionnaire (WSMRQ) were all organised into the revised four-points Likert scale of Strongly Agree, Agree, Disagree, and Strongly Disagree. The latter (SSES) was designed and validated psychometrically by Bassey et al. (2019) with Cronbach reliability values of .982, .983, and .930 for the three subscales, and a reliability coefficient of .941 for the overall instrument (see Bassey et al. (2019)). Data for the study were obtained from primary sources, as copies of the instrument were administered by the researchers to the respondents. The administration exercise was done on different days based on the schedule prepared by the researchers. Three trained research assistants supported the researchers in collecting data for the study, with efforts made to avoid any loss. In the end, all the administered copies of the instruments were successfully retrieved from the respondents without any shortage. Thus, representing a 100 per cent rate of return on the administered copies of the instrument. All the collected data were scored accordingly for negative and positively worded items, while a computer spreadsheet program (Microsoft Excel 2019 version) was used in coding the data on a person-by-item matrix. Descriptive statistics such as frequency counts, percentage, mean, standard deviation, and bar chart was used to analyse the coded data and answer the research questions; while a structural equation modelling approach (Path analysis) was employed in testing null hypothesis and in building the causal model of the study. Demographic Characteristics The participants of this study were 62% males and 38% females. Those who are principals are 18.3%, vice principals are 26.8% and teachers are 54.9%. Most of the participants (55.3%) were first degree holders, while masters, doctorate, and OND/NCE degree holders were 29.5%, 9%, and 6.1% respectively. A higher percentage of respondents (26.4%) had between 10-14 years' work experience, 24.2% had between 5-9 years' work experience, 18.5% had less than 5 years' experience, 17.4% had 15-19 years' experience, and only 13.4% of the respondents had 20 years work experience and above. In terms of marital status, 66.1% of the respondents are married, 30.1% are single, and only 3.8% had divorced or witnessed a divorce. For age, it was discovered that 23.2% of the respondents are aged 35-44 years, 22.6% are less than 25 years, 21.6% are between 45-54 years, 16.6% are between 25-34 years and 16% of the respondents are either 55 years or older. Furthermore, male respondents who are principals, viceprincipals, and teachers stood at 18.8%, 27.8% and 53.4% respectively; and female respondents who are principals, vice-principals and teachers are 17.4%, 25.1%, and 57.5% respectively. Research Question 1 In what ways are school farm resources wasted in public secondary schools in Cross River State? This research question was answered using descriptive statistics such as mean and standard deviation. The results of the analysis revealed several significant ways in which school farm resources are wasted in public secondary schools. It was discovered that school farm products are usually stolen by members of the external community before or after they are harvested ( =2.520, SD = 1.108). 
Students usually steal farm products before, during, and after harvest ( =2.500, SD = 1.106). The output from farms are often shared only among staff in the school for private consumption purposes ( = 2.501, SD=1.104). Farm produce are usually allowed to get rotten while being stored for future purposes ( =2.500, SD= 1.104). Monies realised from the sales of farm output are usually shared among members of the school farm management committee ( = 2.500, SD=1.122). Farm resources are also wasted due to poor farm management practices that decrease the yield of crops due to attacks from diseases ( = 2.500, SD=1.100). Students usually cause damages to farm products due to carelessness while gathering harvested crops ( = 2.500, SD=1.105). Damages are usually caused to crops during harvest which makes them non-marketable ( = 2.500, SD=1.100). School farms are operated as staff personal resource rather than a source of generating internal revenue for the school ( = 2.530, SD=1.100). Fertilizers are not usually applied to crops in the school farm to increase yields ( = 2.510, SD=1.108). All these ways mentioned above are considered significant ways in which school farm resources are wasted since their corresponding mean are equal to or greater than the criterion mean value of 2.500. However, other nonsignificant ways (mean less than 2.500) in which school farm resources are wasted include: damages caused by rodents when farm output are gathered after harvest ( = 2.484, SD=1.107); poor preservation of farm products due to lack of storage facilities ( = 2.484, SD=1.107); funds derived from the sales of school farm resources are not used in running the school ( = 2.490, SD=1.120); animals are not prevented from grazing crops in school farms due to poor fencing ( = 2.480, SD=1.120). Research Question 2 What is the extent to which school buildings are wasted in public secondary schools in Cross River State? Descriptive statistics such as mean and standard deviation were used in analysing the responses. The results indicated that school buildings are wasted to a significant extent in the following ways: Buildings are not regularly maintained to prevent against damage ( =2.523, SD= 1.120); buildings are not used in accordance to provisional prescriptions ( =2.524, SD= 1.109); damaged school buildings are not usually repaired ( =2.526, SD= 1.107); rare inspection of school buildings to determine their state ( =2.511, SD= 1.118); poor re-painting of school buildings when they are washed out ( =2.525, SD= 1.120); Many students cause damage to school building due to poor security ( =2.505, SD= 1.202); rural dwellers sometimes destroy available buildings in schools ( =2.520, SD= 1.119); and school buildings are often used beyond their specified carrying capacity ( =2.516, SD= 1.122). However, the following are non-significant ways in which school buildings are wasted in secondary schools: many school buildings are not used for any purpose ( = 2.482, 1.122); and woods attached to school buildings are not often treated with chemicals to prevent early damage ( = 2.478, 1.125). Mean values equal to or greater than 2.500 are considered are significant ways while values below 2.500 are considered as non-significant ways based on the criterion mean of 2.500. Research Question 3 What teaching equipment are wasted in public secondary schools in Cross River State? This research question was answered using mean and simple percentage as descriptive statistics. 
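The decision rule just stated (an item mean of 2.500 or more on the four-point scale marks a significant way, anything lower a non-significant one) can be expressed compactly, as in the sketch below. The item labels and simulated scores are hypothetical stand-ins for the actual WSMRQ responses.

```python
import numpy as np

CRITERION_MEAN = 2.500  # midpoint criterion on the 4-point Likert scale

# Hypothetical item scores (one array of 1,480 responses per item);
# real values come from the administered WSMRQ copies.
rng = np.random.default_rng(1)
items = {
    "Buildings not regularly maintained": rng.integers(1, 5, size=1480),
    "Buildings not used per prescriptions": rng.integers(1, 5, size=1480),
    "Woods not treated with chemicals": rng.integers(1, 5, size=1480),
}

for label, scores in items.items():
    mean, sd = scores.mean(), scores.std(ddof=1)
    verdict = "significant" if mean >= CRITERION_MEAN else "non-significant"
    print(f"{label:40s} M={mean:.3f} SD={sd:.3f} -> {verdict}")
```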
The result of the analysis is presented in Table 1 to show the various school materials that are wasted in secondary schools. The result presented in Table 1 indicated that the following teaching facilities such as textbooks, chalk charts, markers, chalkboard/whiteboards, workbooks, drawing books, computers, desks, handbooks, manuals, laboratories, libraries, playgrounds, and projectors are wasted in secondary schools. These materials are wasted at different rates ranging from 6.61% -6.73% with their mean values ranging from 2.957 -3.01. Based on the criterion mean value of 2.500, the mean wastage levels of the teaching facilities contained in Table 1 are considered as significantly high since all their mean values are greater than the criterion mean of 2.5. This result is further presented in Figure 3 below for proper visualisation. Figure-3. Teaching equipment and the rate of their wastage in secondary schools. Figure 3 shows that, out of all the teaching facilities that are wasted in secondary schools as indicated by the respondents of this study, manuals were the most wasted. This is followed closely by desks, workbooks, charts, projectors, playgrounds, drawing books, chalks, textbooks, laboratories, chalkboard/whiteboards, markers, handbooks, and computers, in that order. Research Question 4 What are the significant paths connecting the association between wastage of school farm resources, school buildings, and teaching equipment to school system effectiveness, with teachers and students' effectiveness as moderating variables? This research question was answered using a structural equation modelling approach based on the hypothesised causal model see Figure 2. The model was tested to determine the significant and nonsignificant causal paths. However, non-significant paths were removed in the measurement model as shown in The results in Figure 4 revealed that wastage of school farm resources have no significant direct relationship to teachers' effectiveness and students' effectiveness respectively. Wastage of school farm resources was discovered to have a negative effect (β= -.056, t = -2.072, p<.05) on school system effectiveness. Wastage of school buildings have a significant, direct, and negative effect on teachers' effectiveness (β= -.292, t = -10.926, p<.05), students' effectiveness (β= -.209, t = -7.788, p<.05), and school system effectiveness (β= -.056, t = -2.072, p<.05) respectively. Wastage of school teaching equipment has a direct, significant positive effect on teachers' effectiveness (β= .106, t = 4.087, p<.05) and students' effectiveness (β= .059, t = 2.364, p<.05) respectively; and a direct significant negative effect on school system effectiveness (β= -.094, t = -3.788, p<.05). Teachers and students' effectiveness moderated the effect of wastage of school buildings and teaching equipment on school system effectiveness. Teachers' effectiveness has a direct positive and significant effect (β= .270, t = 10.789, p<.05) and school system effectiveness It was also discovered through Figure 4 that wastage of school buildings and teaching equipment contributed on a joint basis, 9% to the total variance in teachers' effectiveness; with the remaining 91% accountable by other independent variables not included in the model. 
It was also shown that 16% of the total variance in students' effectiveness is explained by the joint effect of wastage of school buildings, wastage of teaching equipment and teachers' effectiveness; with the remaining 84% of the variance explained by other predictors not included in the model. Jointly, wastage school farm resources, school buildings, teaching equipment, students and teachers' effectiveness could be held accountable for 17% of the total variance in school system effectiveness; with the remaining 83% of the variance explained by other independent variables not included in the model. DISCUSSION OF FINDINGS This study revealed that school farm resources are wasted in numerous ranging from stealing by students, rural dwellers, and attack by pest and diseases, to the non-commercialisation of agricultural products for private consumption by teachers. This finding suggests that school farm resources are given little or no priority as a source of generating internal revenues for the school due to the poor maintenance of school farms, poor security of crops before and after they are harvested, as well as the gluttonous and selfish attitudes of principals and teachers. This finding tallies with the results of Ikeoji et al. (2007) which discovered earlier that many principals failed in the management of school farm resources due non-availability of seeds, fertilizers, feeds and other operating devices; inadequate training offered to teachers on the use of modern and sophisticated farming implements for practical and instructional purposes; the unserious attitude on the part of the administrators. It was also discovered in this study that school buildings are wasted generally, to a significant extent. These significant ways include poor maintenance, non-utilisation of school buildings according to prescriptions, poor repairs of damaged buildings, rare inspection and assessments of school buildings, poor re-painting practices, destruction of structures by students and rural dwellers, the poor security and over-usage of school buildings. However, the following are non-significant ways in which school buildings are wasted in secondary schools: many school buildings are not used for any purpose and woods attached to school buildings are not often treated with chemicals to prevent early damages. This finding corroborates the results of Munyi and Orodho (2015) who discovered through findings of a study that many schools lacked enough resources to maintain school buildings, and as a result, there is also increased dropout, repetition as well as low completion and transition rates. The finding of Munyi and Orodho (2015) has implications to the finding of the present study because, where there are inadequate resources to maintain school buildings, the buildings which will be left unattended to, may become dilapidated with serious consequences on the students' and school performance. It was discovered through the third finding of this study that numerous teaching equipment are wasted in secondary schools. The teaching equipment presented in descending order of wastage include manuals, desks, workbooks, charts, projectors, playgrounds, drawing books, chalks, textbooks, laboratories, chalkboard/whiteboards, markers, handbooks, and computers. This finding is not surprising since many schools either over-utilise or under-utilised these materials. 
Some of these materials such as workbooks, handbooks, manuals, and other library facilities are usually left to be damaged by insects and due to the poor condition of libraries in many schools. Computers and projectors are also wasted due to ineffective and inconsistent power supply coupled with the unemployment of professional computer science educators. Similar to the finding of this study, the study of Ogbuanya et al. (2017) revealed that there was no significant difference in mean responses of the respondents on the planning, organizing, controlling and coordinating strategies for proper management of material resources. By implication, the finding of Ogbuanya et al. (2017) suggests that poor management of material resources is common among different principals. This study revealed also that wastage of school farm resources has no significant direct relationship to teachers' effectiveness and students' effectiveness respectively, but was discovered to have a negative effect on school system effectiveness. This finding suggests that there is no business between the wastage of school farm resources and teachers or students' effectiveness. In other words, whether school farm resources are wasted or not, it does not affect the way teachers and students go about their duties but affect the way the school operates. This finding may be because most farm resources are easily converted into monetary values for school improvement. These monies especially when they are not used to motivate teachers or students for their hard work, will not cause any increase in their effectiveness levels. Even when teachers or students are motivated through the proceeds from school farms, the thinking of many of them will be that they are only receiving the rewards of their labour in the school farms. Thus, they are likely not to see such efforts as anything special to warrant a change in their behaviour or attitude to work. The school system will witness a decline because where school farm products are wasted, limited capital will be raised from school farms into the internally generated revenue income stream. This finding supports Chukwudum and Ogbuehi (2013) who discovered that that agricultural orientation is given to students through the school farm, especially those with poor or no agricultural background. Wastage of school buildings was discovered to have a significant, direct, and negative effect on teachers' effectiveness, students' effectiveness, and school system effectiveness, respectively. By implication, a decrease in the wastage of school buildings will cause teachers, students, and school system effectiveness to increase, other things being equal. This finding is not a surprise because when school buildings are not wasted, a conducive environment is bound to be available for students and teachers' utilisation. This provides sufficient space for academic and office activities to thrive, thus, increasing the effectiveness of the school directly, or through the moderating effect of the teachers and students' effectiveness. Wastage of school teaching equipment has a direct, significant positive effect on teachers' effectiveness and students' effectiveness respectively; and a direct significant negative effect on school system effectiveness. This finding is quite surprising that teachers' or students' effectiveness will increase as the wastage of teaching facilities increases. 
The result may have appeared this way because of the high level of effectiveness of many students towards studies and teachers towards service delivery. Thus, wastage school of farm resources do not seem to affect the quality of their effectiveness. The finding may have appeared also like this, based on how these materials are wasted. If the materials are squandered or over-utilised then it may improve teachers and students' effectiveness, and decrease school system effectiveness (Since it is the school that provides some of the teaching equipment and teachers or students are the ones utilising them) in the short-run. For instance, imagine a teacher that uses a complete packet of chalk to teach a 40 minutes lesson and throwing the remaining chalks in the packet away. The teacher may go on to teach the lesson effectively, and students may also benefit (since the teacher had taught well), but throwing the remainder of the chalk away will be a cost to the school's management. This finding agrees with the results of Morgan (2000) that students' overall academic performance is affected by the relative and composite effects of three critical factors -the condition, effective management and adequacy of educational resources. It was discovered that teachers' effectiveness has a direct positive and significant effect on school system effectiveness respectively which supports the model of Bassey et al. (2019). The study also discovered that students' effectiveness has a direct significant positive effect on school system effectiveness which is similar to the results of CONCLUSION Based on the findings of this study, it was concluded that school farms resources and school buildings are wasted in so many ways by principals (through poor management), students and rural dwellers (through theft and malicious damages), insects, pest, disease and other natural processes. Different school teaching facilities ranging from chalks to computers are wasted by teachers, students and other users in the school. It is also concluded that wastage of school material resources has a significant, direct and inverse effect on school system effectiveness generally. Increment in the wastage of school material resources will cause the effectiveness of schools to decline. The implication of the finding of this study is that many school material resources will continue to be wasted in the future unless something urgent is done to address this issue in a timely manner. Secondary school managers will also be able to raise the quantity of internally generated revenues if they are aware of the benefits that proper management of school material resources can provide. Principal awareness of the minimisation of school material resource wastage will also lead to cost efficiency for the attainment of desired educational outcomes in secondary schools in Cross River State, Nigeria. Principals will also be able to use the most efficient ways of achieving secondary school objectives, given a specific amount of school material resources (cost effectiveness) in secondary schools in Cross River State, Nigeria. LIMITATIONS OF THE STUDY This study made use of only a sample of 1,480 respondents which limits the generalisations made to the entire population, thus a broader study is required in order to validate the results of this study. The study' scope in terms of variables covered and area of study was delimited to one state in Nigeria and three variables of school material resources wastage. 
Meaning that future researches in related areas need to integrate more variables, and a largescale assessment is required for a better understanding. Lastly, the dearth of empirical literature in related areas narrowed the base and theoretical grounds of the study's findings. These limitations have opened up further gaps for prospective researches in the area of wastage of school material resources and school system effectiveness. RECOMMENDATIONS Based on the conclusion of this study, the following recommendations were made: i. Secondary school managers should be sensitised through workshops or conferences on the importance of managing school material resources to eliminate wastage in schools. ii. Proceeds from school farms should not purely be shared among staff for private consumption, but should also be marketed to increase internally generated funds in the school. iii. Students and rural dwellers should be prevented from gaining unauthorised access to school farm sites or storage location through proper fencing and security at all times. iv. Insecticides and pesticides should be applied to school material resources such as library facilities, crops, woods, and barns. This will help in curtailing the rate at which insects and pest will cause damage to school material resources. v. Teachers, as well as students, should be enlightened by school principals on the need to avoid negligent utilisation of school material resources which causes wastage that reduces the level of school effectiveness. vi. The government at all levels, should support secondary schools in the management of school material resources through an adequate supply of facilities and proper supervision or audit of their utilisation. Funding: This study received no specific financial support.
A study on harvesting of PKL electricity The efficiency of any electric cell or battery is very important. With this in mind, the columbic efficiency, voltaic efficiency and energy efficiency of a PKL (Pathor Kuchi Leaf) Quasi Voltaic Cell or Modified Voltaic Cell have been studied. The columbic efficiency data showed that this efficiency was lower compared with the other efficiencies, possibly because of the absence of a salt bridge or separator between the electrodes. The designed and fabricated PKL cell does not have any salt bridge, so its internal resistance is lower than that of a traditional voltaic cell and, as a result, more current was obtained. The voltage and current changes with time, as well as the I-V characteristics, of the PKL unit cell, module, panel and array have also been studied. The voltaic and energy efficiencies are also reported. The highest efficiency was obtained for 40% PKL sap with 5% secondary salt in 55% aqueous solution, which implies that the concentration of PKL juice can play an important role regarding efficiency. It was also found that the average energy efficiency was 97.43% and the average voltaic efficiency was 57.29%. Finally, a morphological study using FESEM (Field Emission Scanning Electron Microscopy) has also been performed. The results confirmed that Zn was deposited on the Cu surface during the electrodeposition process in the PKL solution. Using AAS, the concentration of [Cu2+] as a reactant ion and the concentration of [Zn2+] as a product ion have been measured, tabulated and graphically discussed. The variation of pH with time has also been studied, tabulated and graphically discussed. Introduction PKL power is a good and inexpensive option today in the small power range in remote and rural areas (Akter et al. 2017; Guha et al. 2018; Hamid 2013; Hamid et al. 2016). PKL electric systems are now available everywhere for easy installation with all necessary accessories in a competitive market. These are being used along with the conventional system in many developed and developing countries (Khan 2016, 2018a). In Bangladesh most of the electricity generators are run by indigenous gas and generation is also very low (Hasan and Khan 2018b; Hasan et al. 2016a). The actual demand for electricity is much higher than the electricity supplied at present. PV electricity is receiving wider acceptance every year in Bangladesh, especially in remote and rural areas, due to its various advantages (Hasan et al. 2016b, 2017a, b, 2018; Hassan et al. 2018). For utilizing PKL energy efficiently and cost-effectively, optimal design of the PKL systems with proper knowledge of the devices and system components is very important. Some examples of applications of the PKL system in rural areas are: education, charging cellular phones, lighting rice mills, saw mills, grocery shops, tailoring shops, clinics, restaurants and bazaars, water pumps, radio, TV and computer training. With this in mind, we have studied the output behavior of the PKL bio-electrochemical cells. By analyzing the experimental data obtained from AAS, UV-Vis and pH-metric analysis and visual inspection of the PKL cell, we can summarize the findings as follows: from AAS, UV-Vis and pH-metric analysis it is found that both Cu2+ and H+ ions are simultaneously reduced as the electrochemical reaction progresses, whereas the concentration of Zn2+ increases rapidly.
Thus we can infer that H+ and Cu2+ ions behave as reactant species, i.e., act as oxidants, while Zn behaves as the reductant species. However, visual inspection and the reduction in weight of the Zn plates also strongly support that the Zn electrode is the main source of electrons. On the other hand, from the collected data we can conclude that the potential and current flow decrease as the concentration of H+ and Cu2+ ions in solution decreases. The researcher has also studied the characterization of the PKL electrochemical cells. The morphology of zinc (Zn) deposits was investigated as the anode for aqueous PKL batteries. The Zn was deposited onto a copper surface from the PKL extract under direct current conditions at different current densities. The surface morphology characterization of the Zn deposits was performed via field emission scanning electron microscopy. Reactions at anode and cathode When the electrolysis process occurs via direct current, more electrons move toward the negative electrode. Within the PKL electrolytes, the negative electrodes are surrounded with Zn2+ and H+ ions (Khan et al. 2018b, c, d, e, f). These ions are adsorbed onto the substrate surface via a weak van der Waals bond, which allows surface diffusion. This diffusion results in the reduction of ions at more favorable sites. The reduction of Zn2+ involves gaining two electrons to form zero-valent Zn metal deposits on the Cu plate (Khan et al. 2018g, h, i, j). A simplified cell reaction can be illustrated as follows. Reduction process: Cu2+ + 2e- = Cu. This redox reaction occurs concurrently without changing the original composition of the PKL electrolyte and keeps the solution more or less uniform. In fact, the reduction of H+ to form hydrogen (H2) gas also competes with the reduction of Zn in an acidic PKL solution, which follows Eq. (3): 2H+ + 2e- = H2. The Zn metal loses electrons and the Cu metal gains electrons (Khan 2008a, b). The number of electrons transferred from the Zn metal to the Cu metal is 2. These electrons react with the H+ and Cu2+ ions and convert them into H and Cu atoms. The H atoms then combine into H2 and are released from the cell. The Cu atoms deposit onto the Cu plate, which gains more weight than in its initial state. 2.1.3 Discharging process (Khan 2008b, 2009, 2018; Khan and Alam 2010; Khan and Arafat 2010; Khan and Bosu 2010; Khan and Hossain 2010; Khan and Paul 2013) When Zn and Cu plates are dipped into the PKL extract, the discharging (charge with load) reactions in the anode and cathode compartments are given by the following. Cathode compartment: B(n+x)+ + xe- → Bn+, with the corresponding oxidation half-reaction taking place in the anode compartment. The columbic efficiency (η_C) is the ratio between the output charge and the input charge, defined as η_C = C_discharge/C_charge, where η_C is the columbic efficiency, C_discharge is the charge output, and C_charge is the charge input in coulombs (C) of the PKL cell. The voltaic efficiency It is defined as the ratio between the average discharge and charge voltages and is given by η_V = V_discharge/V_charge (Khan and Rasel 2018a, b; Khan and Yesmin 2019a), where η_V is the voltaic efficiency, V_discharge is the average discharging voltage, and V_charge is the average charging voltage (V) of the PKL cell. The average charging and discharging voltages are defined as the time-integral of the voltage, where I is the current in amperes as a function of time during charging and discharging (Khan et al. 2013a, b, 2014). Normally, the rate of charging or discharging is kept constant during testing of the electrochemical PKL cell.
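As a minimal numerical sketch of the columbic and voltaic efficiencies defined above (and of their product, the energy efficiency introduced next), the Python fragment below approximates the charge and average-voltage integrals from sampled charge/discharge records. The currents, voltages, and durations used here are illustrative assumptions, not measured PKL data.

```python
import numpy as np

def charge_and_mean_voltage(t, i, v):
    """Charge (coulombs) and time-averaged voltage over a charge/discharge record."""
    q = np.trapz(i, t)                        # Q = integral of I dt
    v_avg = np.trapz(v, t) / (t[-1] - t[0])   # time-averaged voltage
    return q, v_avg

# Illustrative records only (time in s, current in A, voltage in V),
# assuming a constant charge/discharge rate as described in the text.
t = np.linspace(0, 3600, 200)
i_chg, v_chg = np.full_like(t, 0.050), np.full_like(t, 1.25)   # charging record
i_dis, v_dis = np.full_like(t, 0.046), np.full_like(t, 1.05)   # discharging record

q_chg, v_chg_avg = charge_and_mean_voltage(t, i_chg, v_chg)
q_dis, v_dis_avg = charge_and_mean_voltage(t, i_dis, v_dis)

eta_c = q_dis / q_chg          # columbic efficiency
eta_v = v_dis_avg / v_chg_avg  # voltaic efficiency
print(f"eta_C = {eta_c:.3f}, eta_V = {eta_v:.3f}, eta_E = {eta_c * eta_v:.3f}")
```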
The energy efficiency
It can be defined as the ratio between the output energy and the input energy, obtained by combining the coulombic and voltaic efficiencies (Khan et al. 2017): η_E = E_discharge / E_charge = η_C × η_V, where η_E is the energy efficiency, E_discharge is the energy output, and E_charge is the energy input of the PKL cell in watt-seconds (W s).

Energy density
It is defined as the theoretical energy stored per unit volume of electrolyte (Khan et al. 2018p, q; Paul et al. 2012; Ruhane et al. 2017). This is highly dependent on the maximum solubility of the active species in the solvent being used. The energy density can be written as u = OCV × Δc_i × Δz_i × F, where OCV is the open-circuit potential (V) of a cell, Δc_i is the change in concentration of the active species of one half-cell (mol/L), Δz_i is the change in valence of that active species, and F is Faraday's constant (A h/mol). This gives the theoretical energy density, u, in watt-hours per litre of electrolyte (W h/L).

The voltage of the PKL cell based on the Nernst equation (Khan et al. 2018k, l, m, n, o)
E_cell = E°_cell − (RT/nF) ln Q, where E°_cell is the cell voltage at standard state conditions, R is the universal gas constant (8.314 J mol−1 K−1), T is the extract temperature, n is the number of transferred electrons, F is the Faraday constant (96,500 C), and Q is the reaction quotient.

PKL unit cell
As shown in Fig. 1, a PKL unit cell consists of a voltammeter, two electrodes and connecting wires. A voltammeter, ammeter, voltmeter and resistance box are used for making a unit cell and then a module. These unit cells are the building blocks of the PKL system; the PKL cell is the structural unit, and it is made with PKL extract/malt/juice. The voltage of the fabricated PKL unit cell is around 1.10 V. The PKL electricity depends on various parameters: the concentration of the malt, the area of the electrodes, the distance between the two electrodes, the constituent elements of the electrodes, the volume of the PKL extract/malt/juice, the temperature of the PKL malt, the age of the PKL and the pH of the PKL juice.

PKL electric module
It is made of more than one unit cell. The PKL unit cells are connected by wires, and the voltage of each unit cell is about 1.1 V or more. A unit PKL cell is made using a voltammeter and two electrodes. Fig. 2 shows the PKL module as a finished product. These modules have been used practically for electricity generation. The voltage and current of the PKL module depend on the number of unit cells. Within a panel, PKL modules can be connected in series or parallel combinations.

PKL electric panel
It is made of one or more PKL electric modules, physically and electrically connected. The voltage of the PKL electric panel is higher than that of the PKL electric modules. The voltage and current of the PKL panel depend on the number of modules (Fig. 3).

PKL electric array
It is made of one or more panels. In a similar way, the voltage of the PKL electric array is higher than that of the PKL electric panel. The current, voltage and power of the array depend on the number of panels and their arrangement (Fig. 4).

Early stage PKL electric panel
We consider this stage for PKL grown up to 15 days old. The conversion efficiency is low at this stage.

Middle stage PKL electric panel
We consider this stage for PKL grown up to 30 days old. The conversion efficiency is higher than at the early stage.

Pre-matured stage PKL electric panel
We consider this stage for PKL grown up to 45 days old. The conversion efficiency is higher than at the middle stage.
Matured stage PKL electric panel
We consider this stage for PKL grown up to 60 days old. The conversion efficiency is higher than at the pre-matured stage.

The maximum output of the PKL electric panel depends on several parameters, such as: the age of the PKL, the concentration of the PKL extract/malt/juice, the area of the electrodes, the distance between the two electrodes, the temperature of the extract/malt/juice, the ambient temperature of the laboratory, the influence of light, and the pH of the PKL extract/malt/juice.
(i) Age of the PKL. It has been shown that the efficiency of electricity generation from the PKL varies with the age of the PKL (Sultana et al. 2011).
(ii) Concentration of the PKL malt/juice. The voltage generated from the PKL varies with the concentration of the PKL malt/juice; that is, V ∝ q, where q is the concentration of the juice (Khan and Yesmin 2019b).
(iv) Distance between the two electrodes. The voltage generation varies with the distance between the two electrodes: the voltage decreases as the distance between the two electrodes increases (Khan and Rasel 2019c), i.e., the voltage is inversely related to the plate separation.
(v) Temperature of the extract. The voltage variation can be expressed by the relation (Khan and Rasel 2019a) ΔV = K × ΔT × N_cs, where ΔV is the change of voltage, K is a coefficient factor, ΔT is the change in temperature, and N_cs is the number of PKL unit cells connected in series.
(vi) Ambient temperature of the laboratory. The efficiency is not influenced by it at all.
(vii) Influence of light. The constituent compounds of the PKL are citric acid, isocitric acid and malic acid. The performance in the presence of sunlight is no greater than in its absence, so the PKL cell works equally well in the day and at night. The solar cell, by contrast, does not work properly in the rainy season and does not work at all at night.

Results and discussion with graphical analysis
The aim of this project is the production of electricity from Pathor Kuchi Leaf (PKL). The PKL cell was run both day and night after the chemical reaction in the PKL cell had started, but data were collected during the day. Fig. 5 shows that the current decreases as the voltage increases over the voltage range 0.05–0.08 V, is almost constant for the voltage range 0.08–0.27 V, and, finally, decreases again as the voltage increases above 0.27 V. Fig. 6 shows that the open-circuit voltage of the PKL module was 12 V, and when an LED lamp was connected as a load the voltage of the PKL module suddenly decreased to 7.24 V; the voltage measured after a 2-month interval remained almost the same over a long period. Figure 7 shows the variation of the consumed voltage (volts) with the consuming time (seconds). The consumed voltage was recorded at 10-day intervals; it decreases directly with increasing consuming time and, after 20 days, increases with increasing consuming time. Figure 8 shows the charging behaviour of the lead-acid battery by the PKL electric cells. The voltage difference between the PKL electric cells and the lead-acid battery remains almost constant, and the charging characteristics of the PKL electricity and the lead-acid battery both increase exponentially with time.
Figure 9 shows the minimum voltage variation across different dates of the month for different loads. Interestingly, after the 2nd day the voltage increases almost linearly with the local time. Figure 10 shows the variation of the PKL module voltage across different dates with an LED lamp as the load: on the 1st day the voltage of the PKL module decreases, and from the 2nd day onwards it increases almost linearly with the local time. Figure 11 shows the variation of voltage with local time without a load. The no-load voltage decreases exponentially with local time for a few minutes, after which it varies almost linearly with increasing local time. Figure 12 shows the variation of the voltage consumed by the LED lamp with the local time of a day; the consumed voltage is almost constant throughout the day, which means the PKL electric module supplies a constant voltage to the load. Figure 13 likewise shows the variation of the voltage consumed by the LED lamp with the local time of a day; the consumed voltage was almost constant with time. Table 1 shows the method of determination of the voltaic efficiency of a PKL electrochemical cell. The data were collected with a calibrated multimeter and tabulated carefully. Fig. 14 shows the variation of the voltaic efficiency with time: initially it changes exponentially, after which the change is almost constant with time. Table 2 shows the energy efficiency of a PKL electrochemical cell for an internal resistance R = 0.6 Ω. The data were collected, and the energy efficiency was calculated and tabulated.

Morphological characteristics of the PKL electrochemical cell
A two half-cell system (one Cu and one Zn electrode) was considered. The areas of the anode (Zn) and the cathode (Cu) were kept as nearly equal as possible (4.5 cm2 each). The weights of the Cu and Zn plates were measured with a weighing balance before and after immersion in the extract; this showed that a morphological change of the plates had occurred. According to Faraday's laws of electrolysis [115], we have m = QM/(nF), where m is the mass of the deposits, F is the Faraday constant (96,500 C mol−1), Q is the electric charge passed, M is the molar mass of the species, and n is the electrical charge involved in the reaction (Fig. 15). Fig. 16a, b shows the surface morphological change of the Cu plate caused by its use in the PKL extract; the surface morphology of the Cu plates before and after use was therefore studied using FESEM (Field Emission Scanning Electron Microscopy). No electron-resistance layer grew on the Cu plate before or after its use as the cathode (Fig. 16a, b), because no substantial H2 gas layer formed on the Cu plate over short durations. For long durations, however, a thin H2 gas layer formed on the Cu plate and, as a result, some electron resistance did develop there. It is therefore concluded that the electron resistance depends on the duration of the chemical reaction between the Cu electrode and the PKL extract, growing only after long reaction times. Moreover, the weight of the Cu plate after use in the PKL extract became slightly greater than its weight before use.
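As an illustration of the Faraday's-law estimate quoted above, the sketch below computes the theoretical mass deposited (or dissolved) for a given charge. The current, time and species values are hypothetical placeholders, not quantities recorded for the PKL cell.

F = 96500.0   # Faraday constant, C/mol

def deposited_mass_g(current_a, time_s, molar_mass_g_mol, n_electrons):
    """Faraday's law: m = Q*M/(n*F), with Q = I*t."""
    q = current_a * time_s
    return q * molar_mass_g_mol / (n_electrons * F)

# Hypothetical example: 0.05 A flowing for 24 hours, Zn (M = 65.38 g/mol, n = 2).
m_zn = deposited_mass_g(0.05, 24 * 3600, 65.38, 2)
print(f"theoretical Zn mass transferred: {m_zn:.3f} g")   # ~1.46 g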
That is, the gain of the Cu plate follows the theoretical value (Eq. 1). Fig. 17a, b shows the surface morphological change of the Zn plate caused by its use in the PKL extract; the surface morphology of the Zn plates before and after use was therefore studied using FESEM (Field Emission Scanning Electron Microscopy). No electron resistance had grown there before use as the anode (Fig. 17a), but some electron resistance had grown there after use as the anode (Fig. 17b). Moreover, the variation of the weight of the Zn plate between before and after use in the PKL extract was acceptable: the weight of the Zn plate after use in the PKL extract became slightly less than its weight before use. Finally, the theoretical weight loss (using Eq. 4) and the practical weight loss (measured with the weighing balance) coincide with each other.

Effect of the concentration of Cu2+ and Zn2+ ions during PKL electricity generation
Cu2+ ions, present in the PKL juice solution as a secondary salt, act as reactant ions. The presence of Cu2+ ions thus increases both the potential and the current flow; with time, Cu2+ is reduced to Cu, and so the concentration of Cu2+ ions decreases (Khan et al. 2018j, k, l). Reactions: the anode, in turn, undergoes corrosion to give the product ion Zn2+ via the reaction Zn → Zn2+ + 2e−. The variation of the concentration of Zn2+ ions is therefore informative for this study, but Zn2+ cannot be determined by UV-Vis spectrophotometry (Khan et al. 2018l); for this reason, AAS has been used to determine it. The variation of the concentrations of Cu2+ and Zn2+ ions with time during electricity generation is shown in Table 3 and Fig. 18. The variation of pH with time duration is also shown (Table 3 and Fig. 19).

Results
Fig. 18 corresponds to a particular specification: the PKL extract was 60% and the water was 40%. It shows that [Zn2+], the product ion, increased with time during electricity generation, while [Cu2+], the reactant ion, stayed nearly constant: [Zn2+] increases almost exponentially with time, whereas [Cu2+] decreases only slightly, remaining almost constant over the duration of electricity generation. The variation of pH with time duration is also shown (for the same specification, 60% PKL extract and 40% water): the pH increases first linearly and then almost exponentially.

Conclusion
The energy efficiency, voltaic efficiency and coulombic efficiency of the PKL electrochemical cell have been determined. The pH variation of the PKL extract during the electricity-generation period has also been found. The morphological change of the electrodes has been studied by SEM analysis. The concentrations of the product and reactant ions have been found by the AAS technique, together with their variations with time during electricity generation. Furthermore, some electrical parameters have been studied in this work. Electricity from Pathor Kuchi Leaf (PKL) is a new innovation, developed in Bangladesh, and from the Bangladesh perspective it has a great impact on our society. Nowadays, electricity is becoming an essential part of life. We cannot keep even a mobile telephone running without electricity, although it needs a very small amount of electricity to charge it.
In our country only some people get electricity; a large number of people in large parts of the country, such as coastal areas, small islands and remote areas, do not get electricity yet. The production of electricity from PKL is easy, so it can be produced by anyone, even a handicapped or illiterate person. It is a simple technology that is affordable for all users in society and needs no advanced knowledge of electricity production. People can use it instead of a kerosene lantern, especially in off-grid areas across the world.

Fig. 19 Variation of pH with time duration; specification: the PKL extract was 60% and the water was 40%
The Discriminative Efficacy of Retinal Characteristics on Two Traditional Chinese Syndromes in Association with Ischemic Stroke

We aimed to investigate the efficacy of an objective method using AI-based retinal characteristic analysis to automatically differentiate between two traditional Chinese syndromes that are associated with ischemic stroke. Inpatient clinical and retinal data were retrospectively retrieved from the archive of our hospital. Patients diagnosed with cerebral infarction in the department of acupuncture and moxibustion between 2014 and 2018 were examined. Of these, the patients with Qi deficiency blood stasis syndrome (QDBS) and phlegm stasis in channels (PSIC) syndrome were selected. Those without retinal photos were excluded. To measure and analyze the patients' retinal vessel characteristics, we applied a patented AI-assisted automated retinal image analysis system developed by the Chinese University of Hong Kong. The demographic, clinical, and retinal information was compared between the QDBS and PSIC patients. The t-test and chi-squared test were used to analyze continuous data and categorical data, respectively. All the selected clinical information and retinal vessel measures were used to develop different discriminative models for QDBS and PSIC using logistic regression. Discriminative efficacy and model performance were evaluated by plotting a receiver operating characteristic curve. As compared to QDBS, the PSIC patients had a lower incidence of insomnia problems (46% versus 29%, respectively, p=0.023) and a higher tortuosity index (0.45 ± 0.07 versus 0.47 ± 0.07, p=0.027). Moreover, the area under the curve of the logistic model showed that its discriminative efficacy based on both retinal and clinical characteristics was 86.7%, which was better than that of the models that employed retinal or clinical characteristics individually. Thus, the discriminative model using AI-assisted retinal characteristic analysis showed statistically significantly better performance in QDBS and PSIC syndrome differentiation among stroke patients. Therefore, we concluded that retinal characteristics added value to the clinical differentiation between QDBS and PSIC.

Introduction
Since 2012, stroke has been a leading cause of death and disability in China, and its incidence has been increasing at a rate of 8.7% per year [1]. The most common subtype of stroke in China is ischemic stroke, accounting for 43-79% of all stroke patients [2]. Traditional Chinese medicine (TCM) has been used in stroke treatment and recovery for thousands of years, and it is still commonly used in the clinical management of this condition [3]. One of the basic features of TCM for stroke is a treatment plan based on syndrome differentiation [4]; that is, every TCM physician makes a diagnosis of stroke and individually prescribes medication based on each patient's syndrome differentiation. Therefore, the accuracy of the differentiation is the key to efficiently treating this disease. According to the "Guidelines for the Diagnosis and Treatment of Common Diseases in the Traditional Chinese Medicine" [5], the syndromes ("ZHENG" in Chinese) of stroke are described in six aspects: wind pattern (Feng Zheng), heat pattern (Huo Re Zheng), phlegm pattern (Tan Zheng), blood stasis pattern (Xue Yu Zheng), qi deficiency pattern (Qi Xu Zheng), and yin deficiency pattern (Yin Xu Zheng). These basic patterns are arranged and combined into seven syndrome types [6].
Of these, the Qi deficiency and blood stasis syndrome (QDBS) and the phlegm stasis in channels syndrome (PSIC) are the most common syndrome types in ischemic stroke patients, occurring in about 53.6% of cases [7]. TCM treatment selection for stroke is based on the TCM syndrome types. Patients with QDBS are characterized by hemiplegia, weakness of the limbs, numbness of the body, deviation of the tongue, swelling of the hands and feet, pale complexion, shortness of breath, lack of strength, palpitation and spontaneous sweating, a deviated and pale tongue with a thin white coating, and a fine gentle pulse; therefore, improving Qi and promoting blood circulation is the first principle of treatment [7]. On the other hand, patients with PSIC are characterized by hemiplegia, deviation of the eye and mouth, stiff tongue, numbness of the limbs, rapid arrest of the hands and feet, dizziness, a yellow sticky or greasy tongue coating, and a thready and slippery pulse; therefore, the treatment plan is focused on eliminating phlegm and freeing the channels [7]. An essential component of TCM diagnosis is an overall observation of human symptoms, which is defined as the TCM syndrome. However, due to the lack of standard and objective evaluation criteria, such diagnoses can be influenced by the doctors' personal experience, which puts the repeatability and reliability of the diagnosis in question [8]. In view of this disadvantage, modern artificial intelligence and image analysis technology could build a possible link between biological measurements and clinical outcomes. AI technology may offer a new objective method for TCM syndrome diagnosis, which may help to improve the diagnostic accuracy of even a veteran TCM practitioner. One of the critical pathological changes of stroke is cerebral vascular change [9]. According to the current knowledge of embryogenesis and histology, the retinal vessel circulation system and the cerebral vascular system share the same origin [10], and it has been demonstrated that the retinal vascular system is similar in function and morphology to the cerebral vascular system [10]. One hypothesis is that alterations of the retinal image can reflect cerebral vascular changes and can therefore be used as a risk predictor for ischemic stroke [11]. Previous studies have shown that a number of retinal characteristics were significantly associated with stroke [12-16]. Furthermore, the retina is the only organ in the body whose vascular system can be observed noninvasively; therefore, the characteristics of the retinal vasculature are considered potential tools for stroke risk assessment. Unfortunately, to date, there has been no systematic investigation of the differences in retinal vessels between the various TCM syndrome types of stroke patients. In this study, we have explored the diagnosis of stroke syndromes in the context of TCM based on retinal images.

Ethical Statement. This study was approved by the Ethics Committee of the Shenzhen Traditional Chinese Medicine Hospital (Approval Number: 2018-75) and was performed in accordance with the guidelines of the Declaration of Helsinki (1964). All patients provided written informed consent for their participation in the study.

Study Design. In this case-control study, a total of 328 ischemic stroke patients from the Shenzhen Traditional Chinese Medicine Hospital were included. Patients were divided into 2 groups according to their TCM syndrome types: the QDBS group and the PSIC group.
The patients' demographic and clinical data, including age, sex, medical history, physical examination, laboratory tests, and electrocardiography results, were collected by trained doctors.

Patient Selection. The inclusion criteria were as follows: the subject was in the recovery stage of ischemic stroke, aged between 30 and 80, and adequately able to maintain his/her posture while sitting for the duration of the retinal photography procedure. Subjects were excluded from the study if they were clinically unstable and requiring close monitoring, moribund, had an eye disease that severely affected retinal vessel structures, or were physically or subjectively unable to comply with magnetic resonance (MR) examination. In addition, patients suspected to have cerebral diseases or conditions that may potentially alter retinal vessel morphology were excluded as well. Eventually, 196 of the 328 patients with ischemic stroke were included in our study (Figure 1).

Data Collection. Risk factors related to cerebral infarction, such as hypertension, diabetes, dyslipidemia, high homocysteine, coronary heart disease, atrial fibrillation, smoking history, drinking history, and sleep disorders, were collected in this study. Hypertension was defined as a systolic blood pressure greater than 140 mmHg and a diastolic blood pressure above 90 mmHg, or the use of antihypertensive medication for up to 2 weeks prior to the start of the study. According to diagnostic criteria from the National Diabetes Data Group, diabetes mellitus was defined as a fasting serum glucose level of more than 6.99 mmol/L, a nonfasting value of more than 11.1 mmol/L, or a history of treatment for diabetes [17]. Based on the National Cholesterol Education Program guidelines, dyslipidemia was classified as desirable (serum cholesterol level: <5.17 mmol/L), borderline-high (serum cholesterol level: 5.17-6.21 mmol/L), or a history of administration of lipid-lowering drugs [18,19]. Smoking and drinking status were evaluated by designating ex-smokers, current smokers, or nonsmokers and ex-drinkers, current drinkers, or nondrinkers, respectively. Physical activity and mental status were also investigated by assessing whether the patients exercised regularly, felt despair, had a poor appetite, and slept enough. Retinal images were taken on the 2nd day of hospital admission using the SmartScope Ey4 Camera (Optomed, Finland). To ensure the compatibility of the parameters, all the retinal images were scaled to 1365 × 1024 pixels and saved in jpg format. A patented, fully automatic retinal image analysis was applied to measure the retinal vessel characteristics, including vessel diameter, vessel branching angle and bifurcation measures, vessel tortuosity, and fractal dimensions [12,13,20-24].

Statistical Analysis. We reported data as mean and standard deviation (mean ± SD) for continuous variables and as proportions for categorical variables. To analyze the manually measured clinical and retinal characteristics, we used two-sample independent t-tests to compare the continuous data and chi-squared tests to compare the categorical data. A p value of <0.05 was considered statistically significant. A nonparametric test (Mann-Whitney test) was used if the normality test rejected the normality assumption. For categorical data, Fisher's exact test was used if the expected count was less than five. A logistic regression model was used to build the classification models.
The steps used to establish these models were drawn from the method proposed by Hosmer and Lemeshow [25] for selecting the independent variables that result in the best model. The classification accuracy and the area under the curve (AUC) of the receiver operating characteristic (ROC) were measured. All the data were analyzed using the Statistical Package for Social Science software (SPSS version 22.0, IBM Corp., Armonk, New York, USA).

Results
The parameters of sex, insomnia status, and tortuosity of retinal vessels were significantly different between the two TCM syndrome groups. Compared to the QDBS group, patients in the PSIC group had a significantly higher proportion of females (65% versus 81%, respectively, p = 0.019), fewer insomnia problems (46% versus 29%, respectively, p = 0.023), and a higher tortuosity index (0.45 ± 0.07 versus 0.47 ± 0.07, p = 0.039). The drinking status showed a borderline significant difference between the two syndromes (p = 0.051). On the other hand, no statistically significant difference was found between the two groups in terms of the other clinical risk factors, such as hypertension, diabetes, coronary heart disease (CHD), lipid level, or homocysteine level (Table 1). The stepwise logistic regression method was used to build classification models that included only clinical variables, only retinal variables, and a combination of both clinical and retinal variables. For the clinical model, insomnia status (p = 0.017) and drinking history (p = 0.039) were significant; the percentages of correct classification for PSIC and QDBS were 39.5% and 80.8%, respectively. The model that used retinal characteristic variables alone included AVR (p = 0.053), bifurcation coefficient of venule (p = 0.003), hemorrhage (p = 0.025), arterial occlusion (p = 0.033), and a composite score of retinal interactions; its percentages of correct classification were 70.4% and 80.8% for PSIC and QDBS, respectively. The final logistic regression model, combining both clinical and retinal variables, included insomnia status (p = 0.020), AVR (p = 0.099), BCV (p = 0.006), hemorrhage (p = 0.019), arterial occlusion (p = 0.030), and a composite score of interactions (p = 0.001); its percentages of correct classification were 76.5% and 84.8% for PSIC and QDBS, respectively. The odds ratios (OR) and the corresponding 95% confidence intervals (95% CI) of each of the included variables were also reported.

Discussion
In practice, the four clinical TCM diagnosing techniques, that is, observation, auscultation and olfaction, interrogation, and pulse feeling and palpation, are combined to identify a specific disease. However, obtaining a correct diagnosis using these techniques is highly dependent on the domain knowledge of the TCM physician. For instance, most of the diagnostic information is gathered by the naked eye and by the subjective impression of each physician during assessment. Therefore, our proposed method of using an objective AI-based retinal characteristic analysis is considered a highly valuable approach in the clinical practice of traditional Chinese medicine. At present, there is a vast gap between the procedures for standardizing TCM syndrome classification and the methods of statistical processing. The clinical methods suitable for standardizing TCM syndrome classification include cluster analysis, factor analysis, principal component analysis, artificial neural networks, regression analysis, and discriminant analysis [26-28].
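To make the kind of regression-based classification described above easier to follow, the sketch below fits a logistic model and reports accuracy and ROC AUC with scikit-learn. It is only an illustrative sketch: the column names echo the variables discussed above (insomnia, drinking, AVR, BCV, hemorrhage, arterial occlusion), but the data frame and all values are hypothetical, and the original analysis used SPSS with stepwise selection rather than this exact pipeline.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical data frame: one row per patient, label 1 = QDBS, 0 = PSIC.
# Column names follow the variables discussed in the text but are assumptions.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "insomnia":   rng.integers(0, 2, 196),
    "drinking":   rng.integers(0, 2, 196),
    "avr":        rng.normal(0.67, 0.05, 196),
    "bcv":        rng.normal(1.25, 0.10, 196),
    "hemorrhage": rng.integers(0, 2, 196),
    "occlusion":  rng.integers(0, 2, 196),
    "syndrome":   rng.integers(0, 2, 196),
})

X = df.drop(columns="syndrome")
y = df["syndrome"]

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]

print("accuracy:", accuracy_score(y, model.predict(X)))
print("AUC:", roc_auc_score(y, prob))
# Odds ratios per variable: exponentiate the fitted coefficients.
print(dict(zip(X.columns, np.exp(model.coef_[0]).round(2))))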
Collecting accurate, representative clinical characteristics and using a correct method of data analysis can ensure a very high reliability of TCM syndrome classification. Our research was based on clinical history data, so we could establish an optimal discriminant function and regression equation, in effect achieving a mathematical summary of the data. In recent years, there have been many clinical studies on the correlation between objective indicators and the TCM syndrome types of ischemic stroke, in which the indicators in question have been biochemical indicators, imaging indicators, and scale evaluations [29-31]. In 2013, a prospective cohort study of the Asian population in Malaysia proposed that the addition of retinopathy tests can improve the physician's ability to predict a stroke [32]. Therefore, we included retinal characteristics and clinical risk factors to classify the syndrome types of ischemic stroke in our study. Insomnia and alcohol consumption are related to the occurrence of ischemic stroke and are considered to be among its risk factors. Insomnia is a common symptom that is associated with an increased risk of mortality in first-time stroke patients [33]. A high frequency of drinking before the stroke is related to all-cause mortality in patients with ischemic stroke [34]. In our study, we focused on the clinical risks of ischemic stroke and demonstrated that a history of insomnia or drinking could indicate the presence of the two TCM syndromes, QDBS and PSIC. Shi [35] found that Qi deficiency was diagnosed in 87.43% of patients with insomnia and spiritlessness. This finding coincides with our result that ischemic stroke patients with QDBS syndrome were more likely to manifest insomnia. This can be explained by the TCM theory that fatigue due to lack of sleep is the key component of Qi deficiency and eventually results in blood stasis. According to TCM somatology, alcohol promotes sweet and bitter tastes, and excessive drinking can lead to a hot and damp manifestation in the body, eventually causing the production of phlegm. Vascular damage appears to become worse with the accumulation of phlegm, which results in ischemic stroke. Furthermore, in our study, patients with PSIC syndrome were found to have a higher percentage of drinking history, which is consistent with the study by Zhu et al. [36], which stated that the risk factors of a phlegm-wetness body type were caused by alcohol consumption. A cross-sectional study based on two community populations in the southeastern United States explored the positive relationship between fundus vascular anomalies and cerebral infarction and confirmed it via MRI [37]. This study found that there was a correlation between cerebral infarction and fundus vascular anomalies, evidenced by arteriovenous local stenosis, local vasoconstriction, punctate hemorrhage, soft exudates, and microaneurysms. The ARIC study [38] in 2010 found that a decrease of the central retinal arteriole equivalent (CRAE), an increase of the central retinal vein equivalent (CRVE), stenosis of small arteries, and arteriovenous crossing compression were associated with lacunar infarction. A meta-analysis showed that thinning of the retinal arterioles, arteriovenous crossing compression, hemorrhage, microaneurysms, and a reduction of fractal dimension were also associated with stroke [39].
In our study, in addition to the retinal characteristics, we found that vascular distortion and soft exudates were consistent with the risk of cerebral infarction. Our study also found that some retinal characteristics could be used to classify these 2 TCM syndromes. Characteristics of vascular morphology and integrity, such as AVR, BCV, hemorrhage, and arterial occlusion, were more significantly altered in the QDBS group. In this study, we differentiated between the two TCM syndromes of ischemic stroke using different retinal characteristics. The model that facilitated this differentiation displayed higher diagnostic efficacy when based on both retinal vessel characteristics and clinical variables, rather than being dependent only on clinical variables. In fact, the model based on retinal variables alone is almost as good as the combined model with both clinical and retinal variables. Therefore, the retinal vessel features that were obtained by us could be used for interpretation and guidance of stroke syndromes in TCM clinical practice. Several potential study limitations should be considered. First, all the participants included in this study were recruited at the same hospital; a multicenter clinical study should be designed to fully investigate our results. Second, in addition to QDBS and PSIC, the other five TCM syndrome types should be investigated in future studies. Third, due to the small sample size of our study, an independent test dataset was not used; future studies on this topic should include larger sample sizes. However, we have carried out a leave-one-out cross-validation for the final model. The percentages of correct classification for PSIC and QDBS were 77.8% and 74.7%, respectively.

Conclusions
In this study, we developed a logistic regression diagnosis model by combining clinical variables and retinal characteristics. We collected information on clinical variables and used the features extracted automatically from retinal images to create an objective method of diagnosis of ischemic stroke. This model is effective for distinguishing between the two TCM syndromes of ischemic stroke. Therefore, we concluded that retinal characteristics are useful for clinically differentiating between QDBS and PSIC.

Data Availability. The Excel data used to support the findings of this study are available from the corresponding author upon request.

Ethical Approval. This study was approved by the Ethics Committee of the Shenzhen Traditional Chinese Medicine Hospital and carried out in accordance with the guidelines of the Declaration of Helsinki (Approval number: 2018-75).

Consent. All patients provided written informed consent prior to participation in any study-related procedures.

Conflicts of Interest. The authors declare that there are no conflicts of interest regarding the publication of this paper.
Spontaneous rupture of splenic artery aneurysm in pregnancy: an autopsy-based case report

Splenic artery aneurysm (SAA) is an infrequent form of vascular disease and has a significant potential for rupture and life-threatening hemorrhage. Complications may occur in 10% of all cases, with a mortality rate of 10–25% in nonpregnant patients and up to 70% during pregnancy. It presents as sudden, unexpected death and is mostly diagnosed at the time of autopsy. An aneurysm is a thinning and dilatation of the vascular wall. It can be a true or a pseudoaneurysm. In a true aneurysm all layers of the vessel wall are intact, while in a pseudoaneurysm there is a tear in the vascular wall leading to a perivascular hematoma with no associated thinning or dilatation. The causes of true SAA are not well known; however, atherosclerosis and congenital defects of the arterial wall have been described as the major causes. Acute and chronic pancreatitis are thought to be the major causes of pseudoaneurysm.

INTRODUCTION
Splenic artery aneurysm (SAA) is an infrequent form of vascular disease and has a significant potential for rupture and life-threatening hemorrhage. Complications may occur in 10% of all cases, with a mortality rate of 10-25% in nonpregnant patients and up to 70% during pregnancy. It presents as sudden, unexpected death and is mostly diagnosed at the time of autopsy.1,2 An aneurysm is a thinning and dilatation of the vascular wall. It can be a true or a pseudoaneurysm. In a true aneurysm all layers of the vessel wall are intact, while in a pseudoaneurysm there is a tear in the vascular wall leading to a perivascular hematoma with no associated thinning or dilatation.1,3 The causes of true SAA are not well known; however, atherosclerosis and congenital defects of the arterial wall have been described as the major causes. Acute and chronic pancreatitis are thought to be the major causes of pseudoaneurysm.3 Splenic artery aneurysm has a high risk of rupture, mostly during the third trimester, due to the metabolic and hormonal changes occurring in pregnancy.4-6 We are reporting one such case of splenic artery aneurysm rupture in a pregnant female leading to sudden unexpected death.

CASE REPORT
A 36-year-old, primigravida, full-term female weighing 80 kg was brought to the emergency department of a tertiary care hospital with a history of two episodes of vomiting while relaxing in a chair and watching television. She was declared brought dead and the body was sent to the mortuary for medicolegal autopsy. She had a past history of hypertension and diabetes. On external examination no ante-mortem external injury was present. The conjunctiva of both eyes was pale. On internal examination, the abdominal cavity contained about 2.0 kg of clotted blood and about one litre of liquid blood, as shown in Figure 1. On further dissection, retracting the stomach and pancreas, blood clots were seen in the area of the splenic hilum, as shown in Figures 2 and 3. On removal of the blood clots, a ruptured splenic artery aneurysm, about 2.0 cm in diameter, was seen near the hilum. The spleen weighed 335 g. All other organs were pale. The cause of death was opined as hemorrhagic shock consequent to spontaneous rupture of a splenic artery aneurysm.

DISCUSSION
SAA has an estimated prevalence of between 0.01% and 0.98%.7 After aortic and iliac artery aneurysms, SAA is the third most common intra-abdominal arterial aneurysm. It accounts for about 60% of all splanchnic arterial aneurysms.8,9
Once ruptured, it has a mortality rate of 10-25% in non-pregnant individuals, which increases to 70% during pregnancy and is associated with a fetal mortality rate of 95%.1,2,9 About 65% of all SAA are present in pregnant women, of which 20-50% may rupture. Most commonly this occurs during the third trimester (69%), at childbirth (13%), during the first two trimesters (12%), and at puerperium (6%).9-12 Although the pathogenesis of the condition is obscure, during pregnancy there are many hormonal changes, such as increases in circulating estrogen, progesterone and relaxin, as well as metabolic changes leading to increases in blood volume and heart rate, which ultimately raise blood pressure. All these changes, along with an enlarged uterus compressing the iliac artery and aorta, increase blood flow within the splenic artery, leading to SAA. Many risk factors have been associated with SAA rupture, such as systemic hypertension, portal congestion, liver diseases, multiple pregnancies and the size of the aneurysm; an SAA of more than 2 cm in diameter is more prone to rupture.9,12,13 In our case, though the female was primigravida, the other risk factors, namely hypertension, obesity and an aneurysm size of 2 cm, may have contributed to the rupture of the aneurysm. SAA are generally asymptomatic, and the diagnosis of an unruptured aneurysm during pregnancy is very difficult. As a result, the initial recognition and diagnosis of SAA take place only after rupture, or at autopsy. A phenomenon known as "double rupture" is reported in 25% of cases, where the bleeding initially remains confined to the lesser sac for around 6-96 hours, which may cause pain and transient hypotension, followed by free intraperitoneal hemorrhage and collapse of the patient. This time period between the initial hemorrhage into the lesser sac and the free intraperitoneal hemorrhage provides an opportunity to diagnose the aneurysm and intervene.9,12 As in the present case, patients with ruptured SAA present with symptoms of nausea, vomiting and hypotension, which are similar to other, more common obstetrical emergencies such as uterine rupture, leading to misdiagnosis of the condition in about 70% of cases. Hence, pregnant women who are experiencing pain in the left upper part of the abdomen, or who are in hypovolemic shock with complaints of nausea and vomiting and no other findings at obstetric examination, should be screened immediately for possible SAA.12-14

CONCLUSION
Even though SAA is a rare condition in pregnancy, its rupture poses a very high risk of maternal and fetal mortality. It is important to be aware of SAA and its complications, for early diagnosis and timely management, to increase the chances of maternal and fetal survival. The case highlights that routine screening for SAA should be done in all pregnant women, including primigravidae, and that special attention should be given to females with risk factors.

ACKNOWLEDGMENTS
Authors would like to thank Dr. Nagendra Singh Sonwani, Department of Forensic Medicine, University college of Medical Sciences and Guru Teg Bahadur Hospital, New
Exploring expectation effects in EMDR: does prior treatment knowledge affect the degrading effects of eye movements on memories? ABSTRACT Background: Eye movement desensitization and reprocessing (EMDR) is an effective psychological treatment for posttraumatic stress disorder. Recalling a memory while simultaneously making eye movements (EM) decreases a memory’s vividness and/or emotionality. It has been argued that non-specific factors, such as treatment expectancy and experimental demand, may contribute to the EMDR’s effectiveness. Objective: The present study was designed to test whether expectations about the working mechanism of EMDR would alter the memory attenuating effects of EM. Two experiments were conducted. In Experiment 1, we examined the effects of pre-existing (non-manipulated) knowledge of EMDR in participants with and without prior knowledge. In Experiment 2, we experimentally manipulated prior knowledge by providing participants without prior knowledge with correct or incorrect information about EMDR’s working mechanism. Method: Participants in both experiments recalled two aversive, autobiographical memories during brief sets of EM (Recall+EM) or keeping eyes stationary (Recall Only). Before and after the intervention, participants scored their memories on vividness and emotionality. A Bayesian approach was used to compare two competing hypotheses on the effects of (existing/given) prior knowledge: (1) Prior (correct) knowledge increases the effects of Recall+EM vs. Recall Only, vs. (2) prior knowledge does not affect the effects of Recall+EM. Results: Recall+EM caused greater reductions in memory vividness and emotionality than Recall Only in all groups, including the incorrect information group. In Experiment 1, both hypotheses were supported by the data: prior knowledge boosted the effects of EM, but only modestly. In Experiment 2, the second hypothesis was clearly supported over the first: providing knowledge of the underlying mechanism of EMDR did not alter the effects of EM. Conclusions: Recall+EM appears to be quite robust against the effects of prior expectations. As Recall+EM is the core component of EMDR, expectancy effects probably contribute little to the effectiveness of EMDR treatment. Eye movement desensitization and reprocessing (EMDR) has been extensively validated as an effective psychological treatment for posttraumatic stress disorder (PTSD; Bisson et al., 2007;Bradley, Greene, Russ, Dutra, & Westen, 2005;Chen et al., 2014). It involves focusing simultaneously on traumatic memories along with associated thoughts, emotions, and bodily sensations, as well as bilateral stimulation, generally in the form of horizontal eye movements (EM). A recent meta-analysis shows that these EM are crucial, as they have a significant, additive memory degrading effect to mere exposure to the traumatic memory (Lee & Cuijpers, 2013). Currently, the most evidenced account for the beneficial effects of EM is provided by the working memory (WM) theory (Andrade, Kavanagh, & Baddeley, 1997;Maxfield, Melnyk, & Hayman, 2008;van den Hout & Engelhard, 2012). WM is a cognitive system for temporal storage and manipulation of information (Baddeley, 2012). It has been consistently demonstrated that performance deteriorates when two tasks make demands on the same WM resources, indicating that WM has limited capacity. 
Focussing on a memory (van Veen et al., 2015) and engaging in EM, both tax WM resources (Engelhard, van Uijen, & van den Hout, 2010;van den Hout et al., 2011van den Hout et al., , 2010. Simultaneously performing these tasks therefore reduces the sensory quality of the memory, making it less vivid and less emotional. It is likely that, after EMDR, the less rich, degraded memory is reconsolidated into long-term storage which may explain longterm effects (Gunter & Bodner, 2008;Nader & Hardt, 2009;van den Hout & Engelhard, 2012). The memory degrading effects of EM, the core intervention of EMDR, are reproducible in a laboratory setting (van den Hout & Engelhard, 2012;van den Hout, Muris, Salemink, & Kindt, 2001). During a procedurally simple 'EMDR-lab model', healthy participants recall aversive autobiographical memories and rate them in terms of vividness and emotionality. Then, the memories are recalled while tracking a moving dot on a computer screen, which induces horizontal eye movements (Recall +EM), or memories are recalled while keeping the eyes still (control condition; Recall Only). Afterwards, both memories are again recalled and rated on vividness and emotionality. Using this laboratory design, different variables can be manipulated in order to examine the underlying working mechanisms of EMDR. Studies that have adopted the model have shown, for example, that memories are not only degraded by EM, but also by other WM taxing dual tasks, such as complex counting (van den Hout et al., 2010) and mindful breathing , and that not only negative memories can be altered by EM, but also positive memories (Engelhard et al., 2010;Hornsveld et al., 2011;Littel, van den Hout, & Engelhard, 2016), providing evidence for the abovementioned WM account. As with all biomedical and psychological interventions, the question arises to what extent the high effectiveness of EMDR can be attributed to non-specific factors, such as treatment credibility and expectancy. Both psychotherapy outcome expectancy and treatment credibility are shown to be positively related to treatment outcomes (Constantino, Arnkoff, Glass, Ametrano, & Smith, 2011;Taylor, 2003). With regard to EMDR specifically, previous authors have asserted the view that beneficial effects of the treatment are incidental and might be explained by credibility, expectation for improvement, experimental demand, therapist enthusiasm, and therapist allegiance (Devilly, 2005;Herbert et al., 2000;Lohr et al., 1992;Lohr, Lilienfeld, Tolin, & Herbert, 1999). Within the EMDR-lab model, non-specific intervention effects are by default controlled for by excluding participants with detailed prior knowledge of EMDR's effectiveness and/or the underlying mechanism. Hence, the commonly observed superiority of Recall+EM over Recall Only cannot be attributed to positive expectancies of the EM manipulation. Nevertheless, whether prior knowledge of EMDR truly affects the results remains an empirical question. Addressing this question is relevant for EMDR research, as it will indicate whether it is necessary to exclude participants with prior knowledge. Most importantly, however, it will reveal whether expectancy effects might contribute to the beneficial effects of EMDR treatment, which has high clinical relevance. Therefore, in Experiment 1 we tested two pre-specified and competing hypotheses regarding the role of non-experimentally manipulated, prior knowledge of EMDR on the effects of Recall+EM vs. 
Recall Only: (1) prior knowledge strengthens the decreases in memory vividness and emotionality after Recall+EM vs. Recall Only; and (2) prior knowledge does not affect memory degrading by Recall+EM vs. Recall Only. We used an experimental design, and included individuals with and without prior knowledge of EMDR. We used a Bayesian approach to critically test which of the two hypotheses is most likely. Ethics statement The research was conducted according to the principles expressed in the Declaration of Helsinki. In both experiments in this article, healthy human participants were tested. All participants provided written informed consent. In giving their consent, participants acknowledged to have read and to have agreed with the rules regarding participation, and the researchers' commitments and privacy policy. Participants were informed that they could stop the experiment at any time without the need to provide a reason for stopping. All gathered data were analysed anonymously. Afterwards participants were debriefed. Participants Forty-three individuals (M age = 21.58, SD age = 1.87, 9 males, 34 females) participated in the first study. Inclusion was limited to individuals over 18 without current self-reported psychopathology, and who reported never to have received EMDR therapy. Participants were mainly students recruited at Utrecht University. Based on their self-reported specific knowledge of the memory degrading effects of EMDR and/or its proposed underlying working mechanism, they were divided into a 'prior knowledge group' (n = 22) or a 'no knowledge group' (n = 21). See Table 1 for the specific inclusion criteria. See Table 2 for demographics per group. All participants provided written informed consent and received course credit or financial reimbursement. Materials and procedure At the start of the experiment, participants were instructed to select two aversive, autobiographical memories and grade the emotional intensity of the memories on a scale from 1 to 10. If memories were graded < 6 or > 9, they were considered either not aversive enough or too aversive, and participants were instructed to select a different memory. In line with the Dutch EMDR protocol (de Jongh & Ten Broeke, 2012), they had to 'play' these memories in their minds as vividly as possible and take a 'screenshot' of the most emotionally intense moment. Participants labelled the resulting images with a keyword, which was used to refer to the selected memories in the remainder of the experiment. For counterbalancing purposes, participants then ranked the images based on emotional intensity. A computerized dual taxation task was used to simulate the EM component of EMDR. Participants were instructed to recall one of their aversive memories. Meanwhile they had to track a horizontally moving dot (1 Hz) on a black screen (Recall+EM), or watch a black screen without a dot (Recall Only). The moving dots and blank screens were displayed during six intervals of 24 s separated by 10 s breaks (cf. van Schie, van Veen, Engelhard, Klugkist, & van den Hout, 2016;van Veen et al., 2015). Before (pretest) and after (posttest) each intervention participants recalled the aversive memory for 10 s and rated it on vividness and emotionality using Visual Analog Scales (VASs) ranging from 0 (not vivid/not unpleasant) to 100 (very vivid/very unpleasant). The experimental task was programmed in and presented with OpenSesame (Mathôt, Schreij, & Theeuwes, 2012). Participants were seated approximately 50 cm from the screen. 
Data analysis Data were analysed with the BIEMS software package, which uses a Bayesian model selection criterion (see Mulder et al., 2009;Mulder, Hoijtink, & de Leeuw, 2012;Mulder, Hoijtink, & Klugkist, 2010). BIEMS evaluates the relative likelihood of the data for different competing hypotheses, which is expressed as a Bayes Factor (BF; Kass & Raftery, 1995). BIEMS specifically computes a BF for a constrained hypothesis against an unconstrained hypothesis. A BF of 1 means that there is equal support for a specified constrained hypothesis and the unconstrained model; one does not outperform the other. BF > 1 indicates that the study hypothesis outperforms the unconstrained model, whereas BF < 1 means the opposite. It is possible to directly compare BFs of different models, when each constrained BF is calculated against the same unconstrained model. Because we were interested in evaluating the relative likelihood of the data under different competing hypotheses, we did not use null hypothesis significance testing (NHST). In NHST one can only gather evidence against the null hypothesis, but never in favour of the null hypothesis, which makes evaluating hypotheses within the current study and within the field of experimental psychopathology largely unsuitable (Krypotos, Blanken, Arnaudova, Matzke, & Beckers, 2016). In the current study, there were two groups (prior knowledge and no knowledge) and two intervention conditions (Recall+EM and Recall Only). Assessment of memory vividness and emotionality took place before the interventions (pretest) and immediately after the interventions (posttest). Pre-post difference scores were calculated for vividness and emotionality ratings in Recall+EM and Recall Only conditions, with higher scores indicating larger decreases in response to an intervention. The following two competing hypotheses were compared: (1) prior knowledge strengthens decreases in memory vividness and emotionality after EM compared to Recall Only; and (2) prior knowledge does not affect effects of EM relative to Recall Only on memory vividness and emotionality. See Table 3 for the constraints of both hypotheses. Results Figure 1 suggests that the observed data patterns for memory vividness are in line with hypothesis 1. Bayesian analyses showed a BF of 4.99 for hypothesis 1, and a BF of 3.40 for hypothesis 2. Therefore, both models are supported by the data, but model 1 appears to be somewhat (1.5 times) more likely than model 2. This is confirmed by the raw data-difference scores (i.e., the decrease after Recall+EM minus decrease after Recall Only), showing a slightly larger difference for the prior knowledge group (M dif = 18.74) than for the no knowledge group (M dif = 12.18). For memory emotionality, Bayesian analyses showed a BF of 3.91 for hypothesis 1, and a BF of 4.85 for hypothesis 2. Again, both models are supported by the data, although model 2 appears to be slightly (1.2 times) more likely than model 1 (see Figure 1). This is confirmed by the raw data-difference scores, showing no approximately equal differences for the prior knowledge group (M dif = 11.01) and the no knowledge group (M dif = 11.07). See Appendix for mean (SD) vividness and emotionality decreases per group. Discussion The aim of this first experiment was to test whether preexisting knowledge of the proposed mechanism of EMDR would increase the commonly observed, memory degrading effects of EM in a laboratory setting. 
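BIEMS evaluates such inequality-constrained hypotheses against an unconstrained (encompassing) model, and the resulting Bayes factor can be understood, roughly, as the proportion of posterior draws that satisfy the constraints divided by the proportion of prior draws that satisfy them. The sketch below illustrates this idea for a single constraint of the form "the decrease after Recall+EM exceeds the decrease after Recall Only". It is a didactic simplification under assumed normal likelihoods and vague priors, not the actual BIEMS algorithm, and the numbers are placeholders rather than study data.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-post decrease scores (higher = larger decrease), one value
# per participant and condition; these numbers are illustrative only.
em   = rng.normal(20, 15, size=40)   # Recall+EM condition
only = rng.normal(8, 15, size=40)    # Recall Only condition

def bf_constrained_vs_unconstrained(x, y, n_draws=200_000, prior_sd=50.0):
    """BF for the constraint mean(x) > mean(y), via the encompassing-prior idea:
    fit (posterior proportion satisfying the constraint) divided by
    complexity (prior proportion satisfying it)."""
    # Normal-approximation posterior draws for the two condition means.
    mx = rng.normal(x.mean(), x.std(ddof=1) / np.sqrt(len(x)), n_draws)
    my = rng.normal(y.mean(), y.std(ddof=1) / np.sqrt(len(y)), n_draws)
    fit = np.mean(mx > my)
    # Vague, identical priors for both means -> complexity is about 0.5 here.
    px = rng.normal(0.0, prior_sd, n_draws)
    py = rng.normal(0.0, prior_sd, n_draws)
    complexity = np.mean(px > py)
    return fit / complexity

print(round(bf_constrained_vs_unconstrained(em, only), 2))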
The contradictory observation that both hypotheses are supported by the data can be explained by the reductions in memory emotionality after Recall Only (see Figure 1, grey bars). Because previous studies only show small reductions (e.g., Gunter & Bodner, 2008) or even increases in emotionality after Recall Only (e.g., Engelhard et al., 2010;van den Hout et al., 2001), these reductions are unexpectedly high. It is evident from the data that the actual decrease in emotionality for Recall +EM is larger for the prior knowledge group (M = 22.60) compared to the no knowledge group (M = 8.80; see Figure 1, black bars). However, because both models encompassed constraints that defined decreases in Recall+EM in relation to decreases in Recall Only, the relative decrease was not larger for the prior knowledge group (M dif = 11.01) compared to the no knowledge group (M dif = 11.07; see Figure 1, black bars vs. grey bars). Although the role of prior knowledge seems to be relatively small, the results suggest that, in future experimental studies investigating EMDR components, individuals with prior knowledge of the underlying mechanism of EMDR should (continue to) be excluded from participation, at least when a between-group design is adopted. It is important to note that the current study was not randomized. Therefore, it cannot be ruled out that the effects were attributable to other specific characteristics of the samples. Furthermore, specific type and amount of prior knowledge were not assessed and might have varied in the prior knowledge group. As such, we decided to manipulate participants' expectations about the effects of Recall+EM in a second experiment. We first considered inducing positive expectations in one group and comparing effects of Recall+EM vs. Recall Only with a control group without prior knowledge or expectations. However, a stronger test would be to see if the positive effects of Recall+EM would survive a manipulation of participants' knowledge of the proposed working mechanism of EMDR. Therefore, one group was told that, as a result of EM, memories should become less vivid and emotional (correct information) and the other that, as a result of EM, memories should become more vivid and emotional (incorrect information). As in Experiment 1, we measured memory vividness and emotionality before and after the Recall+EM and Recall Only interventions. Participants Forty-two participants were tested in Experiment 2. Inclusion was limited to individuals over 18 without current psychopathology, who reported to never have received EMDR therapy, and reported to have no specific knowledge of the memory degrading effects of EMDR and/or its proposed underlying working mechanism (similar to participants in the no knowledge group of Experiment 1). Two participants were excluded from the analyses because the pretest emotionality scores of their selected memories deviated ≥ 2.5 SD from the mean (Ratcliff, 1993) and low arousing, emotionally neutral memories have been found to be insensitive to the Recall+EM intervention (Littel, Remijn, Engelhard, & van den Hout, 2017;van den Hout, Eidhof, Verboom, Littel, & Engelhard, 2014). The final sample comprised 40 participants (M age = 21.85, SD age = 3.51, 13 males, 27 females). They were randomly assigned to a 'correct information group' (n = 20) or an 'incorrect information group' (n = 20). The two groups did not differ with regard to any of the measured demographic variables (see Table 4). 
All participants provided written informed consent and received course credit or financial compensation for participation.

Materials and procedure

Prior to the experiment, participants were given information about EMDR. All participants were correctly informed that EMDR is used to treat PTSD, that it is highly effective, and that it is often called 'a wonder therapy', but that researchers are only just beginning to investigate how it works. Participants in the correct information condition were then told that this research repeatedly shows that, due to EMDR, traumatic memories become less vivid, less clear, and less accessible, and that, consequently, the memories become less unpleasant. Participants in the incorrect information condition were falsely told that research repeatedly shows that, as a consequence of EMDR, traumatic memories become more vivid, clearer, and better accessible, and that, because of this, the memories temporarily become more unpleasant. However, because the memories temporarily become better available, people can process them better in the long run. After having received correct or incorrect information about the working mechanism of EMDR, participants were instructed to select two aversive, autobiographical memories. Then, participants proceeded to the dual taxation task, during which they recalled their memories while making EM (Recall+EM) or keeping their eyes stationary (Recall Only). Before and after the interventions, memories were scored on vividness and emotionality (cf. procedure Experiment 1). Finally, as a retrospective manipulation check, the participants indicated how credible they found the provided information about the working mechanism of EMDR, both before and after they had received the EM intervention. They used 10 cm Visual Analog Scales (VASs) ranging from 0 (not very credible) to 100 (very credible).

Data analysis

Data were again analysed with the BIEMS software package, using a Bayesian model selection criterion (Mulder et al., 2009, 2010, 2012). Three hypotheses were compared in the analysis of self-reported credibility of the provided correct and incorrect information before and after the EM intervention: (1) both types of information are equally credible at first, but after the EM intervention the correct information becomes more credible; (2) the correct information is more credible than the incorrect information at first, which is still the case after the EM intervention; and (3) both types of information are equally credible at first, and remain so after the EM intervention. See Table 5 for the constraints of the three hypotheses. The main analysis of memory vividness and emotionality concerned two groups (correct and incorrect information) and two intervention conditions (Recall+EM and Recall Only). Pre-post change scores were calculated for vividness and emotionality ratings in the Recall+EM and Recall Only conditions, with higher scores indicating larger decreases over time. Here, two hypotheses were compared: (1) providing information on the working mechanism of EMDR affects changes in memory vividness and emotionality after EM vs. Recall Only, with larger decreases after correct information than after incorrect information; and (2) providing such information does not affect the effects of EM vs. Recall Only on memory vividness and emotionality. See Table 6 for the constraints of both hypotheses.
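Because both experiments rest on comparing informative hypotheses through BFs that were each computed against the same unconstrained model, a minimal sketch of that comparison may help. It only reproduces the ratio logic described above, using the values reported for Experiment 1; it is not BIEMS itself, and the helper function name is illustrative.

```python
# Minimal sketch: comparing two informative hypotheses via Bayes Factors (BFs)
# that were each computed against the same unconstrained model (as BIEMS does).

def bf_between(bf_h1_vs_unconstrained: float, bf_h2_vs_unconstrained: float) -> float:
    """BF of hypothesis 1 versus hypothesis 2, given each hypothesis's BF against
    the same unconstrained model: BF_12 = BF_1u / BF_2u."""
    return bf_h1_vs_unconstrained / bf_h2_vs_unconstrained

# Values reported for Experiment 1
print(round(bf_between(4.99, 3.40), 2))  # ~1.47 -> "about 1.5 times" in favour of H1 (vividness)
print(round(bf_between(3.91, 4.85), 2))  # ~0.81 -> H2 about 1.2 times more likely (emotionality)
```

Applied to the Experiment 2 BFs reported below, the same ratio gives the factors of roughly 3.5 (vividness) and 2.2 (emotionality) quoted in the Results.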
Manipulation check for credibility

Results of the self-reported credibility of the provided information on the working mechanism of EMDR correspond best with model 1. As can be seen in Figure 2, participants assessed the correct and incorrect information as approximately equally credible. Mean credibility scores per group are shown in Figure 2.

Vividness and emotionality

As can be seen in Figure 3, there is a drop in memory vividness after Recall+EM relative to Recall Only. There appear to be no or only small differences between the correct and incorrect information groups. The difference score (i.e., the decrease after Recall+EM minus the decrease after Recall Only) for the correct information group (M dif = 13.68) is only slightly smaller than for the incorrect information group (M dif = 15.64). In addition, a drop in memory emotionality can be observed after Recall+EM relative to Recall Only. Again, there appear to be no or only small differences between the correct (M dif = 5.73) and incorrect information groups (M dif = 3.19). In line with the observed data patterns, Bayesian analyses showed BFs of only 1.17 and 1.77 for vividness and emotionality for hypothesis 1, but BFs of 4.07 and 3.95 for hypothesis 2. Overall, hypothesis 2 outperforms hypothesis 1. Given the data, hypothesis 2 is 3.5 times more likely than hypothesis 1 for memory vividness and 2.2 times more likely for memory emotionality. Raw mean (SD) vividness and emotionality decreases can be found in the Appendix.

Discussion

Using a randomized design, it was demonstrated that providing information about the mechanism of EMDR to individuals who have no prior knowledge of EMDR did not increase the degrading effects of EM on the vividness and emotionality of their memories. Furthermore, the effects of Recall+EM vs. Recall Only survived the induction of negative expectations. Importantly, these findings could not be attributed to the credibility of the information. The correct and incorrect descriptions of the EMDR mechanism were found to be highly (and equally) credible at the start of the experiment. Interestingly, the correct information became more credible after the EM intervention, whereas the incorrect information became less credible. Participants might therefore be aware that their memories become less vivid and emotional due to EM. It must be noted that the credibility of the provided information was assessed retrospectively, and therefore might have been biased by the effects of the intervention. Assessing credibility directly after giving the information was not possible, as it could have caused participants to question the authenticity of the information. Nevertheless, despite the retrospective assessment, credibility still changed from pretest to posttest. Experiment 2 replicated the observation that Recall+EM is effective in reducing memory vividness and emotionality. It also showed that this EM effect is minimally affected by a priori raised expectations. Even when participants were told and expected that memories become more vivid and more emotional after making EM, they still reported strong reductions in vividness and emotionality. The participants started to question the provided information, whereas they believed it at first. These results contrast with the view held by several previous authors that beneficial effects of EMDR treatment are incidental and can be explained by credibility, expectancy, or experimental demand (Devilly, 2005; Herbert et al., 2000; Lohr et al., 1992, 1999).
General discussion

We investigated whether expectations about the underlying mechanism of EMDR would alter the commonly observed, memory degrading effects of Recall+EM using two experiments: in the first we examined the role of pre-existing knowledge, in the second we experimentally manipulated prior knowledge. Overall, the data provided more evidence for the hypothesis that knowledge of the working mechanism of EMDR does not influence its effects. As observed in Experiment 2, providing information prior to an EMDR lab experiment does not affect the memory degrading effects of Recall+EM. However, as seen in Experiment 1, previously obtained knowledge of the mechanism of EMDR could boost the effects of Recall+EM. Because we did not find compelling evidence for one hypothesis over the other, the influence of prior knowledge seems to be relatively modest. These results are highly relevant to clinical practice. For many EMDR therapists it is common practice to explain to their patients how EMDR works and what to expect, thereby raising expectations (Shapiro & Forrest, 2016). Other therapists do the opposite and make very clear to the patients not to expect anything, as not experiencing an expected treatment outcome might be counter-therapeutic and lead to drop-out from therapy. The current results indicate that both strategies presumably have little impact, if any, on the beneficial effects of EMDR on emotional memories. Although the BFs reported here indicate to what extent the current data support one hypothesis over the other, it must be noted that the BFs (range 3.40-4.99) are not substantial or decisive, but only indicate 'moderate support' (Kass & Raftery, 1995). Therefore, replication or Bayesian updating with new data is recommended (Konijn, van de Schoot, Winter, & Ferguson, 2015). Furthermore, results should be replicated or updated in a treatment setting. It is possible that positive demand characteristics might affect memory attenuation differently compared to a laboratory setting. To summarize, the results of the present study indicate that Recall+EM decreases memory vividness and emotionality to a greater extent than Recall Only, and that this effect is quite robust against prior expectations. As Recall+EM is the core component of EMDR, it can be speculated that credibility and expectancy effects contribute little to the effectiveness of EMDR treatment.

Highlights
(1) Pre-existing knowledge of the working mechanism of EMDR has a relatively small impact on the memory degrading effects of memory recall + eye movements.
(2) Providing participants with knowledge of the working mechanism of EMDR does not affect the memory degrading effects of memory recall + eye movements.
(3) Recall + eye movements, the core intervention in EMDR, appears to be quite robust against prior expectations.
2018-04-03T05:06:29.608Z
2017-06-19T00:00:00.000
{ "year": 2017, "sha1": "158bb7b62a51484f59545c6b39af6f0c1f40166c", "oa_license": "CCBY", "oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/20008198.2017.1328954?needAccess=true", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "158bb7b62a51484f59545c6b39af6f0c1f40166c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
229356735
pes2o/s2orc
v3-fos-license
Image Patch-Based Net Water Uptake and Radiomics Models Predict Malignant Cerebral Edema After Ischemic Stroke

Malignant cerebral edema (MCE) after an ischemic stroke results in a poor outcome or death. Early prediction of MCE helps to identify subjects that could benefit from a surgical decompressive craniectomy. Net water uptake (NWU) in an ischemic lesion is a predictor of MCE; however, CT perfusion and lesion segmentation are required. This paper proposes a new Image Patch-based Net Water Uptake (IP-NWU) procedure that only uses non-enhanced admission CT and does not need lesion segmentation. IP-NWU is calculated by comparing the density of ischemic and contralateral normal patches selected from the middle cerebral artery (MCA) area using standard reference images. We also compared IP-NWU with the Segmented Region-based NWU (SR-NWU) procedure, in which segmented ischemic regions from follow-up CT images are overlaid onto admission images. Furthermore, IP-NWU and its combination with imaging features are used to construct predictive models of MCE with a radiomics approach. In total, 116 patients with an MCA infarction (39 with MCE and 77 without MCE) were included in the study. IP-NWU was significantly higher for patients with MCE than for those without MCE (p < 0.05). IP-NWU can predict MCE with an AUC of 0.86. There was no significant difference between IP-NWU and SR-NWU, nor between their predictive efficacy for MCE. The inter-reader and interoperation agreement of IP-NWU was exceptional according to the Intraclass Correlation Coefficient (ICC) analysis (inter-reader: ICC = 0.92; interoperation: ICC = 0.95). By combining IP-NWU with imaging features through a random forest classifier, the radiomics model achieved the highest AUC (0.96). In summary, IP-NWU and radiomics models that combine IP-NWU with imaging features can precisely predict MCE using only admission non-enhanced CT images scanned within 24 h from onset.

INTRODUCTION

Stroke is the leading cause of death and disability, resulting in 5.9 million deaths and 102 million disability-adjusted life-years worldwide (1). Ischemic stroke accounts for about 85% of the total incidence (2). A focal occlusion at the middle cerebral artery (MCA) leads to large hemispheric infarctions in some patients since the MCA supplies a large amount of blood to the brain. Progressive cerebral edema usually results in a space-occupying infarct. The edema increases both brain volume and intracranial pressure. In the first 1-3 days after the onset of stroke, an abrupt neurological decline associated with displacement of midline brain structures may occur in ∼10% of the patients with ischemic stroke of the MCA (3-5). These tissue shifts and the subsequent brain herniation increase the mortality rate to nearly 80% and are thus termed malignant cerebral edema (MCE) or malignant MCA infarction (6,7). MCE can be relieved by decompressive craniectomy performed within 48 h of stroke onset or before herniation (3,8). Early and precise prediction of MCE can help identify the patients who could potentially benefit from a surgical decompressive craniectomy. Moreover, it can also help clinicians prepare for possible deterioration and communicate with patients and their family members about the goals of care (5). Compared with MRI, CT is the most favorable imaging modality for the prediction of MCE due to its fast acquisition and widespread availability. For example, Minnerup et al.
(6) proposed the use of CT-based cerebrospinal fluid (CSF) volume as a predictor of MCE. The ischemic lesion volume must be measured manually from the admission perfusion CT images (cerebral blood volume, CBV). The clot burden score and collateral score measured from CT angiography have also been considered as predictors of MCE (9). Ong et al. developed the Enhanced Detection of Edema in Malignant Anterior Circulation Stroke (EDEMA) to predict the risk of lethal malignant edema; it includes two CT imaging variables at 24 h after the midline shift and basal cistern effacement (10). The EDEMA score showed a higher positive predictive value (93%) than the baseline image markers, such as the Alberta Stroke Program Early CT score (ASPECTS) or hyperdense vessel sign. Cheng et al. added the National Institute of Health Stroke Score (NIHSS) into EDEMA and validated it using a dataset of Chinese patients (11). Two recent meta-analysis studies summarized additional potential predictors (12,13). Net water uptake (NWU) measured by non-enhanced CT is useful for predicting malignant infarctions. A CBV map driven from CT perfusion (CTP) can be used to precisely locate the early ischemic infarct core and a non-enhanced CT is applied to quantitatively measure density changes. The NWU is calculated using the formula 1-D Ischemic /D Normal , where D Ischemic (HU) is the density of the ischemic core with hypoattenuation and D Normal is the density of the area of the contralateral normal tissue (14)(15)(16). Radiomics aims to extract high-dimensional and quantitative features from medical images that can be used to build predictive models with machine learning methods to support clinical decisions (17,18). Radiomics has played an important role in the study of many diseases such as cancers (19,20). For stroke management, radiomics have been used to predict recanalization in ischemic stroke and hematoma expansion (21,22). However, no study on predicting MCE by radiomics has been reported. Regarding NWU and the prediction of MCE, most previous studies required multimodal CT images including CTP or CTA. Some dedicated software packages involve tedious semiautomatic or even manual segmentation. Hence, we propose a new way of calculating NWU that uses only non-enhanced admission CT and does not require CTP, CTA, or segmentation of the ischemic core. We hypothesize that in patients with ischemic stroke due to MCA occlusion, a non-enhanced admission CT can be used to predict MCE by calculating NWU through pre-defined image patches on the affected and non-affected MCA areas. Moreover, combining NWU with clinical and imaging features enables the construction of radiomics models that can predict MCE at an early stage after an ischemic MCA stroke. Participants and the Dataset This retrospective single-center study was approved by the Medical Ethics Committee of the General Hospital of Northern Theater Command and no informed consent was required by the committee. The selection of patients was carried out in accordance with inclusion and exclusion criteria. 
The inclusion criteria for this study were the following: (1) patients who were diagnosed with an MCA infarction with the occlusion at the MCA M1 segment; (2) patients who had both non-enhanced CT images within 24 h on admission and non-enhanced CT images after 24 h as the follow-up scan; (3) demographic and clinical information was available, including the time from stroke onset to the CT scans, the NIHSS score, the use of interventional thrombectomy (IT), the use of bone flap surgery, and the outcome (death from stroke or not); and (4) the development of an MCE was known. Using these criteria, we selected 125 patients from archive data on patients who were admitted to the General Hospital of Northern Theater Command between April 2017 and December 2018. Nine patients were further excluded due to the poor quality of the admission CT images at 24 h. Finally, a total of 116 patients were included in the study. We declared patients to have an MCE if they had infarcts with a mass effect during the follow-up non-contrast CT after admission, had clinically experienced a cerebral hernia due to the mass effect of edema, received bone flap surgery, or died due to the mass effect. This definition is the same as that given by Broocks et al. (16). CT images were acquired with a Discovery CT750 HD scanner (GE Healthcare, Milwaukee, WI, USA) with a tube voltage of 120 kVp, an x-ray tube current of 300 mA, the Axial Head protocol, a slice thickness of 5.0 mm, 20 mm spacing between slices, a matrix of 512 × 512, and voxel spacing of 0.449/0.449 mm. The CT image data are available upon request after approval from the General Hospital of Northern Theater Command, China.

Net Water Uptake Calculated by Image Patches

Given the fact that an early hypoattenuated infarct (lesion core) is often not visible, or that it can be difficult to precisely locate in non-enhanced CT images, we propose a new way of calculating net water uptake using CT patches determined using the standard reference images. After reviewing all the images, two experienced neuroradiologists selected four slices as the standard reference images and marked two mirrored patches of 30 × 30 voxels from the right and left MCA areas at each slice, as shown in Figure 1. The criteria for determining the reference images included: (1) patches should be located in the upper temporal lobe, the lateral parietal lobe, or the border area of the frontal, temporal, and parietal lobes; (2) the patches should avoid old lesions; (3) patches should be located in the infarct area if there is an obvious infarct area; and (4) regions with CSF should be avoided to eliminate its effect on NWU. Subsequently, blinded to any clinical information, two other neuroradiologists independently located four pairs of patches from the images of each patient using these reference images and following the criteria mentioned above. Within each pair of patches, the one with hypoattenuation was considered to be ischemic and its density (HU) was taken as D_Ischemic; the other was the normal patch with density D_Normal. Image patch-based net water uptake (IP-NWU) was calculated with the formula:

IP-NWU = 1 − D_Ischemic / D_Normal.

Net Water Uptake Calculated by Segmented Regions

We determined another way of calculating NWU by manually segmenting the ischemic regions. The result was named the segmented region-based NWU (SR-NWU).
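For concreteness, the sketch below shows one way the patch-based formula defined above could be computed, before turning to the segmented-region variant described next. It assumes the four mirrored 30 × 30 patch pairs have already been extracted from the admission CT as arrays of Hounsfield units and that the densities are averaged over the four pairs; the array and function names are illustrative, not the authors' code.

```python
# Minimal sketch of the patch-based IP-NWU calculation (assumed workflow).
import numpy as np

def ip_nwu(ischemic_patches, normal_patches):
    """IP-NWU = 1 - D_ischemic / D_normal, with densities (HU) averaged over all
    voxels of the four ischemic and the four contralateral (normal) patches."""
    d_ischemic = np.mean([p.mean() for p in ischemic_patches])
    d_normal = np.mean([p.mean() for p in normal_patches])
    return 1.0 - d_ischemic / d_normal

# Toy example with synthetic HU values (hypoattenuated vs. normal grey/white matter)
rng = np.random.default_rng(0)
ischemic = [rng.normal(28, 2, size=(30, 30)) for _ in range(4)]
normal = [rng.normal(33, 2, size=(30, 30)) for _ in range(4)]
print(f"IP-NWU = {ip_nwu(ischemic, normal):.2%}")  # roughly 15% net water uptake
```

Averaging the densities over the four pairs before taking the ratio is an assumption; computing one NWU value per pair and averaging the four ratios is an equally plausible reading of the description above.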
As shown in Figure 2, first, both the admission CT images (<24 h; Image-A) and the follow-up CT images (>24 h; Image-F) were aligned and normalized to MNI-152 space by linear affine transformation with 12 degrees of freedom. Second, four slices were selected from Image-F using the standard reference images and the ischemic regions were manually segmented. Finally, these regions were overlaid onto Image-A to calculate D_Ischemic from the CT intensity in Image-A, and D_Normal was determined from the mirrored region.

Histogram Based Imaging Features

To fully utilize the information contained in the selected patches, we also calculated the voxel-wise IP-NWU maps (1 − D_Ischemic/D_Normal evaluated voxel by voxel, giving four 30 × 30 matrices with elements ranging from 0 to 1.0). Based on the four maps, the discrete histogram function can be depicted as

h(r_n) = Y_n, n = 1, 2, ..., N,

where r_n is the IP-NWU of the n-th grade and Y_n is the number of voxels with IP-NWU of r_n. Here N was set at 8. For the univariate data Y_1, Y_2, ..., Y_N, five parameters could be calculated: (1) standard deviation (s); (2) slope; (3) entropy; (4) skewness (g); and (5) kurtosis. The slope was defined as the gradient between the minimum and maximum points among the vector of Y_1, Y_2, ..., Y_N. Entropy was defined as

H = − Σ_{n=1}^{N} p_n log2(p_n),

where p_n is the ratio of the number of voxels with IP-NWU of r_n to the total number of voxels. H indicates the average amount of information in the image. The skewness was defined as

g = [ (1/N) Σ_{n=1}^{N} (Y_n − Ȳ)^3 ] / s^3,

where s is the standard deviation and Ȳ is the mean. The skewness is near zero for symmetric data, negative for data skewed left, and positive for data skewed right. The kurtosis is given as

k = [ (1/N) Σ_{n=1}^{N} (Y_n − Ȳ)^4 ] / s^4 − 3.

Hence, k is zero for the standard normal distribution; it is positive for a "heavy-tailed" distribution and negative for a "light-tailed" distribution. In total, 13 image features, including Y_1, Y_2, ..., Y_8 and the five parameters defined above, were employed to construct the radiomics models for predicting MCE. The calculation of the 13 radiomics features was done using Python code written by our group.

Machine Learning Algorithms

Three machine learning algorithms, support vector machine (SVM), logistic regression (LR), and random forest (RF), were employed as classifiers to predict MCE using IP-NWU, the 13 image features, and three clinical features (age, gender, and NIHSS score). For a training dataset D = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, y_i ∈ {−1, +1}, the SVM algorithm draws each entity (x_i, y_i) in the dataset as a point in n-dimensional space (n is the number of features) and each feature is treated as a specific coordinate. The classification is carried out by finding a hyperplane (ω, b) that maximizes the margin between the two categories (23-25). The learned parameters ω and b can be determined by solving

min_{ω,b} (1/2) ||ω||^2  subject to  y_i (ω^T x_i + b) ≥ 1, i = 1, 2, ..., m.

LR is a classic supervised learning method that models the log odds (or logit) by linearly combining the independent variables (26). For a dataset {(x_i, y_i)}_{i=1}^{m}, y_i ∈ {0, 1}, LR estimates ω and b by maximizing the log-likelihood

ℓ(ω, b) = Σ_{i=1}^{m} [ y_i ln p(y = 1 | x_i) + (1 − y_i) ln(1 − p(y = 1 | x_i)) ],

where

p(y = 1 | x) = e^{ω^T x + b} / (1 + e^{ω^T x + b}).

RF is a parallel-style ensemble learning method that uses a decision tree as the base learner and bagging as the ensemble strategy (27,28). Each bootstrap sample generated through bagging with m observations was used to train one decision tree, and a final consensus estimate was obtained by combining all individual bootstrap estimates.
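A minimal sketch of how the 13 histogram-based features defined above could be computed from the voxel-wise IP-NWU maps and fed to a random forest is given below, using the Scikit-learn tools, the leave-one-out evaluation, and the optimal hyperparameters reported in the following paragraphs. The toy data and variable names are illustrative; this is not the authors' code.

```python
# Sketch of the 13 histogram features and a random-forest/LOOCV evaluation (assumed workflow).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

N_GRADES = 8  # N = 8 IP-NWU grades, as in the paper

def histogram_features(nwu_maps):
    """Return [Y_1..Y_8, std, slope, entropy, skewness, kurtosis] for four maps."""
    values = np.concatenate([m.ravel() for m in nwu_maps])
    # Y_n: number of voxels whose IP-NWU falls in the n-th grade (pooled over the four maps)
    counts, _ = np.histogram(values, bins=N_GRADES, range=(0.0, 1.0))
    y = counts.astype(float)
    mean, std = y.mean(), y.std()                       # population standard deviation s
    # slope: gradient between the minimum and maximum points of (Y_1, ..., Y_N) -- one reading
    n_min, n_max = int(np.argmin(y)), int(np.argmax(y))
    slope = (y[n_max] - y[n_min]) / (n_max - n_min) if n_max != n_min else 0.0
    # entropy: H = -sum p_n log2 p_n, with p_n = Y_n / total number of voxels
    p = y / y.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # skewness g and excess kurtosis k of the bin counts Y_n
    skewness = np.mean((y - mean) ** 3) / std ** 3
    kurtosis = np.mean((y - mean) ** 4) / std ** 4 - 3.0
    return np.concatenate([y, [std, slope, entropy, skewness, kurtosis]])

# Toy dataset: one feature vector per "patient" (histogram features plus mean IP-NWU)
rng = np.random.default_rng(1)
X, labels = [], []
for label in (0, 1):                                     # 0 = no MCE, 1 = MCE
    for _ in range(20):
        loc = 0.10 if label == 0 else 0.20
        maps = [np.clip(rng.normal(loc, 0.05, (30, 30)), 0, 1) for _ in range(4)]
        X.append(np.append(histogram_features(maps), np.mean(maps)))
        labels.append(label)
X, labels = np.array(X), np.array(labels)

# Random forest with the hyperparameters reported as optimal, scored with leave-one-out CV
clf = RandomForestClassifier(n_estimators=100, random_state=10)
acc = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy on the toy data: {acc:.2f}")
```

Swapping RandomForestClassifier for sklearn.svm.SVC or sklearn.linear_model.LogisticRegression would reproduce the other two classifiers compared in the study.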
A subset of p of the n features was selected randomly for the partition at each node of the tree, which effectively reduced the similarity of the trees generated from different bootstrap samples (29). One can refer to the specific literature on machine learning for more details about SVM, LR, and RF (30). All three machine learning algorithms were implemented with Scikit-learn (an open source machine learning library) with default settings. Regarding the RF classifier, there was hyperparameter tuning and cross validation. Specifically, the optimal values of two hyperparameters, random_state and n_estimators, were determined with a grid search: for random_state, a range of 2-16 with a step of 2 was used, and for n_estimators, a range of 100-1,000 with a step of 100 was used. The final optimal parameters were random_state = 10 and n_estimators = 100.

Statistics and Performance Evaluation of Predictive Models

The inter-reader agreement for IP-NWU was evaluated with the intraclass correlation coefficient (ICC). If the ICC is larger than 0.75, then the reliability of the method for calculating NWU is good. The Bland-Altman statistical method was applied to assess the agreement between the two methods of calculating NWU. For IP-NWU, the inter-reader and interoperation agreement were also assessed with the Bland-Altman method. A p-value of <0.05 was considered to indicate a significant difference. The performances of the various predictive models were evaluated with leave-one-out cross validation (LOOCV), implemented with Scikit-learn. In LOOCV, for a dataset with m samples, only one sample is left out for testing and the others are used for training a model; this process is repeated m times. LOOCV has been shown to give an almost unbiased estimate of the generalizability of machine learning models (31). The receiver operating characteristic curve (ROC), the area under the ROC curve (AUC), the confusion matrix, accuracy (ACC), sensitivity (SEN), specificity (SPC), F1-score, positive predictive value (PPV), and negative predictive value (NPV) were calculated and compared. DeLong's method was used to evaluate whether there was a significant difference between two AUCs of ROC curves (32). The Matthews correlation coefficient (MCC) was also used to evaluate and compare the performance of the binary classifiers since it considers all fields of the confusion matrix (33).

IP-NWU and Its Value With Time

IP-NWU in patients who developed MCE was significantly higher than that in those without MCE (p < 0.05; Figure 3A). The average IP-NWU in these two groups was 18.2% and 8.5%, respectively. These values were very close to those given by a previous study (18.0% and 7.0%) where semiautomatic segmentation of core lesions was done with the aid of CT perfusion images (16). IP-NWU in both groups increased with the time from onset to imaging (Figure 3B). However, the edema rate for the group with MCE was larger than that of the group without MCE.

IP-NWU as a Predictor of MCE and the Influence of IT and Time on Predictions

The optimal cut-off value of IP-NWU for discriminating between the patients with MCE and without MCE was 12.25%. Using this cut-off value, the predictive model could achieve a SEN of 0.64, an SPE of 0.91, and an ACC of 0.82. Univariate ROC curve analysis of IP-NWU resulted in an AUC of 0.86 (Figure 4A). As for the ROC curve predictions of MCE by IP-NWU, there was no significant difference between the groups including and excluding patients who underwent IT (DeLong test, p > 0.05; MCC, 0.58 vs. 0.55; Figure 4A).
This result demonstrated that interventional thrombectomy does not influence the prediction of MCE when using IP-NWU. These findings are in accordance with previous observations (16). The predictive power of IP-NWU/time and IP-NWU/log(time+1) was not higher than that of IP-NWU, according to the ROC curves shown in Figure 4B.

Image Patches vs. Segmented Regions

As for the value of NWU, there was no significant difference between the methods of image patches and segmented regions (Bland-Altman test, p > 0.05; Figure 5A). Most points of difference (113 of 116) were located within the 95% limits of agreement (1.96 standard deviations).

Inter-reader and Interoperation Agreement of IP-NWU

As for the method of IP-NWU, there was an exceptional inter-reader agreement (ICC = 0.92). The Bland-Altman method also indicated that there was no significant difference between Researcher A and Researcher B regarding the measurement of IP-NWU (p > 0.05; Figure 6A). Most points were located within the 95% limits of agreement. Meanwhile, as shown in Figure 6B, no significant difference was observed between the two measurements by the same reader (p > 0.05), indicating good reproducibility of the image patch method. The ICC of the two measurements was 0.95. As shown in Table 2, for all three machine learning algorithms (SVM, LR, and RF), adding the clinical information of age, gender, and NIHSS scores of the patients onto IP-NWU did not improve the prediction of MCE. For SVM and LR, compared with the prediction when only using IP-NWU, neither the features of "NWU + Imaging" nor "NWU + Clinical + Imaging" improved the performance of predicting MCE. However, for RF, the features of "NWU + Imaging" significantly increased the ACC to 0.91, SEN to 0.85, SPE to 0.94, AUC to 0.96, F1-score to 0.90, PPV to 0.87, NPV to 0.92, and MCC to 0.79 (Table 2, Figure 7; DeLong test, p < 0.05). When adding the clinical information, no significant improvement was observed (DeLong test, p > 0.05; MCC, 0.80 vs. 0.79). This means that the performance of the radiomics model for predicting MCE depends on both the classifiers and the features. In the current study, the combination of RF and the features of "NWU + Clinical + Imaging" had the best performance. For this model, 73 of 77 patients who did not develop MCE were correctly classified.

DISCUSSION

The aim of this study was to calculate net water uptake (NWU) using admission non-enhanced CT image patches scanned within 24 h from stroke onset and to build predictive models for MCE by combining NWU with other features using a radiomics approach. The main findings had four aspects: (1) NWU can be estimated by using the standard reference images and patches; (2) the results for IP-NWU showed no significant difference from the results obtained using segmented regions; (3) IP-NWU is a predictor of MCE; and (4) radiomics models using IP-NWU and other imaging features can predict MCE rather precisely.

Standard Reference Images and Patches: An Exceptional Method for Calculating Net Water Uptake

Net water uptake in the ischemic regions was originally proposed to identify patients with stroke onset within 4.5 h (the time window of thrombolysis) and was extended to predict malignant infarction in 2018 (15,16). It relies on the high sensitivity of CT perfusion to precisely locate the infarct core and the high specificity of non-enhanced CT to measure density.
However, for many stroke centers and patients, the CTP and its postprocessing for quantitative perfusion maps, including cerebral blood volume, cerebral blood flow (CBF), mean transit time (MTT), and time to drain (TTD), are not accessible. The early hypoattenuation of the ischemic core in nonenhanced CT images is uncertain or difficult to detect. However, the anatomic location of the MCA (the potential target of infarction) is known. Therefore, we proposed a way of calculating NWU using the standard reference images and patches. Our results showed that this method enables presenting a significant difference in NWU between patients with MCE and without MCE. Further study of the inter-reader agreement in NWU calculation demonstrated that the method had good reproducibility. In summary, using the standard reference images and patches is an exceptional way of calculating net water uptake. It is easy to implement, reliable, and does not require the aid of CTP, CT angiography, or manual segmentation. Recently, NWU has been used to quantify the treatment effects for ischemic stroke; e.g., thrombectomy and adjuvant drugs, especially for the cases with uncertain indications for treatment, such as low ASPECTS (35). Therefore, IP-NWU may be extended to similar applications. Reference Images and Patches vs. Segmented Ischemic Regions Locating the infarct core by overlaying segmented ischemic regions from follow-up CT images (>24 h after stroke onset) is an alternative method compared to the standard method of using a CBV map from CTP. No significant difference was observed between IP-NWU and SR-NWU, which supports the evidence that IP-NWU is accurate. However, it is still not clear to what extent the follow-up CT images can work as surrogates of CTP. The ischemic core, penumbra, and benign oligemia cannot be differentiated from the follow-up CT images (36). The registration error between Image-A and Image-F may have a further detrimental effect on NWU calculations. Predictor of MCE and Other Confounders IP-NWU can be a predictor of MCE for middle cerebral artery stroke patients with an AUC of 0.85. Moreover, the prediction was not influenced by interventional thrombectomy, which is in agreement with a previous study (16). The decision to take IT is based on an early infarct and hypoattenuation. Specifically, patients with a large volume of early infarct and visually evident areas of hypoattenuation are potentially excluded from IT. However, the recanalization status and its influence on IP-NWU and MCE are unknown and need further investigation (37,38). Complete recanalization does not directly indicate a good clinical outcome (39). It has been noted that the prediction of MCE is different from that of cerebral edema (40,41). Moreover, the rate of IT in our current study (15.52%, 18 of 116) might be lower than that in developed countries due to the high economic cost and late presentations at the hospital. The relationship between IP-NWU and time in patients with MCE and without MCE is also in accord with that reported in a previous study (16); i.e., NWU increases with time from stroke onset. However, there was no significant difference in the AUC for MCE prediction among NWU, NWU/time, and NWU/log(time+1). Radiomics Leverages Machine Learning and Features to Predict MCE Precisely Our radiomics model using RF and features of "NWU+Clinical+Imaging" had a comparable performance with the model reported by Broocks (16). This indicated that using more imaging features might be superior to only using NWU. 
Moreover, the possible reason may rely on two aspects: (1) CTP was not used to locate the ischemic core; and (2) our data consisted of 33.6% (39 of 116) of patients with MCE, which was higher than that in a study by Broocks et al. (18.2%) (16). Our model also showed a higher AUC than that of EDEMA scores (AUC = 0.72) (10) and that of modified EDEMA scores by adding NIHSS scores for 478 Chinese patients (AUC = 0.80) (11). Radiomics leverages machine learning and quantitative imaging features to improve the prediction of clinical outcomes (18). For the prediction of MCE in our current study, radiomics worked well. By adding 13 imaging features from histogram analysis of voxel-wised IP-NWU maps, the AUC of the classifier by RF can increase from 0.86 to 0.96. This improvement may be due to the fact that: (1) multivariate analysis by machine learning has more predictive power than the univariate ROC analysis; and, (2) IP-NWU is only the mean of IP-NWU maps, and more measures from these maps represent the characteristics of the core lesions better. The lesion volume, texture features, penumbra pattern, and other high-level abstract features may be helpful and should be included in radiomics models of MCE predictions in the near future. All 13 imaging features extracted from histogram analysis were used without selection in our current study. Unlike the application of the PyRadiomics Python package in tumor imaging, more than 1,000 features are extracted (https:// pyradiomics.readthedocs.io/). Hence feature selection must be done to reduce overfitting. During feature selection, the importance of features can be obtained (42). Since we only used 13 imaging features, feature selection, and importance analysis were not undertaken in this study. The reason why the PyRadiomics Python package was not used to extract more than 1,000 features is that the voxel-wised IP-NWU map has a small size of 30 × 30 voxels. This map does not contain rich information such as the locations of tumor lesions. For the predictive radiomics model, there is a danger of overfitting if the number of features is very large. In our study, the largest number of features was 19 (IP-NWU, 13 imaging features, and 3 clinical features) and 116 samples or patients were included. According to a rule of thumb in radiomics, each feature requires 10 samples (18). Therefore, the overfitting in our study should be minimal. In the present study, RF performed better for MCE prediction than SVM and LR. For example, the model using RF and "NWU + Clinical + Imaging" had an AUC of 0.96, while the model using LR and SVM had an AUC of 0.81 and 0.84, respectively. RF usually performs best in situations where the output is highly sensitive to small changes in input (29). This may indicate that the prediction of MCE is highly sensitive to small changes in NWU and imaging features. Moreover, RF is one ensemble or consensus estimator, and thus has the merit of mitigating both underfitting and overfitting (29). Underfitting and overfitting may exist in the current study given the fact that the sample size was small. Adding the clinical information of age, gender, and NIHSS scores did not significantly improve the prediction of MCE, which agreed with another previous report (16). Limitations and Future Works Our study has some limitations that could provide direction to future studies. First, our study was retrospective and limited to one single center. The relevance of the resulting radiomics models is unknown for data from other hospitals. 
Second, the number of patients (116) was relatively small, which limits the statistical power of the study. Third, the selection of slices and patches was done by experts, making IP-NWU depend on expert conditions, such as preferences, experiences, and mood. To set image patches in the cortex orientated by ASPECTS regions may further improve the presented method. It is noted that the cortex regions with CSF should be avoided to eliminate the effect of CSF on NWU. This criterion may make some early infarcts in the cortex not represented in the image patches. Finally, we used three machine learning algorithms and manually designed histogram-based imaging features. A prospective and multi-center study with CTP and nonenhanced CT scans should be carried out in the near future, before the proposed IP-NWU and radiomics models are introduced as clinical applications. The automatic and machine learning based detection of early infarctions from non-contrastenhanced CT images should be used to help calculate NWU and predict MCE (43,44). Other features such as texture and high-level abstract representation can be included. As a state-ofthe-art example of deep learning, a deep convolutional neural network (DCNN) may help predict MCE directly according to the image patches or infarction regions (45)(46)(47). Given the fact that a stroke is a dynamic process, using both admission and follow-up CT images to characterize the temporal and spatial development of infarcts and edema volume may further improve the final prediction of clinical outcomes. CONCLUSION Net water uptake can be calculated based on mirrored patches that are selected by senior neuroradiologists from admission non-enhanced CT images that were scanned within 24 h after stroke onset using standard reference images. The resulting IP-NWU values showed a significant difference between patients with MCE and without MCE and thus it is an effective predictor of MCE. The inter-reader and interoperation agreement for IP-NWU are exceptional. Through integrating IP-NWU and other imaging features by machine learning, the radiomics models further improved the prediction of MCE. In summary, this study demonstrated the feasibility of predicting MCE using only admission non-enhanced CT images scanned within 24 h after onset, even without the aid of CT perfusion or follow-up CT scans. This will potentially help clinicians make decisions about performing a surgical decompressive craniectomy or employing other intensive monitoring to benefit stroke patients. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary materials, further inquiries can be directed to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Medical Ethics Committee of General Hospital of Northern Theater Command. Written informed consent for participation was not required for this study in accordance with the national legislation and the institutional requirements. AUTHOR CONTRIBUTIONS SQ, YK, YY, and HC designed and directed the study. BF, SQ, LT, HX, BY, and YD analyzed data. LT, HX, and HC recruited participants and acquired data. BY and YD reviewed the CT images and drew the image patches. BF, SQ, YK, and YY drafted the manuscript together. All authors revised and approved the final manuscript.
2020-12-23T14:14:49.257Z
2020-12-23T00:00:00.000
{ "year": 2020, "sha1": "749204dafe1beea88dacb19f99945b92c7245196", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fneur.2020.609747/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "749204dafe1beea88dacb19f99945b92c7245196", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18826192
pes2o/s2orc
v3-fos-license
Snapshot situation of oxidative degradation of the nervous system, kidney, and adrenal glands biomarkers-neuroprostane and dihomo-isoprostanes-urinary biomarkers from infancy to elderly adults We analyzed biomarkers of lipid peroxidation of the nervous system -F2-dihomo-isoprostanes, F3-neuroprostanes, and F4-neuroprostanes- in urine samples from 158 healthy volunteers ranging from 4 to 88 years old with the aim of analyzing possible associations between their excretion values and age (years). Ten biomarkers were screened in the urine samples by UHPLC-QqQ-MS/MS. Four F2-dihomo-isoprostanes (ent−7-(R)−7-F2t-dihomo-isoprostane, ent−7-epi−7-F2t-dihomo-isoprostane, 17-F2t-dihomo-isoprostane, 17-epi−17-F2t-dihomo-isoprostane), and one DPA-neuroprostane (4-F3t-neuroprostane) were detected in the samples. On the one hand, we found a significant, positive correlation (Rho: 0.197, P=0.015) between the age increase and the amount of total F2-dihomo-IsoPs. On the other hand, the values were significantly higher in the childhood group (4–12 years old), when compared to the adolescence group (13–17 years old) and the young adult group (18–35 years old). Surprisingly, no significant differences were found between the middle-aged adults (36–64 years old) and the elderly adults (65–88 years old). We display a snapshot situation of excretory values of oxidative stress biomarkers of the nervous system, using healthy volunteers representative of the different stages of human growth and development. The values reported in this study could be used as a basal or starting point in clinical interventions related to aging processes and/or pathologies associated with the nervous system. Introduction Biomarkers have been increasingly employed in empirical studies of human populations to understand physiological processes that change with age, diseases whose onset appears linked to age, and the aging process itself [1]. The free radical/oxidative stress theory of aging is the most popular explanation of how aging occurs at a molecular level in aerobic biological organisms [2]. This theory of aging consisted of agerelated biochemical and physiological decline associated with cumulative oxidative damage to cellular components and tissues, promoting oxidative stress (OS) and leading to lesser longevity [3]. Nowadays, this theory is controversial since there may be interventions independent of reactive oxygen species (ROS) that promote longevity without affecting ROS or OS [2,4]. Oxidative stress is widely accepted to be a perturbation in the balance of free radicals in a cell and the cell´s ability to cope with the change by means of its antioxidant defense mechanisms [5]. The balance between ROS production and antioxidant defenses determines the degree of OS according to Finkel and Holbrook [6]. Recently, it has been reported that mild stress stimulates endogenous defense systems, which will promote health, but if the stress becomes chronic or is too extensive, it induces cellular damage and/or aging and a shortening of lifespan [7]. The brain and nervous system are prone to OS and are inadequately equipped with antioxidant defense system, which can lead to persistently increased levels of ROS and reactive nitrogen species reacting with the various target molecules (proteins, lipids, and DNA) [8]. The lipids, especially polyunsaturated fatty acids (PUFAs), are vulnerable to oxidation by both enzymatic and non-enzymatic process. 
In humans, the products of lipid peroxidation have been accepted as toxic mediators, but they are also known to exert diverse biological effects [9]. Solberg et al. [10] mentioned that the determination of OS is complex and requires a quantification of the levels of free radicals or damaged biomolecules. The measurement of F2-isoprostanes (F2-IsoPs) by mass spectrometry has been extensively employed as a marker of oxidant stress and is widely considered to be the gold-standard index of lipid peroxidation in vivo. The measurement of free F2-IsoPs in plasma or urine can be utilized to assess the endogenous formation of IsoPs, but not to reveal the organ in which they are formed. In contrast, determining the levels of IsoPs in the cerebrospinal fluid, which reflects the ongoing metabolic activity of the brain, provides a great opportunity to reveal the occurrence of OS and lipid peroxidation in the brain. However, there are now some IsoPs-like compounds that might be regarded as markers of lipid peroxidation of the nervous system [11]. The F2-dihomo-isoprostanes (F2-dihomo-IsoPs), F3-neuroprostanes (F3-NeuroPs), and F4-neuroprostanes (F4-NeuroPs) are used to analyze the OS status of the nervous system in humans [11-13]. These biomarkers are formed by a free radical, non-enzymatic mechanism from adrenic acid (C22:4 n-6, AdA) [14], docosapentaenoic acid (C22:5 n-6, DPA) [15], and docosahexaenoic acid (C22:6 n-3, DHA) [16], respectively. While DHA is an essential constituent of nervous tissue, highly enriched in neurons, and highly prone to oxidation [17], F4-NeuroPs provide a specific quantification of the OS suffered by neural membranes in vivo [18], and F2-dihomo-IsoPs are potential markers of free radical damage to myelin in the human brain [14]. Recent efforts have focused on the assessment of F2-dihomo-IsoPs, F3-NeuroPs, and F4-NeuroPs as biomarkers in conditions associated with increased OS (particularly in disease conditions) and/or after dietary supplementation with antioxidants [11-13,19-21]. Despite their increasing clinical use, to the best of our knowledge the biological variation of these biomarkers in healthy people of different ages has not been reported yet. The ability to quantify these compounds in non-invasive samples like urine could shed light on the changes in excretion values of products of lipid peroxidation across a wide age range and may be useful for comparing the values detected in healthy individuals with those obtained from diseased individuals. Therefore, the aim of this cross-sectional study was to quantify biomarkers of lipid peroxidation in the nervous system (F2-dihomo-IsoPs, F3-NeuroPs, and F4-NeuroPs) in urine samples from healthy volunteers at different life stages (4-88 years), analyzing possible associations between their values and age intervals.

Study population

This study was conducted in accordance with the Helsinki declaration. Approval was obtained from the Bioethics Committee of the University Hospital of Murcia. The participants, insured at the Hospital Virgen de la Arrixaca (Murcia, Spain), were aged between 4 and 88 years and of both genders (n=158). Age was reported at the time of the household interview as the age (years) at the last birthday. The assignment of the age ranges was based on social aging processes (childhood, adolescence, young adulthood, middle-aged adults, and elderly adults) according to Settersten and Mayer [22].
The age categories used in our statistical analyses were 4-12 years (childhood, n=20; mean: 8.20 ± 2.50 years), 13-17 years (adolescence, n=14; mean: 15.73 ± 1.43 years), 18-35 years (young adulthood, n=45; mean: 27.62 ± 4.97 years), 36-64 years (middle-aged adults, n=58; mean: 49.12 ± 9.03 years), and 65-88 years (old age, n=21; mean: 75.61 ± 6.62 years). All adult volunteers (18-88 years old) signed the informed consent document. Volunteers under the age of 18 years had all attended a doctor's clinic for a routine check-up and a parent signed the informed consent document. Regarding the exclusion criteria, individuals with chronic diseases, individuals under drug treatments, and volunteers with overweight or obesity were excluded from the study. None of the subjects was a cigarette smoker, an alcoholic, or pregnant. All the participants underwent a clinical examination to confirm their health status. The health status of the participants was considered in the data analysis. The clinical parameters for determining the health status of the individuals are summarized in Supporting information 1.

Sample collection and preparation

A complete clinical analysis, consisting of hematology, chemistry, and urine chemical analysis, was performed on the volunteers. All samples (blood and urine) were collected by a nurse at the University Hospital Virgen de la Arrixaca from the subjects early in the morning and under fasting conditions. Blood samples at rest were obtained by venipuncture and were placed in different tubes according to the analytical procedures. The samples were processed within 1 h of collection and stored at −80°C for the analytical determinations. The hematological parameters were recorded using an automated hematological analyzer (Cell Dyn 3700 and 4000, Abbott, IL, USA) at the clinical analysis service of the University Hospital Virgen de la Arrixaca (Murcia, Spain). One milliliter of the 24-h urine was used for the analysis of the lipid peroxidation biomarkers. The metabolites were normalized as ng mg−1 creatinine and were assayed using the method described by Medina et al. [21]. The clinical parameter results of our volunteers (mean ± standard deviation (SD)) are summarized in Table 1.

UHPLC-QqQ-MS/MS analyses

The separation of NeuroPs and F2-dihomo-IsoPs in the urine samples was performed with an Ultra High-Performance Liquid Chromatography 6460 Triple Quadrupole tandem Mass Spectrometry system (Agilent Technologies, Waldbronn, Germany), using the setup previously described by Medina et al. [21]. Data acquisition and processing were performed using Mass Hunter software version B.04.00 (Agilent Technologies, Waldbronn, Germany). The identification and quantification of NeuroPs and F2-dihomo-IsoPs were carried out using the authentic markers described by Medina et al. [21].

Statistical analyses

Quantitative data are presented as mean ± SEM (standard error of the mean) or SD (Table 2). Concerning the study population, women and men were analyzed together because no difference between them was detected according to the Student's t-test (data not shown). The Kolmogorov-Smirnov test and the Shapiro-Wilk test were applied to assess the distribution of the data. Normality was not established, so nonparametric statistical tests were used for intergroup comparison. Comparison of the non-normally distributed groups was carried out using the non-parametric Kruskal-Wallis test.
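As a sketch of the nonparametric workflow described in this and the following paragraph, the snippet below screens each group for normality and applies the Kruskal-Wallis test across the five age groups; the group values are synthetic placeholders, not the study data.

```python
# Minimal sketch of the normality screening and Kruskal-Wallis comparison (assumed workflow).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder total F2-dihomo-IsoPs values (ng mg-1 creatinine) per age group
groups = {
    "childhood (4-12)":     rng.lognormal(2.4, 0.4, 20),
    "adolescence (13-17)":  rng.lognormal(2.1, 0.4, 14),
    "young adults (18-35)": rng.lognormal(2.1, 0.4, 45),
    "middle-aged (36-64)":  rng.lognormal(2.3, 0.4, 58),
    "elderly (65-88)":      rng.lognormal(2.5, 0.4, 21),
}

# Shapiro-Wilk per group: small p-values argue against normality
for name, values in groups.items():
    _, p_norm = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p_norm:.3f}")

# Kruskal-Wallis test of population medians across the five groups
h_stat, p_value = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.4f}")
```

The pairwise follow-up and the rank correlation with age described in the next paragraph map onto scipy.stats.mannwhitneyu and scipy.stats.spearmanr, with the Bonferroni-adjusted threshold of P < 0.005 applied to each pairwise comparison.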
In order to identify differences between the five groups, the Mann-Whitney U test was conducted with a Bonferroni correction, resulting in a significance level set at P < 0.005. Correlation between the variables (F2-dihomo-IsoPs) and age (years) was determined using Spearman's correlation. The statistical analyses were performed using the SPSS 21.0 software package (LEAD Technologies, Inc., Chicago, USA). The graphs were plotted using the Sigma Plot 12.0 software package (Systat Software, Inc., Sigma Plot for Windows). In order to provide a snapshot overview of the values of F2-dihomo-IsoPs in healthy volunteers, we studied the most representative stages of human growth and development. To determine whether there were statistically significant differences between the groups, a nonparametric analysis of population medians (Kruskal-Wallis test, P < 0.05) was performed for each individual analyte (Table 2), as well as for the total sum of the four F2-dihomo-IsoPs (Fig. 2). There was a statistically significant difference in 17-F2t-dihomo-IsoP, ent-7-(R)-7-F2t-dihomo-IsoP, ent-7-epi-7-F2t-dihomo-IsoP, and total F2-dihomo-IsoPs when comparing the five groups. Thus, having analyzed the five groups, we used the results for the sum of these four F2-dihomo-IsoPs, since significant P-values were found for most of the biomarkers. The total F2-dihomo-IsoPs ranged from ~8.02 ng mg−1 creatinine (age group of 13-17 years old) to ~12.28 ng mg−1 creatinine (age group of 65-88 years old) (Fig. 2). The Mann-Whitney U test showed that between the first stage (childhood) and the later stages of life (middle-aged adults and elderly adults) the F2-dihomo-IsoPs values did not differ statistically; however, it should be noted that the elderly volunteers showed higher values compared to the middle-aged adults. In the middle stages of life (adolescence and young adulthood), a significant decrease in the levels, compared to the first and last stages of life, was detected (P < 0.005) (Fig. 2).

Discussion

The measurement of OS biomarkers using a lipidomics approach has been useful for comparing values detected in a healthy population with those detected in populations affected by different pathologies [11-13,19-21]. To the best of our knowledge, this is the only study that evaluates F2-dihomo-IsoPs and F4-NeuroPs in a large population of healthy volunteers. The results obtained by the correlation tests showed a small but positive association between urinary F2-dihomo-IsoPs (total sum) and age (Fig. 1). Previously, in female volunteers (n=43), no statistically significant correlation (r=0.0841, P=0.40) between age and plasmatic values of F2-dihomo-IsoPs (ent-7-(R)-7-F2t-dihomo-IsoP and 17-F2t-dihomo-IsoP) was observed; the average age was 13.9 ± 6 years (range 1.5-32 years) and the values were 1.0 ± 0.11 pg mL−1 [19]. Our study, with a wider age range and using data from women and men, showed that age may be related to changes in the excretion of F2-dihomo-IsoPs. In addition, individually, ent-7-epi-7-F2t-dihomo-IsoP and ent-7-(R)-7-F2t-dihomo-IsoP also exhibited a slight correlation with age. VanRollins et al. [14] mentioned that AdA is the most important PUFA in white matter and suggested that the quantification of F2-dihomo-IsoPs could help to clarify the in vivo contribution of free radicals to myelin and axon damage in age-related white matter damage. Augmented levels of lipid peroxidation and myelin breakdown have been demonstrated in the myelin of older (normal) individuals, compared to younger ones.
Age-related myelin alterations are ubiquitous and the correlations between their frequency and impairments of cognition occur because the conduction velocity along the affected nerve fibers is reduced so that the normal timing sequences within neuronal circuits breakdown [26,27]. Previous studies also showed an age-related increase in lipid peroxidation in humans and other animals [28][29][30][31][32]. Besides being a specific component of myelin, AdA also is present in several organs and tissues, including the kidney and the adrenal gland [14,19]. A study in rats also showed evidence of age-related lipid peroxidation in adrenals [33]. Therefore, our results also may reflect a positive correlation between non-enzymatic oxidation within the kidney and adrenal glands and advanced age. The precise age of transition among the stages of life is heavily debated and this can vary from person to person [34]. In order to provide a snapshot overview of the values of F 2 -dihomo-IsoPs in healthy volunteers, we have studied the most representative stages of human growth and development. The growth from childhood into adolescence has been associated with biological changes and could be influenced by the endocrinal and biochemical conditions generated in the pre-pubertal period. Our results are consistent with those obtained by Tamura et al. [35] and Kaneko et al. [36]. These authors studied urinary biomarkers produced under OS from lipids, proteins, DNA, and carbohydrates, and underlined that younger subjects (under the age of 10) were more vulnerable to oxidation than adolescent subjects, since they grow up rapidly, activation of the immune system and are probably exposed to high concentrations of ROS and nitric oxide. When we compared the F 2 -dihomo-IsoPs values of the young and middle-aged adults, higher OS values in the latter were apparent (Fig. 2). In humans, [37] according to the free radical theory of aging, the inborn aging process produces changes that increase in an exponential manner with age, becoming the major risk factor for disease and death in humans after the age of 28 years in developed countries. In our study, the volunteers aged from 36 to 88 years were divided into two groups -middle-aged (36-64 years) and elderly adults (65-88 years) -that did not differ significantly although, the values provided an ascending trend concomitantly with the age as is observed in Fig. 2.B. In the ZENITH study, European free-living, healthy, older adults (70-88 years) did not appear to be exposed to acute OS [28]. Our results showed a similar slow decline in the antioxidant status of elderly, healthy, free-living adults, and this may be one of the reasons why the values obtained for our group of older adults did not differ significantly from those of the middle-aged group. Currently, it is not clear which levels should be considered "normal" and which represent a serious imbalance between ROS generation and antioxidant defense. Some scientists reported that OS is an adaptation in the aging process and is not so harmful [2,4]. But, if the damage accumulates throughout the entire lifespan, as a by-product of normal cellular processes or a consequence of inefficient repair systems, this could lead to the diseases associated with the elderly [31,38]. On the other hand, Soares et al. 
[39] highlighted that lifestyle (diet, environment, and other life-stage factors, such as physiological and/or metabolic processes) plays an important role in the accumulation of oxidative damage or in the increase of OS, apart from growing older. The F2-dihomo-IsoPs have not been used previously as a biomarker of aging; they have been associated mainly with diseases in the elderly, adolescents, and children. Therefore, the normal values in our adults might be useful in subsequent comparisons evaluating OS progression in groups of older individuals. Finally, regarding NeuroPs derived from DPA or DHA, these lipid metabolites were not detected in samples from healthy and sedentary volunteers. Only 4-F3t-NeuroP, derived from omega-6 DPA, was detected, in a few volunteers in the age range from 18 to 65 years, which covers young adults, middle-aged adults, and elderly volunteers (Table 2). The 4-epimer of 4-F3t-NeuroP was not detected in the urine samples, although its values may have been below the limits of detection (2.95 ng mL−1) and quantification (5.9 ng mL−1) [21], since each is 4-fold higher than the corresponding limit found for 4-F3t-NeuroP. The DPA-NeuroPs were synthesized with the aim of understanding their role in an ω3-depleted organism, as well as to broaden and deepen the study of neuronal OS [15]. In our study, the metabolites of DHA n-3 peroxidation (4(RS)-4-F4t-NeuroP, 4-F4t-NeuroP, 10-F4t-NeuroP, and 10-epi-10-F4t-NeuroP) were not detected in the urine samples, suggesting that their values may also have been below the limit of detection. The physiological role of lipid peroxidation is not fully understood; some authors have shown that lipid peroxidation in various organs, including the brain, increases with aging, and it is considered a risk factor for neurodegenerative diseases [9,31]. Thus, further investigation is needed regarding how or why age, or the process of aging, can influence the oxidative status of the central nervous system, and whether the excretion of oxidative products derived from DPA n-6 and DHA n-3 is scarce in the healthy population, according to age.

Conclusions

A significant, positive correlation was found between the increase in age (4-88 years) and the values of ent-7(R)-7-F2t-dihomo-IsoP, ent-7-epi-7-F2t-dihomo-IsoP, and total F2-dihomo-IsoPs. By dividing the participants into five groups according to social aging processes, we have been able to provide evidence for a decrease in total F2-dihomo-IsoPs during adolescence and the young-adult stage, when compared to childhood. The observation of a spurt in the total F2-dihomo-IsoPs values from the young adults to middle-aged adults is interesting, as is the fact that the elderly group did not show higher values than the middle-aged population. The findings of this work suggest that, in healthy and sedentary volunteers, the values of urinary F4-NeuroPs and F3-NeuroPs were not representative. We are conscious of the limitations inherent in performing such an analysis in healthy humans, owing to the wide heterogeneity in the expression of OS biomarkers as a consequence of aging. Nevertheless, this study in a matrix such as human urine represents a powerful approach to advance our knowledge of the role of OS in the central nervous system across a wide age range, and so it could serve as a baseline for future clinical studies in populations with different disorders.

Declarations of interest

The authors declare that they have no conflict of interest.
Funding

This work was partially funded by the "Fundación Séneca de la Región de Murcia" Grupo de Excelencia 19900/GERM/15.
2018-04-03T06:23:16.166Z
2017-01-12T00:00:00.000
{ "year": 2017, "sha1": "f0b65baf1f18109af954e741fa49247074514227", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.redox.2017.01.008", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f0b65baf1f18109af954e741fa49247074514227", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244525278
pes2o/s2orc
v3-fos-license
The role of engagement among farmers in developing farming knowledge: evidence from northern Thailand

Several studies have found that science-based knowledge reaches only a small fraction of its intended recipients. To compensate for the lack of formal sources, farmers often rely on informal sources of knowledge within their farming community. This study investigates the role of farmers' social engagement in developing farming knowledge and in farmers' decision-making. Structural equation modelling was used to test the hypothesised moderating role of social engagement among farmers on the effects of service access, training, and knowledge-sharing on farming knowledge. The study used the case of rice farmers in Northern Thailand, where a focus group discussion and a series of survey interviews were conducted. The results show that social interactions within farming communities significantly moderated the effects of training and knowledge-sharing. The findings support the critical role of social engagement among farmers in increasing information flow and experiential knowledge exchange in developing farming knowledge. Furthermore, social interactions promote farming innovation and management practices through advice-seeking from other farmers. Hence, in supporting farming sustainability, extension support should also focus on network building among actors within the farming community and on understanding how farmers exchange experiential knowledge to compensate for the lack of formal sources of knowledge.

Introduction

Innovation in agriculture is essential in driving Thailand towards a sufficiency economy. The philosophy of the sufficiency economy serves as the central policy towards sustainable agricultural development of the country [1]. This drives the introduction of new farming technologies and improved practices such as precision farming, bio-pesticides, organic fertiliser, and crop diversification. Access to these innovations can equip farmers with the necessary knowledge and skills to attain the country's agricultural development goal. However, science-based information from research investment has reached only a small fraction of its intended recipients and achieved less than the expected impact [2,3]. In addition, the adoption rate of farming technologies and improved farming practices is lower than the targeted threshold. Nonetheless, several studies consider that farmers' social networks hold an essential role in diffusing knowledge and boosting the adoption rate of improved agricultural technology within farming communities [4,5]. The flow of information within and across farmers' social networks increases the potential adaptive capacity of farming communities [4,6]. For example, farmers have been found to exchange experiential knowledge among themselves during group meetings and assemblies held to discuss farming-related issues. Furthermore, typical daily interactions, and even friendly visits to each other's farms as a form of informal peer-to-peer advice, facilitate information flows. In addition, Isaac et al. [3] found that farmers with less access to formal sources of knowledge adopt farming technologies under the influence of farmers with greater access to formal knowledge. This illustrates how farmers with access to information can serve as a link bridging formal and informal knowledge. Researchers have recognised the importance of the social network in developing knowledge and skills among farmers.
This is evidenced by the growth of research related to social network analysis applied in agricultural research to understand its role in developing farming knowledge [2][3][4][5][7,8]. This study contributes to the literature by looking at the possible moderating effects of social engagement among farmers on developing farming knowledge. The social network of farming communities allows higher interaction and engagement, wherein information exchange occurs [4]. This facilitates knowledge diffusion and provides opportunities for farmers with less access to information and farming technologies. Thus, we hypothesised that social engagement among farmers moderates the effects of service access, training, and knowledge-sharing on farming knowledge. The remainder of the paper proceeds as follows. Section 2 discusses the methodology, while section 3 presents the measurement model's assessment and discusses the moderating role of social engagement. Lastly, section 4 concludes the paper.

Study area and data collection

The study used the case of rice farmers in Chiang Rai Province in Northern Thailand. We conducted a focus group discussion with farmers and group leaders. Sampled farmers included in the study were identified through stratified random sampling. After data processing, a total of 304 farmers were interviewed in the study. Farmers in the study area rely on irrigation as a supplement during the rainy season (October-December) and to enable rice production during the dry season (March-May). In addition, livelihood activities in the area are agriculturally based. Most farmers have more than thirty years of farming experience, indicating that there is already an established form of social networking within the farming community. The focus group discussion found consensus among farmers that their primary sources of information include other farmers. Farmers shared that they could discuss farming issues during assemblies and group meetings. At the same time, farmers also imitate productive neighbouring farms by adopting their farm practices. This indicates the presence of knowledge diffusion and depicts the critical role of networking among farmers in the adoption of improved farming practices and technologies.

Data analysis and measurement

The study used multiple measurement items on a seven-point Likert scale as indicators of the constructs outlined in Table 1. The analysis used confirmatory factor analysis (CFA) to assess the measurement model and evaluate construct reliability, convergent validity, and discriminant validity. Afterwards, structural equation modelling (SEM) was used to test the model's hypothesised relationships. In testing the moderating effects of the farmers' social engagement, variable interaction terms were developed by multiplying the composite constructs of service access, training, and knowledge-sharing with the composite construct of social engagement. The SEM and CFA analyses were performed using the R statistical software with the lavaan package by Rosseel [9], while the semTools package by Jorgensen et al. [10] was used to calculate the variable interaction terms for the structural model. The construct descriptions from Table 1, each scored on a seven-point scale, are as follows. Knowledge-sharing relates to the activities of the interpersonal network that facilitate learning and information sharing, such as group discussion, farmer-to-farmer contact, and peer-to-peer advice. Social engagement (scored from 1 = Never to 7 = Very frequently) represents a set of connections among farmers' networks where information flows [2].
We measure this construct based on how frequently farmers associate with other farmers, social workers, local government units, communities, and groups or institutions. Farming knowledge (scored from 1 = Not confident at all to 7 = Very confident) relates to the experiential knowledge of farmers; we measure it based on how confident farmers are in performing activities related to soil management, pests and diseases, post-harvest handling, marketing, and other farming activities.

Measurement evaluation

The fit of the measurement model was assessed using the goodness-of-fit index (0.901), the normed fit index (0.903), the comparative fit index (0.928), and the root-mean-square error of approximation (0.033). Based on these criteria, the model shows an acceptable and adequate fit. Table 2 shows the results of the reliability and validity tests. Cronbach's alpha represents factor reliability, indicating the internal consistency of the measurement model, while average variance extracted (AVE) values above 0.50 indicate convergent validity. Discriminant validity is achieved when the square root of the AVE exceeds the factor correlations. The validity and reliability results suggest that all necessary conditions were met.

Service access, training and knowledge sharing effects on farming knowledge

The study estimated two structural models, one without moderating effects (Figure 1) and one with the moderating effect of social engagement (Figure 2), while controlling for age, sex, and farming experience. This section first presents the direct effects of service access, training, and knowledge-sharing on farming knowledge; the next section discusses the moderating role of social engagement. The study found service access not to be a statistically significant factor in developing knowledge. Service access in this study refers to access to basic services such as transport, community health care, education, and social welfare. Therefore, we assume that service access more likely affects farming knowledge indirectly rather than directly, and that other mediating or moderating factors could play a significant role. The results also show that training is not statistically significant in explaining variation in farmers' knowledge. Several studies have likewise found that training alone does not effectively deliver the expected enhancement in farmers' production and livelihood [7,11]. This outcome may have resulted from most agricultural extension projects viewing farmers as end-users and adopters of technology rather than as partners in the process. In the focus group discussion, farmers attributed their hesitance to adopt newly developed technologies to the perceived risk of moving away from their traditional farm practices. However, risk-averse farmers expressed willingness to adopt if other farmers (risk takers) started adopting and showed promising outcomes. This reflects how farmers within a network influence each other's decision-making. The effect of knowledge-sharing, on the other hand, is straightforward: the results show that knowledge-sharing has a significant direct effect on farming knowledge among the sampled farmers. Furthermore, since the social network within a farming community is highly associated with information exchange, interactions that facilitate knowledge-sharing induce learning that improves farmers' knowledge.
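Before turning to the moderation results, the interaction-term construction described in the data analysis subsection can be illustrated with a small sketch. The original analysis was carried out in R with lavaan and semTools; the Python fragment below is only a simplified stand-in that forms composite scores from the Likert items, mean-centres them, builds the product (moderation) terms, and fits an ordinary regression with the controls mentioned above. All item and column names are hypothetical placeholders.

```python
# Simplified illustration of building moderation (interaction) terms from
# composite constructs; the actual study used SEM in R (lavaan + semTools).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical Likert items assumed to measure each construct.
ITEMS = {
    "service_access": ["sa1", "sa2", "sa3"],
    "training": ["tr1", "tr2", "tr3"],
    "knowledge_sharing": ["ks1", "ks2", "ks3"],
    "social_engagement": ["se1", "se2", "se3"],
    "farming_knowledge": ["fk1", "fk2", "fk3"],
}

def fit_moderation_model(df: pd.DataFrame):
    data = pd.DataFrame(index=df.index)
    # Composite score for each construct = mean of its items, mean-centred
    # so that the product terms are easier to interpret.
    for construct, items in ITEMS.items():
        composite = df[items].mean(axis=1)
        data[construct] = composite - composite.mean()

    # Interaction terms: composite construct x social engagement.
    for construct in ("service_access", "training", "knowledge_sharing"):
        data[f"{construct}_x_engagement"] = (
            data[construct] * data["social_engagement"]
        )

    # Controls used in the structural models (age, sex, farming experience).
    data[["age", "sex", "experience"]] = df[["age", "sex", "experience"]]

    formula = (
        "farming_knowledge ~ service_access + training + knowledge_sharing"
        " + social_engagement"
        " + service_access_x_engagement + training_x_engagement"
        " + knowledge_sharing_x_engagement"
        " + age + sex + experience"
    )
    return smf.ols(formula, data=data).fit()
```

In the SEM setting the same idea is applied to the composite constructs, and a significant product term corresponds to the moderating paths discussed in the next section.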
Moderating effects of farmer's social engagement on farming knowledge

The role of social engagement in the information flow within the farming community is considered a potential moderating factor in the model, as illustrated in Figure 2. The structural model results show that the effects of training and knowledge-sharing are moderated by social engagement, whereas service access remains statistically not significant. Social engagement statistically moderated the effect of training on farming knowledge. This indicates that knowledge and information transferred by researchers to farmers through training have a greater impact when information flows within farmers' networks. The study also found that farmers with higher social engagement attended more training, suggesting that social networks play a role in facilitating greater access to sources of information. In Ghana, Isaac et al. [3] observed that farmers with access to training and formal sources of knowledge were able to transfer the acquired knowledge to other farmers within their network. At the same time, spillover effects, wherein farmers often imitate successful farmers, could also amplify the impact of training and facilitate technology transfer [7]. For example, Wood et al. [12] found in their social network analysis that information branches out from farmers who have close contact with a group of agricultural researchers to non-participating farmers who have no such direct contact. As mentioned previously, knowledge-sharing has a significant direct effect, while Figure 2 shows that social engagement has a significant moderating effect. Both the direct sharing of information and the extraction of information embedded within farmers' social networks through social engagement are positively associated with farming knowledge. During the focus group discussion, farmers revealed that they often talk about and share their farming practices. Farmers seek learning they can use on their farms; this shows how much farmers value knowledge exchange in developing their farming knowledge. For instance, Cadger et al. [8] found that the influence of an agricultural intervention is not limited to participants exposed to the extension program: non-participants who have contact with the participants adopt the introduced intervention as well. Knowledge diffusion, in this case, was facilitated by the social interaction of farmers within their farming network. Overall, social engagement among farmers facilitates information exchange, whether from formal or informal sources, via networking within the farming community. Although the study found significant moderating effects of farmers' social engagement on training and knowledge-sharing, the results must be taken cautiously. There is a high possibility of endogeneity that would likely cause bias and for which the study does not account. The reader is referred to the work of Manski [12], which provides a rigorous discussion of possible sources of endogeneity. On the other hand, this challenge is expected in a non-experimental study such as social science research. Addressing this potential problem is beyond the objective of the study, which aims to understand the potential relationships of the selected factors and the potential moderating effect of social engagement among farmers on their farming knowledge.

Conclusions

Several studies use social network analysis to understand the role of farmers' social engagement within their farming community in knowledge diffusion.
In this study, we contribute by investigating the potential moderating role of social engagement on the effects of training, service access and knowledge sharing on farming knowledge. The study results show that farmer's social engagement moderated the effects of training and knowledge-sharing activities while statistically not significant on farmer's access to social services. As agriculture constitutes an important role, especially in the rural livelihood in Thailand, the study found that informal networks such as social ties within the farming community hold a potential role in promoting innovations. Introducing newly developed agricultural technologies should be complemented with the promotion of higher community involvement to promote interaction and social exchange. This would greatly facilitate the transfer of information and strengthen pre-existing knowledge shared by farmers within their farming network. The increased flow of information exchange driven by higher social engagement among farmers supports the foundation of a community-based approach in agricultural development.
2021-11-24T20:07:19.117Z
2021-11-01T00:00:00.000
{ "year": 2021, "sha1": "d06a84a4416efad6d6d411b6540e99afea6eaacc", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/892/1/012043", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "d06a84a4416efad6d6d411b6540e99afea6eaacc", "s2fieldsofstudy": [ "Economics" ], "extfieldsofstudy": [ "Physics" ] }
264853479
pes2o/s2orc
v3-fos-license
Birationally rigid Fano varieties We give a brief survey of the concept of birational rigidity, from its origins in the two-dimensional birational geometry, to its current state. The main ingredients of the method of maximal singularities are discussed. The principal results of the theory of birational rigidity of higher-dimensional Fano varieties and fibrations are given and certain natural conjectures are formulated. Introduction This paper is based on the talk given by the author at the Fano conference in Turin.The aim of the paper is to give a brief survey of the concept of birational rigidity which nowadays is getting the status of one of the crucial concepts of higherdimensional birational geometry.The Fano conference both by definition and its actual realization had a natural historical aspect.Therefore it seems most appropriate to review the story of birational rigidity, presenting the principal events in their real succession, from the first cautious steps made in XIX century to the modern rapid development.In this story Gino Fano himself played a prominent part: birational rigidity was one of his most important foresights. For about fifty years Fano was the only mathematician in the world engaged in the field that in his time was a real terra incognita, three-dimensional birational geometry of algebraic varieties which are now called Fano varieties.He had a program of his own and he did his best to realize it, see [F1-F3].On this way he discovered a lot of fascinating geometric constructions, found new approaches to investigating extremely deep and challenging problems, made a terrific amount of computations and guessed certain fundamental facts.He never completed his program.But even realization of a part of it took about thirty years of hard work of his successors, equipped with incomparably stronger techniques. The author is grateful to the Organizing Committee of the conference, namely, to Prof. Alberto Conte, Prof. Alberto Collino and Dr. Marina Marchisio for the invitation to give this talk and for making such a wonderful conference. The talk was prepared during the author's stay at the University of Bayreuth as a Humboldt Research Fellow.The author thanks Alexander von Humboldt Foundation for the financial support and the University of Bayreuth for the hospitality.I am especially thankful to Prof. Th.Peternell. The paper is an enlarged version of the talk, where some details and an explanation of the technique of hypertangent divisors, making the result of the paper [P] slightly stronger, are added.The present paper was written during my stay at Max-Planck-Institut für Mathematik in Bonn.I am very grateful to the Institute for the financial support, stimulating atmosphere and hospitality. The Noether theorem In [N] Max Noether published his famous theorem on the (two-dimensional) Cremona group: the group of birational self-maps of the (complex) projective plane Bir P 2 = Cr P 2 = Aut C(s, t) = {χ: P 2 − − → P 2 } is generated by the group of projective automorphisms Aut P 2 and a single quadratic Cremona transformation which in a suitable coordinate system takes the form τ : (x 0 : x 1 : x 2 ) → (x 1 x 2 : x 0 x 2 : x 0 x 1 ). 
(1) His argument went as follows.Take an arbitrary birational self-map χ of P 2 and consider the strict transform of the linear system of lines via χ −1 : The moving (that is, free from fixed components) linear system |χ| becomes naturally the main subject of study.One has the following obvious alternative: • either n = 1, in which case χ ∈ Aut P 2 is regular, • or n ≥ 2, in which case χ is a birational map in the proper sense; in particular, the linear system |χ| has base points. Assume that the second case holds.Since the curves in the linear system |χ| are rational and the free intersection (that is, the intersection outside the base locus) is equal to one, one can deduce that there exist at least three distinct points o 1 , o 2 , o 3 of the linear system |χ| satisfying the Noether inequality: If all three points o i lie on P 2 (that is, there are no infinitely near points among them), then take the standard Cremona transformation τ (1) where o 1 , o 2 , o 3 are assumed to be the points (1, 0, 0), (0, 1, 0, ) and (0, 0, 1), respectively.It is easy to compute that the linear system |χ • τ | (which is the strict transform of |χ| via τ , or the strict transform of the linear system of lines on P 2 via the composition χ • τ ) is a moving linear system of plane curves of degree In other words, taking the composition with a quadratic transformation (which are all conjugate with each other by a projective automorphism), one can decrease the degree n ≥ 1.Thus (assuming that at each step there are no infinitely near points among o i 's) we get a decomposition of χ into a product of quadratic transformations: where τ is the standard involution (1) and α, α i are all projective automorphisms. There is an immense literature on the Noether theorem and Cremona transformations, see, for instance [H] (the book is to be soon re-published with an explanatory introduction written by V.A. Iskovskikh and M.Reid).Here we are interested only in the principal ingredients of Noether's argument.These are: • the invariant n ≥ 1 (the degree of curves in the linear system |χ|), • "maximal" triples of base points (that is, the triples satisfying the Noether inequality (1)), • the "untwisting" procedure (decreasing n and thus "simplifying" χ). Remark.One can well imagine that the untwisting procedure may not be uniquely determined.This is the case, when there are more than one "maximal" triples, so that we can decrease the degree n taking the composition with various quadratic involutions.This naturally leads to relations between the generators of Bir P 2 .They were first described by M.Gizatullin in [G].Later the argument was radically simplified by V.A. Iskovskikh [I4]. Fano's work At the beginning of the XXth century Fano started his work in three-dimensional birational geometry.He was an absolute pioneer in the field.His work lasted for about 50 years and, apart from its great mathematical value, presents an example of an equally great courage and inner strength. 
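Since Fano's program was modelled on Noether's planar argument, it is convenient to record here the two displayed relations from the previous section, whose formulas appear to have been lost during extraction. In the standard form found in the literature (a reconstruction, not a quotation of the original), with ν_i = mult_{o_i}|χ| the multiplicities of the mobile system at the three maximal base points, they read:

```latex
% Noether inequality for a mobile plane linear system |\chi| of degree n >= 2,
% with \nu_i = \mult_{o_i}|\chi| the multiplicities at the three maximal base points:
\nu_1 + \nu_2 + \nu_3 > n,
% and the degree of the strict transform under the quadratic map \tau
% centred at o_1, o_2, o_3:
n' = 2n - \nu_1 - \nu_2 - \nu_3 < n.
```

The second relation is exactly what makes composition with the standard quadratic involution an untwisting step: whenever the Noether inequality holds, the degree strictly drops.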
Fano started with an attempt to reproduce Noether's argument in dimension three.His first object of study was the famous three-dimensional quartic V = V 4 ⊂ P 4 .His investigation went as follows.Take a birational self-map χ ∈ Bir V and look at the strict transform of the linear system of hyperplane sections of V via χ −1 : Now, similar to the two-dimensional case, we get an obvious alternative: • either n = 1, in which case χ ∈ Aut V is regular, • or n ≥ 2, in which case χ is a birational map in the proper sense; in particular, the linear system |χ| has a non-empty base locus. According to the scheme of Noether's arguments, the next step to be made is finding a subscheme of high multiplicity in the base scheme of the linear system |χ|.And indeed, Fano asserted that if n ≥ 2, then one of the following possibilities holds: • there exists a point x ∈ V such that • something similar happens, reminding of the unpleasant infinitely near case of the Noether theorem.Here Fano does not give any formal description, just presents an example of what can take place: there is a point x ∈ V and an infinitely near line L ⊂ E ∼ = P 2 , where is the blowing up of x with the exceptional divisor E, such that where |χ| is the strict transform of the linear system |χ| on V .Now Fano gives certain arguments, some of which are true and some not, showing that these cases are impossible.He concludes that the n ≥ 2 case does not realize and therefore n = 1 is the only possible case.Thus Later Fano studied several other types of three-folds.One of his most impressive claims concerns the complete intersection of a quadric and a cubic in P 5 .Starting as above, is the class of a hyperplane section, Pic V = ZH), Fano discovers that for certain special subvarieties his inequality (2) can be satisfied.This is the case, when B = L ⊂ V is a line.It is easy to see that the projection V − − → P 3 from the line L is a dominant rational map of degree 2. Thus there exists a Galois involution τ L ∈ Bir V, permuting points in a general fiber.One computes easily that which gives the necessary "untwisting" procedure for χ.The analogy to Noether's arguments is now complete.Let us once again look at the general scheme of Fano's arguments.They consist of the following components: • existence of "maximal" curves or points (or something similar), satisfying the Fano inequalities (2,3), • either excluding or untwisting the maximal curve or point found at the previous step (decreasing n and "simplifying" χ). It should be added that the untwisting procedure is not always uniquely determined, that is, Fano inequalities are sometimes satisfied for a few various subvarieties, e.g. two different lines on V 2•3 (when the corresponding plane is contained in the quadric F 2 ).This naturally leads to relations between generators of Bir V .For V 2•3 they were described by V.A.Iskovskikh around 1975, see [I3] and a detailed exposition in [IP]. 
The theorem of V.A.Iskovskikh and Yu.I.Manin The modern birational geometry of three-dimensional varieties started in 1970 with two major breakthroughs: the theorem of H.Clemens and Ph.Griffiths on the threedimensional cubic [CG] and the theorem of V.A.Iskovskikh and Yu.I.Manin on the three-dimensional quartic [IM].In the latter paper Fano's ideas were developed into a rigorous and powerful theory, which made it possible to begin a systematic study of explicit birational geometry of three-folds.This success was to a considerable degree prepared by the earlier papers of Yu.I.Manin on surfaces over non-closed fields: in [M1, M2] all the principal technical components of the method of maximal singularities were already present, including the crucial construction of the graph, associated with a finite sequence of blow ups. Let us reproduce briefly the arguments of [IM].Fix a smooth quartic V = V 4 ⊂ P 4 and consider, as usual, the strict transform of the linear system of hyperplane sections with respect to χ −1 , where χ: V − − → V ′ 4 is a birational map onto another smooth quartic: As usual, we come to the familiar alternative: • or n ≥ 2, in which case χ is a birational map in the proper sense; in particular, the base subscheme of the linear system |χ| is non-empty. Proposition 3.1.There exists a geometric discrete valuation ν on V (here "geometric" means "realizable by a prime divisor E on some model V of the field C(V )") such that the inequality holds, where ν(|χ|) = ν(D) for a general divisor D ∈ |χ|. Now [IM] shows that a moving linear system |χ| on V cannot have a maximal singularity.The hardest case is when the centre of the maximal singularity ν is a point: centre(E) = x ∈ V .In this case take two general divisors D 1 , D 2 ∈ |χ| and consider the cycle of scheme-theoretic intersection It is an effective curve on V .The crucial fact is given by Proposition 3.2.The following inequality holds Since deg Z = 4n 2 , this gives a contradiction.Thus we obtain Theorem 3.1.[IM] Any birational map between smooth three-dimensional quartics is a projective isomorphism. However this very argument gives immediately a much stronger claim!For instance, let us describe birational maps from V to conic bundles V ′ /S ′ (or, in a slightly different language, describe the conic bundle structures on V ).Let us construct a moving linear system Σ ′ on V ′ in the following way: Then it is easy to prove that a maximal singularity exists always, now irrespective of the value of n ≥ 1.However, we know that existence of a maximal singularity leads to a contradiction.Therefore, the birational map χ simply cannot exist!What was actually proved in [IM], can be formulated as follows: a smooth three-dimensional quartic V ⊂ P 4 • cannot be fibered into rational curves by a rational map, • cannot be fibered into rational surfaces by a rational map, Speaking the modern language, we express all this by saying that the quartic is birationally superrigid. The method of maximal singularities In all the procedures described above a certain integral parameter was involved -namely, the "degree" n of the linear system Σ, defining the birational map χ under consideration.All the above examples dealt with Fano varieties V such that Pic V ∼ = Z, so that the integer n meant just the class of Σ in Pic V .However, this extremely important number has a more general invariant meaning, which we describe now. 
Let X be a uniruled Q-Gorenstein variety with terminal singularities.This assumption implies that the canonical class K X is negative on some family of (generically irreducible) curves sweeping out X. Therefore for any divisor D the following number is finite: It is called the threshold of canonical adjunction of the divisor D. Sometimes we omit X and write simply c(D) or c(Σ) for D ∈ Σ moving in a linear system.Now we can describe the general scheme of the method of maximal singularities.Let us fix a uniruled variety V with Q-factorial terminal singularities and another variety V ′ in this class.Let us assume that V ′ is birational to V .The aim of the method is to give a complete description of all possible birational correspondences between V and V ′ . We start as usual with the following diagram: The linear system Σ ′ is fixed throughout the whole argument.The thresholds are taken with respect to the varieties V , V ′ .The ? sign means that we do not know, which inequality is true: "≤" or ">".Now we get the alternative: • either ? is ≤, in which case we stop.It is presumed that when we have the inequality then "we can say everything" about the map χ.In real life, sometimes this is the case, sometimes not.But in any case this inequality completely reduces the birational problem to a biregular one, since the family of linear systems Σ is bounded (in many cases (5) implies that it is actually empty or Σ is unique, as above). • or ? is >, in which case we proceed further as follows. Proposition 4.1.There exists a geometric discrete valuation ν = ν E on V such that the inequality holds. The inequality ( 6) is called the Noether-Fano inequality.The discrete valuation ν is called a maximal singularity of the linear system Σ. The word "geometric" means, as we have mentioned above, that there exists a birational morphism ϕ: V → V with V smooth such that ν = ν E for some prime divisor E ⊂ V .In ( 6) ν E (Σ) means the multiplicity of a general divisor D ∈ Σ at E and a(E) means the discrepancy of E. Now for each geometric discrete valuation E the following work should be performed: • either E can be excluded as a possible maximal singularity: there is no moving linear system Σ satisfying the Noether-Fano inequality ( 6) for E, • or E should be untwisted.The untwisting means that we find a birational self-map χ * E ∈ Bir V such that we get the following diagram: , we go back to the beginning of the procedure (compare c(Σ * ) and c(Σ ′ ) and so on). Birationally rigid varieties Roughly speaking V is said to be birationally rigid, if the above-described procedure works for V , that is, in a finite number of steps we obtain the desired inequality (5). Definition 5.1.V is said to be birationally rigid, if for any V ′ , any birational map χ: V − − → V ′ and any moving linear system Σ ′ on V ′ there exists a birational self-map χ * ∈ Bir V such that the following diagram holds: The birational self-map χ * is composition of elementary untwisting maps described above: However, it turns out that for many (hopefully, for "majority" of) Fano varieties the untwisting procedure is trivial. 
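Two displayed formulas announced earlier in this section, the definition of the threshold of canonical adjunction and the Noether-Fano inequality of Proposition 4.1, also appear to have been dropped during extraction. In one standard form used in the birational rigidity literature (a reconstruction under the classical normalization Pic V = ZK_V, Σ ⊂ |−nK_V|, rather than a quotation of the original) they read:

```latex
% Threshold of canonical adjunction of a divisor D on a uniruled
% Q-Gorenstein variety X (one standard form of the definition):
c(D, X) \;=\; \sup\Big\{\, \tfrac{b}{a} \;\Big|\; a, b \in \mathbb{Z}_{>0},\;
            |\, aD + bK_X \,| \neq \emptyset \,\Big\}.

% Noether-Fano inequality (classical normalization): if Pic V = Z K_V,
% the mobile system \Sigma lies in |-nK_V|, so that c(\Sigma, V) = n, and
% c(\Sigma, V) > c(\Sigma', V'), then some geometric discrete valuation E satisfies
\nu_E(\Sigma) \;>\; n \cdot a(E),
% where a(E) is the discrepancy of E with respect to V.
```

With this normalization, for Σ′ the anticanonical system on a Fano variety of index one the hypothesis c(Σ, V) > c(Σ′, V′) reduces to the familiar condition n ≥ 2 used in the quartic argument of Section 3.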
Definition 5.2.V is said to be birationally superrigid, if for any V ′ , any birational map V − − → V ′ and any moving linear system Σ ′ on V ′ the following diagram holds: In other words, a birationally rigid variety is superrigid, if we may always take χ * = id V .In the sense of the given definitions, the smooth quartic V 4 ⊂ P 4 is superrigid and the smooth complete intersection V 2•3 ⊂ P 5 is rigid (for the latter case, the proof has been so far produced for a generic member of the family only, see [I3,P2,IP]). Immediate geometric implications of birational (super)rigidity, which actually determined the very choice of this word combination, are collected below. Proposition 5.1.Let V be a smooth Fano variety with Pic V = ZK V .If V is birationally rigid, then • V cannot be fibered into rationally connected (or uniruled) varieties by a nontrivial rational map; that is, the following diagram is impossible (although χ itself may be not a biregular map).If, moreover, V is superrigid, then χ itself is an isomorphism.In particular, in the superrigid case the groups of birational and biregular self-maps coincide, Conversely, if V is rigid and ( 7) holds, then by definition of rigidity it is clear that V is superrigid. holds for any point x ∈ , where and mult Y Σ means multiplicity of a general divisor D ∈ Σ along Y . Then the variety X is birationally superrigid. The strongest technique which makes it possible to check the (principal) condition (ii) of this criterion is that of hypertangent divisors.Although at the moment an alternative method, based on the connectedness principle of Shokurov and Kollár [Sh,K] (suggested by Corti [C2] and later used in [CM] and [P10]), gains momentum, the older argument by hypertangent divisors is still working better.For the reader to get the idea of this technique, we give here a proof for Fano hypersurfaces V = V M ⊂ P M .Birational superrigidity of any smooth hypersurface has already been proved in [P10].Nevertheless we give here an argument which is based on the paper [P5], slightly sharpening the result. Proposition 5.2.Let x ∈ V = V M ⊂ P M be a point such that there are but finitely many lines L ⊂ V passing through x.Then for any irreducible subvariety Y ⊂ V of codimension two the estimate holds. Proof.Let (z 1 , . . ., z M ) be a system of affine coordinates on P = P M with the origin at x. Write down the equation of the hypersurface V : where q i are homogeneous of degree i in z * .Note that the lines through x on the hypersurface V are given by the system of equations Therefore the set (10) of common zeros is of dimension at most one.Denote by is the same as ( 10), therefore it is of dimension at most one.This implies, in its turn, that the algebraic set the affine part of the hypersurface V is also of dimension at most one: schemetheoretically it is the same as (10), supported on the union of lines on V through x. Let us look at the divisors where dim x means the dimension in a neighborhood of the point x.It is easy to see that for a given subvariety Y ∋ x of codimension two there is a set of (M − 4) divisors Now let us order the set I somehow, so that It is easy to construct by induction on i ∈ {0, . . ., M − 4} the sequence of irreducible subvarieties ) is an effective algebraic cycle on V , Y i+1 is one of its irreducible components; • the following estimate holds: (If the set I = {4, . . 
., M − 1}, then the estimate is better: we take the worst possible case.)Making the obvious cancellations and taking into consideration that the left-hand side of (11) cannot exceed 1, we obtain the desired estimate (9). Remark.When we define informally birationally rigid varieties as those for which the method of maximal singularities works, one may naturally ask, what happens if it does not.There is an answer in dimension three.It is given by the Sarkisov program, developed by V.G.Sarkisov (see [S3]) and completely proved by Corti in [C1].The answer is, that when it is possible neither to exclude nor to untwist a maximal singularity, it should be eliminated by a link to another Mori fiber space.We do not touch these points in the present paper.See [S3,R,C1,C2,CR,CM] for the details. Singular Fano varieties Up to this moment, all our examples dealt with smooth Fano varieties.Here we give the most interesting cases of birationally rigid Fano varieties with isolated terminal singularities.The oldest example of a birationally rigid singular Fano 3-fold is the three-dimensional quartic with a unique non-degenerate double point [P1]. Let x ∈ V = V 4 ⊂ P 4 be the singularity.There are 24 lines on V passing through x; denote them by L 1 , . . .L 24 .With the point x ∈ V a birational involution τ ∈ Bir V is naturally associated: the projection from x V ⊂ P 4 rational map | | rational map of degree 2 | | with the fiber P 1 ↓ ↓ P 3 = P 3 determines the Galois involution τ of V over P 3 . Let L = L i be a line through x.Look at the projecttion from the line L: V ⊂ P 4 rational map | | rational map with the fibers -| | with the fiber P 2 cubic curves ↓ ↓ P 2 = P 2 .Thus V is fibered over P 2 into elliptic curves.Since V is singular at x, all the cubic curves pass through x, which means that the fibration V /P 2 has a section.Taking for the zero of a group law on a generic fiber, we get the birational involution the reflection from zero on a generic fiber.Set τ 0 = τ .Let B(V ) be the subgroup of Bir V , generated by the involutions τ 0 , τ 1 , . . ., τ 25 .Theorem 6.1.[P1] (i) V is birationally rigid. (ii) The group B(V ) is the free product of 25 cyclic subgroups τ i = Z/2Z, where i = 0, 1, . . ., 25: (iii) The subgroup B(V ) ⊂ Bir V is normal and the following exact sequence takes place: First proved in [P1], this theorem was later discussed by Corti in [C2], where due to an application of the connectedness principle of Shokurov and Kollár [Sh,K] the proof was simplified.The further study of singular quartics was performed in [CM]. Theorem 6.1 can be generalized in higher dimensions [P9].Let V = V M ⊂ P M , M ≥ 5, be a sufficiently general (for the precise conditions see [P9]) hypersurface with isolated terminal singularities.For a singular point x ∈ V we obviously have and if µ x = M − 2, then the conditions of general position imply that there is only one point with this multiplicity.Let us define the integer where τ ∈ Bir V is the Galois involution determined by the rational map Of course, singular Fano varieties are more numerous in types than the smooth ones.The smooth quartics generalize to 95 families of Q-Fano hypersurfaces V = V d ⊂ P(1, a 1 , . . ., a 4 ), a 1 + a 2 + a 3 + a 4 = d.They all have terminal quotient singularities but in a sense are closer by their properties to smooth Fano varieties.Theorem 6.3.[CPR] A general member V of each of 95 families is birationally rigid.The group of birational self-maps is generated by finitely many involutions. 
For each of 95 families these involutions were explicitly described in [CPR].This paper is based on the classical method of maximal singularities combined with the Sarkisov program [S3,R,C1]. The relative version So far we have been considering the absolute case, that is, the case of Fano varieties.However, the world of rationally connected varieties is much bigger.If we assume the predictions of the minimal model program, each rationally connected variety is birational to a fibration into Fano varieties over a base that, generally speaking, is not necessarily a point. Here we give a very brief outline of the relative version of the rigidity theorythat is, rigidity theory of non-trivial fibrations.Similar to the absolute case of Fano varieties, the starting point here was formed by "two-dimensional Fano fibrations" over a non-closed field, or simply speaking, surfaces with a pencil of rational curves over a non-closed field.In the papers of V.A.Iskovskikh [I1,I2] (which continued the work started in the papers of Yu.I.Manin on del Pezzo surfaces over non-closed fields [M1,M2], see also [M3]) it was proved that under certain conditions there is only one pencil of rational curves.This theorem was the first relative rigidity result.It was necessary to generalize these claims and, in the first place, the technique of the proof to higher dimensions. This breakthrough was made by V.G.Sarkisov in [S1,S2].Let us consider smooth conic bundles of dimension ≥ 3: a locally trivial sheaf of rank 3 on S. The points x ∈ S over which the conic π −1 (x) ⊂ P 2 degenerates comprise discriminant divisor D ⊂ S. Assume that V /S is minimal (or standard, see [I3,S1,S2]) in the following sense: That is, V /S is a Mori fiber space, see [C2].In particular, V /S has no sections.The main question to be considered is whether V has other structures of a conic bundle or not.In other words, let π ′ : V ′ → S ′ be another conic bundle.Is an arbitrary birational map χ: V − − → V ′ fiber-wise or not?Here is the diagram: where the ?sign means the question above. Theorem 7.1.[S1,S2] If |4K S + D| = ∅, then χ is always fiber-wise: there exists a birational map α: S − − → S ′ such that There was no concept of birational rigidity in 1980.Now we just say that V /S is birationally rigid.In [S1,S2] it was said just that the conic bundle structure, given by definition, is unique. Let V /S be a non-trivial fibration, dim S ≥ 1, with rationally connected (or just uniruled) fibers.Define the group of proper birational self-maps setting Bir(V /S) = Bir F η ⊂ Bir V, where F η is the generic fiber, that is, the variety V considered over the field C(S).In other words, birational self-maps from Bir(V /S) preserve the fibers. Definition 7.1.The fibration V /S is birationally rigid, if for any variety V ′ , any birational map V − − → V ′ and any moving linear system Σ ′ on V ′ there exists a proper birational self-map χ * ∈ Bir(V /S) such that the following diagram holds: The definition of superrigidity is word for word the same as in the absolute case. Proposition 7.1.Assume that X/S is Fano fibration with X, S smooth such that Pic X = ZK X ⊕ π * Pic S and for any effective class D = mK X + π * T the class NT is effective on S for some N ≥ 1. 
Assume furthermore that X/S is birationally rigid.Then for any rationally connected fibration X ′ /S ′ and any birational map χ: X − − → X ′ (provided that such maps exist) there is a rational dominant map α: S − − → S ′ making the following diagram commutative: Conjecture 7.1.If the fibration V /S as above is sufficiently twisted over the base S, then V /S is birationally rigid. We prefer not to be formal here about this conjecture.Instead of explaining what precisely is meant by the twistedness assumption, we just give an example that illustrates the point.This example has already been generalized in higher dimensions [P7]. Let us consider three-folds fibered into cubic surfaces: fibers are cubic surfaces π ↓ ↓ locally trivial P 3 − fibration Here rk E = 4, Pic P(E) = ZL ⊕ ZG, where L is the class of the tautological sheaf, G is the class of a fiber, and V ∼ 3L + mG is a smooth sufficiently general divisor in the linear system |3L + mG|.Assuming that 3L + mG is an ample class, we get by the Lefschetz theorem that Pic V = ZK V ⊕ ZF, where F = G| V is the class of a fiber.Furthermore,
2014-10-01T00:00:00.000Z
2003-10-17T00:00:00.000
{ "year": 2003, "sha1": "4ede94b1d4c2f15912f5bf4913695237fe543cf3", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4ede94b1d4c2f15912f5bf4913695237fe543cf3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
269123072
pes2o/s2orc
v3-fos-license
Clinical features of COVID-19-related optic neuritis: a retrospective study

Objective This retrospective study aimed to investigate the clinical features of optic neuritis associated with COVID-19 (COVID-19 ON), comparing them with neuromyelitis optica-associated optic neuritis (NMO-ON), myelin oligodendrocyte glycoprotein-associated optic neuritis (MOG-ON), and antibody-negative optic neuritis (antibody-negative ON). Methods Data from 117 patients (145 eyes) with optic neuritis at the Shantou International Eye Center (March 2020–June 2023) were categorized into four groups based on etiology: Group 1 (neuromyelitis optica-related optic neuritis, NMO-ON), Group 2 (myelin oligodendrocyte glycoprotein optic neuritis, MOG-ON), Group 3 (antibody-negative optic neuritis, antibody-negative ON), and Group 4 (optic neuritis associated with COVID-19, COVID-19 ON). Characteristics of T2 and enhancement in orbital magnetic resonance imaging (MRI) were assessed. Best-corrected visual acuity (BCVA) was compared before treatment, at a short-term follow-up (14 days), and at the last follow-up after treatment. Results The COVID-19-associated optic neuritis (COVID-19 ON) group exhibited 100% bilateral involvement, significantly surpassing other groups (P < 0.001). Optic disk edema was observed in 100% of COVID-19 ON cases, markedly differing from neuromyelitis optica-related optic neuritis (NMO-ON) (P = 0.023). Orbital magnetic resonance imaging (MRI) revealed distinctive long-segment lesions without intracranial involvement in T1-enhanced sequences for the COVID-19 ON group compared to the other three groups (P < 0.001). Discrepancies in optic nerve sheath involvement were noted between the COVID-19 ON group and both NMO-ON and antibody-negative optic neuritis (antibody-negative ON) groups (P = 0.028). Before treatment, no significant difference in best-corrected visual acuity (BCVA) existed between the COVID-19 ON group and other groups. At the 14-day follow-up, BCVA in the COVID-19 ON group outperformed the NMO-ON (P < 0.001) and antibody-negative ON (P = 0.028) groups, with no significant difference observed compared to the myelin oligodendrocyte glycoprotein optic neuritis (MOG-ON) group. At the last follow-up after treatment, BCVA in the COVID-19 ON group significantly differed from the NMO-ON group (P < 0.001). Conclusion Optic neuritis associated with COVID-19 (COVID-19 ON) predominantly presents with bilateral onset and optic disk edema. Orbital magnetic resonance imaging (MRI) demonstrates that COVID-19 ON presents as long-segment enhancement without the involvement of the intracranial segment of the optic nerve in T1-enhanced images. Glucocorticoid therapy showed positive outcomes.
Introduction

Optic neuritis (ON), characterized by inflammation of the optic nerve, presents in various forms: autoimmune, infectious, or systemic. Autoimmune optic neuritis includes distinct subtypes such as neuromyelitis optica-related optic neuritis (NMO-ON), collapsin response mediator protein 5 optic neuritis (CRMP5-ON), myelin oligodendrocyte glycoprotein ON (MOG-ON), multiple sclerosis ON (MS-ON), single isolated optic neuritis (SION), relapsing isolated optic neuritis (RION), and chronic relapsing inflammatory optic neuropathy (CRION). Antibody-negative optic neuritis (antibody-negative ON) is classified into SION and RION based on the course of the disease. Systemic disorders causing optic neuritis include allergic granulomatous angiitis, anti-neutrophil cytoplasmic autoantibody (ANCA)-associated vasculitis, ankylosing spondylitis, Behçet's disease, Churg-Strauss disease, Cogan syndrome, giant cell arteritis, granulomatosis with polyangiitis, IgG4 disease, Kawasaki disease, microscopic polyangiitis, polyarteritis nodosa, primary antiphospholipid syndrome, rheumatic disease, sarcoidosis, Sjögren syndrome, systemic lupus erythematosus, Susac's syndrome, systemic sclerosis, Takayasu arteritis, treatment side-effects, ulcerative colitis, and Wegener granulomatosis. COVID-19 infection can directly affect the optic nerve and may also trigger optic neuritis as a post-infectious or post-vaccination phenomenon, resulting in an infectious optic neuropathy (12). Infectious optic neuropathies are a less common but diverse group and may present with varied, overlapping, and non-specific clinical appearances (13). COVID-19 ON falls within the category of infectious optic neuropathies. This retrospective study aims to comprehensively explore and compare the clinical features of COVID-19 ON with those of NMO-ON, MOG-ON, and antibody-negative ON.
Study cohort This retrospective study was conducted at the Joint Shantou International Eye Center of Shantou University and The Chinese University of Hong Kong between March 2020 and June 2023.All patients were treated with intravenous methylprednisolone (at a dose of 20 mg/kg/day for children and 1 g/day for adults) for 3-5 days, followed by a taper of oral prednisone (at a starting dose of 1 mg/kg/day) with variable durations, based on the subtype of ON and recovery from optic neuritis.Follow-up data were obtained during the return visit for clinical examinations.For patients whose follow-up compliance time is less than half a year, the recurrence status will be obtained by telephone follow-up with the patient or their guardian. The diagnostic and inclusion criteria for patients with ON are as follows ( 12): (1) Monocular, subacute loss of vision associated with relative afferent pupillary defect (RAPD) and with or without orbital pain worsening on eye movements, (2) Binocular, subacute loss of vision associated with or without relative afferent pupillary defect (RAPD) or orbital pain worsening on eye movements, (3) Orbital magnetic resonance imaging (MRI): Contrast enhancement of the symptomatic optic nerve and sheaths acutely, with or without T2 hyperintensity within 3 months, (4) Biomarkers: Aquaporin 4 (AQP4) and MOG antibody seropositive, and (5) NMO-ON and MOG-ON conform to their respective diagnostic guidelines (14, 15).All patient sera were tested for autoantibodies, including ANCA, antinuclear antibody (ANA), human leukocyte antigen-B27 (HLA-B27), anti-Sjögren's syndrome-related antigen A (SSA), anti-Sjögren's syndrome-related antigen B (SSB), rheumatoid factor (RF), and antibodies related to tuberculosis, hepatitis B, syphilis, HIV, and COVID-19.The exclusion criteria for the study are as follows: (1) any other types of optic neuropathy, including compressive, vascular, toxic, metabolic, infiltrative, or hereditary optic neuropathy, (2) presence of craniocerebral lesions other than those from demyelinating diseases involving the optic chiasm or optic pathway downstream of the optic chiasm and the optic cortex, (3) glaucoma or any other ocular diseases that could influence visual acuity (VA), and (4) unknown serum MOG and AQP4 antibody status.The diagnostic and inclusion criteria for patients with COVID-19 ON are as follows ( 12): (1) diagnosis of optic neuritis, (2) exclusion of other types of infectious optic neuritis, (3) negative AQP4, and MOG antibodies, and (4) confirmed diagnosis of COVID-19 within the past 28 days, with current positivity for antibodies.The exclusion criterion for the study was the absence of acute clinical manifestations of new COVID-19. 
Study protocol All patients underwent a comprehensive ophthalmic evaluation, which included the assessment of Snellen best-corrected visual acuity (BCVA), slit-lamp examination, measurement of intraocular pressure (IOP), and Humphrey automated static perimetry for visual field assessment (VF).Dilated fundus photography and peripapillary retinal nerve fiber layers (pRNFLs) were assessed to evaluate optic disk swelling before treatment using high-definition spectral domain optical coherence tomography (SD-OCT: Carl Zeiss Meditec, Dublin, CA, USA).BCVA was examined by the standard table of vision logarithms at 5 m.Individuals who were unable to read any letter at a distance of 1 m were further examined by finger counts, hand movements, or perceiving light.Visual parameters, converted to logarithm of the minimum angle of resolution (logMAR) units for statistical analysis, were recorded at baseline, 14-day follow-up, and final follow-up visits.The onset of symptoms in both eyes within a span of 2 weeks is considered as simultaneous bilateral onset. All patients underwent orbital magnetic resonance imaging (MRI) with fat-suppressed T2-weighted images and gadoliniumenhanced T1 sequences at a magnetic field strength of 1.5 T. The imaging encompassed the optic nerve and the optic chiasm, with axial scans precisely aligned to these structures.The slice thickness was 3 mm with a 0.5-mm interslice interval.In orbital MRI, the optic nerve was divided into three regions: orbital, canalicular, and intracranial.The lesion length or the optic nerve length was determined by multiplying the slice thickness (3 mm) and interslice interval (0.5 mm) by the number of coronal sections showing optic nerve changes on T1-enhanced images, with or without long T2 sequences.Long-segment lesions were those exceeding half the length of the optic nerve, as measured using MicroDicom Viewer software, version 3.4.7 (MicroDicom DICOM viewer Copyright © ).For the statistical analysis of orbital MRI, cases from external institutions with lesions that could not be measured for length were excluded.MRI of the head or spinal regions was performed on patients presenting with myelitis or systemic symptoms.The images were reviewed blindly by two independent raters (FFZ and LPC).When there was a mismatch between MRI findings, the images were reviewed by both readers and a consensus was reached. Statistical methods Statistical analysis was carried out using SPSS software version 25.0 (IBM Co., Chicago IL).Quantitative data were expressed as mean ± standard deviation or median and range, as appropriate.Appropriate parametric and non-parametric tests, including the chi-squared test, analysis of variance (ANOVA), Fisher's exact test, the Mann-Whitney U-Test, and the Kruskal-Wallis test, were applied reasonably.Bonferroni-corrected pairwise comparisons were used to adjust for multiple comparisons.P-values of <0.05 were considered statistically significant. 
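As a concrete illustration of the image-derived quantities described above, the sketch below reproduces the lesion-length arithmetic (number of coronal sections showing signal change multiplied by the 3 mm slice thickness plus the 0.5 mm interslice interval), the long-segment criterion (more than half of the measured optic nerve length), and the conversion of decimal visual acuity to logMAR. It is an illustrative sketch only; the function and variable names are not taken from the study.

```python
# Illustrative helpers for quantities described in the study protocol.
import math

SLICE_THICKNESS_MM = 3.0   # coronal slice thickness
INTERSLICE_GAP_MM = 0.5    # interslice interval

def segment_length_mm(n_sections: int) -> float:
    """Length covered by n coronal sections (slice thickness + gap each)."""
    return n_sections * (SLICE_THICKNESS_MM + INTERSLICE_GAP_MM)

def is_long_segment(lesion_sections: int, optic_nerve_sections: int) -> bool:
    """Long-segment lesion: longer than half the measured optic nerve length."""
    lesion = segment_length_mm(lesion_sections)
    nerve = segment_length_mm(optic_nerve_sections)
    return lesion > 0.5 * nerve

def decimal_to_logmar(decimal_acuity: float) -> float:
    """Convert decimal visual acuity (e.g. 0.5) to logMAR units."""
    return -math.log10(decimal_acuity)

# Example: a lesion seen on 6 of 10 optic-nerve sections spans 21 mm and
# counts as long-segment; decimal acuity 0.1 corresponds to logMAR 1.0.
assert is_long_segment(6, 10)
assert abs(decimal_to_logmar(0.1) - 1.0) < 1e-9
```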
MRI manifestation

Cases from external institutions with lesions that could not be measured for length were excluded. A total of 99 eyes were retrospectively categorized into four groups: Group 1 comprised 35 eyes with NMO-ON; Group 2 included 10 eyes with MOG-ON; Group 3 consisted of 46 eyes with antibody-negative ON; and Group 4 encompassed 8 eyes with COVID-19 ON. Table 3 presents the optic nerve MRI findings on T1-enhanced images, with or without long T2 sequences. No significant differences in orbital, canalicular, intracranial, and long-segment lesions were observed among the four groups (p = 0.305, p = 0.712, p = 0.287, and p = 0.165) (Table 2). Regarding involvement of the optic chiasm, Group 1 exhibited a significant difference compared to Group 3 (p = 0.032, Table 3). Concerning involvement of the optic nerve sheath, differences were noted between Group 4 and both Group 1 and Group 3 (p = 0.028, Table 3). In terms of long-segment lesions not involving the intracranial segment, significant differences were observed between Group 4 and the other three groups (p < 0.001, Table 3; Figure 1).

Discussion

Our study investigated the clinical features of COVID-19 ON, categorizing it as an infectious optic neuropathy. The etiology of bacterial infectious optic neuropathies includes Bartonella, Brucella, Coxiella burnetii, leprosy, Mycoplasma pneumoniae, ocular cat-scratch disease, Streptococcus, syphilis, tuberculosis, Whipple disease, and typhus. Fungal infectious optic neuropathies are mainly associated with histoplasmosis, while parasitic infectious optic neuropathies are primarily linked to toxoplasmosis and neurotoxocarosis. The etiology of viral infectious optic neuropathies encompasses Chikungunya fever, cytomegalovirus, coronavirus, dengue, Epstein-Barr virus, echovirus, hepatitis B and C, herpes simplex, human immunodeficiency virus (HIV), human herpesvirus 6, Inoue-Melnick virus, measles, mumps, rubella, varicella-zoster virus (VZV), tick-borne encephalitis, West Nile virus, and Zika virus (12). COVID-19 belongs to the coronaviruses within the viral class.
Optic neuritis induced by cytomegalovirus, dengue, echovirus, hepatitis B, herpes simplex, human herpesvirus 6, measles, mumps, rubella, West Nile virus, and Zika virus is primarily reported in individual case studies, lacking a systematic summary of clinical presentations, MRI characteristics, and treatment efficacy. Limited reports on optic neuritis caused by HIV suggest that patients may experience unilateral or bilateral onset, often with optic disk swelling. The recommended treatment is directed toward the underlying primary condition. Optic neuropathy related to HIV demonstrates axonal degeneration, possibly mediated by infected macrophages and increased expression of proinflammatory cytokines (16, 17). Another small number of reports on varicella-zoster virus-induced optic neuritis indicate that patients may experience the condition unilaterally or bilaterally, with some presenting optic disk swelling. MRI findings suggest abnormalities in the optic nerve and/or the myelin sheath. The recommended treatment involves antiviral therapy combined with corticosteroid treatment, and the prognosis is uncertain, possibly associated with timely administration of medication. The pathological mechanism may be attributed to direct damage by VZV or optic nerve demyelination caused by an immune response (18). Infectious optic neuropathy induced by the Chikungunya virus (CHIKV) has also been reported, with documented cases linked to a short epidemic in 2006-2007 in Southern India; patients may experience the condition unilaterally or bilaterally, both accompanied by optic disk swelling. Corticosteroids have been found to expedite recovery when initiated early in the disease. The possible causes may include direct viral involvement and a delayed immune response after a viral infection. A good response to corticosteroid therapy indicates the possibility of an autoimmune mechanism in the pathogenesis of the disease (18, 19).

Among ON cases caused by coronaviruses, COVID-19 ON has been most frequently documented, predominantly in the form of individual cases or case series, lacking systematic clinical presentations and treatment outcomes (5-11). In our study, all patients presented with bilateral involvement, differing from optic neuropathy induced by HIV, VZV, and CHIKV, which can affect either one or both eyes. Patients exhibited optic disk swelling similar to that observed in optic neuritis caused by CHIKV. The unique characteristics of MRI examinations in this context have not been systematically summarized in previous research. A 14-day follow-up of patients with COVID-19 ON revealed no significant difference in BCVA compared with the MOG-ON group; however, there was a notable improvement in the COVID-19 ON group compared with the NMO-ON group and the antibody-negative ON group. At the last follow-up, BCVA in the COVID-19 ON group significantly differed from the NMO-ON group. This study suggests that patients with COVID-19 ON respond well to glucocorticoid therapy, akin to the effects observed in previous studies of corticosteroid treatment for optic neuropathy induced by CHIKV. The treatment of optic neuritis caused by HIV primarily focuses on addressing the underlying primary disease. For optic neuritis caused by VZV, the recommended approach involves a combination of antiviral therapy and corticosteroid treatment, and the prognosis remains uncertain.
Efforts to delve deeper into the pathogenesis of COVID-19 ON demand heightened attention. The coronavirus, an enveloped virus with a diameter of ∼120 nanometers, exhibits distinctive crown-shaped projections on its surface (20). Its genome comprises an exceptionally long positive-stranded RNA. Infection by severe acute respiratory syndrome coronavirus 1 (SARS-CoV-1) and SARS-CoV-2 is mediated through the angiotensin-converting enzyme 2 (ACE2) receptor (21, 22). The neuropathological characteristics of COVID-19 arise from SARS-CoV-2's direct invasion into the brain via the nasopharyngeal region. ACE2 receptors are identified in the nervous system, including glial cells and the basal ganglia (23). Inflammatory mediators disrupt the blood-brain barrier, activate glial cells, and induce demyelination of nerve fibers (24). A previous study revealed that molecular mimicry and the production of autoantibodies may contribute to immune-mediated damage to nervous tissues (23, 24). In our study, patients exhibited a favorable response to steroid treatment, suggesting that the pathological mechanism may be primarily attributed to molecular mimicry and the production of autoantibodies, contributing to immune-mediated damage to nervous tissues. This mechanism is similar to the pathogenesis of optic neuropathy induced by CHIKV. However, optic neuropathy caused by HIV and VZV may involve direct damage to the optic nerve, with or without demyelination, triggered by an immune response.

Among NMO-ON patients, 47% had a history of viral infection (25), while among MOG-ON patients, 37.5-60% had a history of previous infections (26). However, these patients were diagnosed with NMO-ON or MOG-ON rather than infectious optic neuropathy. Extensive research has been conducted on NMO-ON and MOG-ON, with standardized diagnostic and treatment guidelines and systematic summaries of clinical manifestations and MRI examinations. In alignment with this, we have summarized the clinical characteristics of COVID-19 ON. In comparison to NMO-ON and MOG-ON, COVID-19 ON mainly presents with bilateral onset and is more prone to optic disc edema. Previous studies have indicated that NMO-ON often involves the posterior and frequently longitudinally extensive optic nerve, with a tendency to affect the chiasmal region (14); MOG-ON is frequently longitudinally extensive and involves the optic nerve sheath (15). Our study corroborates these findings but uniquely identifies COVID-19 ON as presenting with long-segment enhancement without involvement of the intracranial segment of the optic nerve in T1-enhanced images, which is a novel observation (Figure 1). Unfortunately, owing to the retrospective nature, relatively small sample size, and short-term follow-up of this study, our statistical conclusions lack the strongest evidence support.

Conclusion

Our investigation highlights the favorable impact of glucocorticoid therapy on COVID-19 ON. Notably, our orbital MRI observations unveiled a unique long-segment enhancement in T1-enhanced images, excluding involvement of the intracranial optic nerve segment. These distinctive features identified in COVID-19 ON contribute significantly to advancing our comprehension of infectious optic neuropathies, offering valuable insights for future research and clinical considerations.

TABLE. Demographic and clinical details of patients in the four groups of optic neuritis.
TABLE. Best-corrected visual acuity (BCVA) during the course of the study for each group.

TABLE. Comparison of the optic nerve MRIs in the four groups. Based on Fisher's exact test. NMO-ON, neuromyelitis optica-related optic neuritis; MOG-ON, myelin oligodendrocyte glycoprotein optic neuritis; antibody-negative ON, antibody-negative optic neuritis; COVID-19 ON, optic neuritis associated with COVID-19; MRI, magnetic resonance imaging.
2024-04-14T15:11:09.532Z
2024-04-12T00:00:00.000
{ "year": 2024, "sha1": "30f54b389e82d68ab3632dbf649e741b3fac3bc4", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/neurology/articles/10.3389/fneur.2024.1365465/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "98405d9833cf1b23627a5ccef9b21dd99e7998ec", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
85694818
pes2o/s2orc
v3-fos-license
Losing time? Incorporating a deeper temporal perspective into modern ecology

Ecologists readily acknowledge that a temporal perspective is essential for untangling ecological complexity, yet most studies remain of relatively short duration. Despite a number of excellent essays on the topic, only recently have ecologists begun to explicitly incorporate a historical component. Here we provide several concrete examples, drawn largely from our own work, that clearly illustrate how the adoption of a longer temporal perspective produces results significantly at odds with those obtained when relying solely on modern data. We focus on projects in the areas of conservation, global change and macroecology because such work often relies on broad-scale or synthetic data that may be heavily influenced by historic or prehistoric anthropogenic factors. Our analysis suggests that considerable care should be taken when extrapolating from studies of extant systems. Few, if any, modern systems have been unaffected by anthropogenic influences. We encourage the further integration between paleoecologists and ecologists, who have been historically segregated into different departments, scientific societies and scientific cultures.

Figure 2. Relationship between maximum body size and area of landmass for extant (open circles, N = 28) and late-Pleistocene (closed circles, N = 30) mammals. Modern data are missing for two islands, Barbuda and East Falkland, due to the extinction of all terrestrial, non-carnivorous mammals. Both body size and area were log10-transformed prior to analysis. Slopes were indistinguishable between the two time periods, but the intercept for late-Pleistocene mammals was significantly larger (ANOVA; p < 0.01) than for extant species, suggesting that islands supported larger animals in the past.

The pronghorn (Antilocapra americana) is a quintessential symbol of the Great Plains. As the fastest land mammal in the Americas, it can reach close to ~100 km/h and can sustain speeds of 45 km/h for long distances (Byers 1997). Much of its physiology, morphology and life history reflect an optimization for being swift; pronghorn have oversized hearts and lungs, a 320° field of vision, hollow hair and overlong gestation for their size (Byers 1997). Understanding the selective pressures that led to such specialized adaptations is difficult without the knowledge that the pronghorn co-evolved with a suite of now extinct predators, including the American cheetah (Miracinonyx trumani) (Byers 1997, Barlow 2001). As the only surviving member of the once speciose North American family Antilocapridae, the pronghorn no longer has effective natural predators. Consequently, many of its social, morphological and physiological traits have little apparent modern selective value (Byers 1997). Ecologists recognize the anachronistic nature of animals like the pronghorn, but more as a curiosity rather than as a concrete example of the substantial alteration of ecosystems that occurred in the late Quaternary. Although work investigating the ecology and life-history characteristics of tropical and temperate plants has proposed that numerous adaptations for dispersal or regrowth arose in response to foraging by now-extinct megafauna (Janzen and Martin 1982, Wing and Tiffney 1987, Barlow 2001), in general, the implications of the prehistoric loss of megafauna in the late Pleistocene have been overlooked.
Yet, these animals undoubtedly played key roles in terms of ecosystem structure and function; their abrupt disappearance some 11,000 years ago must have profoundly influenced ecosystem dynamics (Martin 1967, Donlan et al. 2005). How many other life-history, ecological or distributional features of extant animals and plants are due in some part to now-extinct components of the ecosystem?

As ecologists increasingly turn from 'explaining the present' to 'anticipating the future', there has been renewed interest in bringing a historical perspective into ecology (Botkin et al. 2007, Gavin et al. 2007, Williams and Jackson 2007). While earlier workers illustrated the insights a wider temporal window yields (e.g., Schoonmaker and Foster 1991, Herrera 1992, Delcourt and Delcourt 1998), the contemporary focus on anthropogenic climate change has galvanized efforts. After all, the late Quaternary provides abundant examples for examining the influence of changing abiotic conditions on organism distribution, ecology and evolution (Clark et al. 2001, Botkin et al. 2007, Gavin et al. 2007, Williams and Jackson 2007). Evidence from fine-scale paleoclimate reconstructions (e.g., pollen, cross-dated tree-ring chronologies, ice cores, and other indicators) suggests abrupt climate shifts occurred with regularity in the past (Schoonmaker and Foster 1991, Allen and Anderson 1993, Dansgaard et al. 1993, Bond and Lotti 1995, Alley 2000). Some, such as the Younger Dryas, were significant events, with temperature warming of as much as 5-10°C reportedly occurring within a decade (Alley et al. 1993, Alley 2000), a rate of change higher than that expected under most scenarios of anthropogenic climate change (IPCC 2007). Virtually all species extant today were present and successfully coped with the Younger Dryas. Thus, it has been recognized as a particularly useful analog for studying the likely effects of anthropogenic climate change. Indeed, the most recent IPCC report now contains sections on paleoclimate.

The de facto standard traditionally used by ecologists to set ecological baselines is to replicate experiments across space. The implicit assumption is that if the spatial extent is sufficient it encompasses the possible range of natural variation. However, more than 20 years ago it was recognized that conceptual problems occur if ecologists use "short-term experiments to address long-term questions" (Tilman 1989, p. 139). Although the incorporation of a broader spatial perspective clearly increases the natural range of variation expressed in both abiotic and biotic conditions, space is not necessarily an adequate substitute for time. Simply put, ecological history matters. This disparity may occur because of non-analog climatic conditions found in the past, leading to assemblages of mammal or vegetative communities not found together today (e.g., Huntley 1990, Schoonmaker and Foster 1991, Overpeck et al. 1992, Graham et al. 1996, Williams et al. 2001, Williams and Jackson 2007), or because the type or magnitude of change in the past dwarfs that represented along a modern spatial gradient (Jackson 2007). Here, we provide several concrete examples of how the adoption of a deeper temporal perspective can sometimes provide more complete and often divergent insights into modern ecology.
These are drawn largely from our own work or that of close colleagues because it was otherwise difficult to obtain the original data that would allow us to redo the analyses. Our first examples demonstrate how limiting macroecological studies to extant taxa may yield skewed interpretations of broad-scale geographic patterns. The second set of examples demonstrates the added utility that a longer temporal perspective can provide for conservation biology. Both paleontologists and conservation biologists are interested in extinction, and understanding past events could aid in the understanding of current risk for many taxa. Finally, we focus on studies of the likely effect of anthropogenic climate change on organisms and ecosystems. Climate scientists have traditionally used forward-projected models with baselines established using modern conditions. Given the likelihood of non-analog climatic regimes in the future, models that are parameterized based only on modern conditions are likely to fail to accurately predict ecological responses to novel climates. Here, a historical perspective can be particularly useful. Our intent is to demonstrate the need to integrate both paleontological and ecological approaches in developing a synoptic understanding of ecological systems.

Ecological examples

Body size distributions in macroecology

Macroecology was developed in the late 1980s as an effort to understand patterns underlying the local abundance, distribution and diversity of species (Brown and Maurer 1989). As a complementary approach to experimental ecological research, it has been remarkably successful at illuminating large-scale spatial and statistical patterns (Smith et al. 2008 and references therein). Body size is often used as a variable of interest in macroecological studies because it is tightly related to many fundamental physiological, ecological and evolutionary characteristics, and moreover, is relatively easy to characterize even for fossil forms (Peters 1983, Damuth and MacFadden 1990). But what if the underlying body-size distribution used in an analysis is biased or incomplete? The use of body size as a dependent variable could potentially lead to misleading results if portions of the biota are selectively missing. In 1991, Brown and Nicoletto reported that the shapes of mammalian body-size distributions in North America change with spatial scale. The continental-level body-size distribution was unimodal and right skewed, but as spatial scale decreased, regional distributions became progressively flatter until they were nearly uniform for local communities. Brown and Nicoletto's results had important implications in terms of community assembly and structure and the paper has been highly cited. Because species found at local scales were not a random subsample of the regional scale (differing in median, mean, skew and range of size), they argued there were 'rules' influencing the assembly of communities and a limit to the number of species of each body size that could co-exist locally. Distributions became peaked as sites were aggregated over space because of higher taxonomic turnover in smaller-bodied species.
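Comparisons of this kind, between body-size distributions drawn from different spatial scales or from faunas with and without their extinct members, are typically made with a two-sample test such as the Kolmogorov-Smirnov test used in the reanalysis below. A minimal sketch with hypothetical log-transformed masses (not the FAUNMAP or published body-size data) illustrates the calculation.

# Illustrative sketch (hypothetical masses): comparing body-size distributions
# with and without extinct late-Pleistocene megafauna using a two-sample
# Kolmogorov-Smirnov test, as in the reanalysis described in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical log10 body masses (g) of extant species at a continental scale.
extant = rng.normal(loc=2.0, scale=1.0, size=300)

# Add a second, larger-bodied mode standing in for extinct megafauna.
megafauna = rng.normal(loc=5.5, scale=0.5, size=40)
full_fauna = np.concatenate([extant, megafauna])

d_stat, p_value = stats.ks_2samp(extant, full_fauna)
print(f"KS D = {d_stat:.3f}, p = {p_value:.4f}")
# A small p-value indicates that the shape of the distribution differs once a
# size-biased extinction is accounted for, as reported at regional and
# continental (but not local) scales.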
Numerous authors have debated the validity of both the patterns and underlying mechanisms since this seminal paper was published; for example, mammalian communities in South American tropical forests reportedly show more peaked distributions than those in other habitats (Marquet and Cofré 1999, Bakker and Kelt 2000) and the body-size distributions of bats are not flat at the local level across a wide range of latitudes (Willig et al. 2008). To what extent were the macroecological patterns described by Brown and Nicoletto (1991) influenced by the use of extant North American mammals? The contemporary distributions of both North and South American mammals were heavily impacted by the human-mediated late-Pleistocene megafaunal extinction (Martin 1967, Surovell and Waguespack 2009). This event was extremely size biased; although 'only' 12.8% of the North American mammal fauna were extirpated, they were mostly the largest species present (Lyons et al. 2004). To address the sensitivity of these macroecological patterns, we reanalyzed the results reported by Brown and Nicoletto (1991) at the local, regional and continental level, using a global database of body size in late Quaternary mammals (Smith et al. 2003). As an example, we present the mammalian body-size distribution of a county in New Mexico nested within the western grasslands biome of North America. For each of these areas we determined the likely presence of extinct species based on FAUNMAP range reconstructions and local fossil evidence (Harris 1970, Graham et al. 1996, Wilson and Ruff 1999, NPS 2007). Our analysis suggests little sensitivity to inclusion of extinct species at the local level (Fig. 1); although the range of mammalian body size at sites was underrepresented (e.g., the local area supported larger animals than the maximum present today), the shape of the distribution was not significantly different. At coarser spatial scales, however, both the range of body size and the shape of the distribution changed significantly (Kolmogorov-Smirnov two-sample tests, regional: P < 0.05, continental: P < 0.01). The body size distribution at both the regional and continental levels contains a second mode of larger-bodied mammals (Fig. 1). Recent work by Lyons et al. (2004) suggests that such multimodality is typical of all continents when extinct late-Pleistocene megafauna are included.

What does this mean in terms of ecosystem function? Macroecologists routinely use patterns in the body-mass distributions of animals as a basis for understanding the structure, assembly and persistence of ecological communities. Many hypotheses have been proposed to explain body-size distributions at various levels, including ones based on energetics, ecology, phylogeny, biogeography and habitat or textural discontinuities. However, a longer temporal perspective reveals the influence of the size-biased extinction on the shape of contemporary body-size distributions. Clearly large animals have pivotal roles in ecosystems (e.g., Owen-Smith 1987, Pringle et al. 2007), and integrative paleoecological studies are beginning to reveal the profound effects of megafaunal extinction on vegetation structure, composition, and dynamics, both in North America (Gill et al. 2009) and elsewhere (Galetti 2009, Johnson 2009).

Scaling of landmass area and body size

Space use in animals is strongly linked to body size and has been a focal point of much macroecological research (Brown and Maurer 1989, Brown 1995, Gaston 2003, Jetz et al. 2004).
Marquet and Taper (1998), and later Burness et al. (2001), observed that the size of the largest mammal on a given landmass increases with land area. To explain this pattern, they noted that large mammals are characterized by both low population densities and large home ranges. Recent studies of mammalian (Okie and Brown 2009) and avian (Boyer and Jetz 2010) body size on islands also found robust scaling relationships between maximum size and island area. These effects imply that to persist large species require large areas to sustain viable population sizes, an important concept in reserve design and conservation of large-bodied mammals (Kelt and Van Vuren 2001). However, Jetz et al. (2004) found a high degree of home range overlap in large mammal species, suggesting that population density rather than home range size is the better measure to use in quantifying individual area needs for conservation purposes.

Figure 1. Body size frequency distributions of North American mammal species before (black shading) and after (grey shading) late-Pleistocene extinctions. Distributions are shown at three different scales: the continent of North America (top), regional (a western grassland biome; middle), and local (Eddy County, New Mexico; bottom). While significant differences between past and present body size distributions are observed at the continental and regional scales, patterns were statistically indistinguishable at the local scale.

Since many islands experienced extinctions during the late Pleistocene (Alcover et al. 1998), and these extinctions may have affected the local body-size distribution (Lyons et al. 2004, Boyer and Jetz 2010), we re-examined the scaling of maximum size with land area before the influence of human-mediated extinctions. We gathered data on the largest mammal species found today and in the late Pleistocene on 30 islands and landmasses around the world. Mammal data were limited to herbivorous and omnivorous species, owing to differences in the scaling of population density and space use between carnivores and herbivores (Peters 1983, Jetz et al. 2004). Island area was based on present-day measurements. Because island mammals would have experienced a dynamic land area due to eustatic sea-level changes during the late Pleistocene, and because the extinct taxa in our dataset also differ in their dates of last appearance, we found it difficult to assign a single late-Pleistocene value for land area to each island. However, because sea levels in most areas were over 100 m lower than present levels during the last glacial maximum (Fleming et al. 1998), the land area of many islands would have been substantially larger during the late Pleistocene and some islands were connected to nearby continents by exposed land bridges. To control for these issues, we excluded all land-bridge islands and islands where extinction occurred when sea levels were substantially lower than current levels (ca. 7000 years before present, Fleming et al. 1998). For comparison to the island data, we also included late-Pleistocene and modern body mass
values and modern land area for six continental landmasses (Australia, New Guinea, South America, North America, Africa, and Eurasia). Body size (g) and area (km²) were log10-transformed prior to analysis. We compared the size of the largest mammal species on each island before and after the extinction, and plotted these body size maxima against land area (Fig. 2). The strength and direction of the scaling between maximum body size and area in extant species was not significantly altered by the inclusion of extinct species, but the intercept was significantly higher when late-Pleistocene species were included (ANOVA, P < 0.01, df = 1,54, F = 7.4) than for extant species (Fig. 2). This translates to an order of magnitude decrease in the size of the largest mammal supported by a given land area since the late Pleistocene. For example, according to the modern data, the largest mammal that an island of 1000 km² would be expected to support is about 431 g; however, in the late-Pleistocene fauna, this maximum size was 2291 g. In order to compare the scaling of body size with area to other studies, we computed the slope of the relationship with body mass as the independent variable (late-Pleistocene: slope = 0.86, extant: slope = 0.71). Late-Pleistocene faunas were statistically consistent with previous studies of both the scaling of maximum body size with area in extinction-structured communities (slope = 0.79, Marquet and Taper 1998) and the body-size scaling of population density in mammals (slope = 0.76, Jetz et al. 2004), suggesting that late-Pleistocene island faunas accurately represent the ecologically constrained scaling of maximum body size with area. It is beyond the scope of this study to determine the ecological mechanisms that allowed larger species to inhabit smaller areas in the past than observed today; differences in scaling intercept may relate to the allocation of land area and resources for use by humans and their commensals (Boyer and Jetz 2010). However, it is clear that macroecological studies based solely on extant species, especially those conducted in areas known to be affected by recent extinctions, may offer an incomplete picture of these ecosystems.
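A minimal sketch of the kind of comparison just described, fitting log10(maximum mass) against log10(area) for two groups and asking whether they share a slope but differ in intercept, is given below. The data are synthetic stand-ins generated with a common slope and a shifted intercept, not the island dataset itself, and the statsmodels-based workflow is only one way to run such an ANCOVA.

# Illustrative sketch (synthetic data): testing whether extant and
# late-Pleistocene maximum body sizes share a slope but differ in intercept,
# i.e. an ANCOVA on log10(mass) vs log10(area) with a period term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
log_area = rng.uniform(1, 7, size=30)                    # log10 km^2
df = pd.DataFrame({
    "log_area": np.concatenate([log_area, log_area]),
    "period":   ["extant"] * 30 + ["pleistocene"] * 30,
})
# Synthetic generating model: one slope, intercept ~0.7 dex higher in the past.
df["log_mass"] = (0.8 * df.log_area
                  + np.where(df.period == "pleistocene", 1.2, 0.5)
                  + rng.normal(0, 0.3, 60))

# Full model with interaction: a non-significant interaction term means the
# slopes are statistically indistinguishable.
full = smf.ols("log_mass ~ log_area * period", data=df).fit()
print("slope-difference p-value:",
      round(full.pvalues["log_area:period[T.pleistocene]"], 3))

# Reduced model (common slope): the 'period' coefficient is the intercept shift.
common = smf.ols("log_mass ~ log_area + period", data=df).fit()
shift = common.params["period[T.pleistocene]"]
print(f"Intercept shift: {shift:.2f} dex "
      f"(~{10**shift:.1f}x larger maximum mass on a given area in the past)")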
Temporal perspective in conservation studies

A central goal of conservation biology is to understand the extinction process in order to mitigate current and future anthropogenic biodiversity losses. Extinction risk analyses, like many other predictive ecological studies, are often based on current distributions of extant species (Cardillo 2003). Longer temporal records, however, can provide an alternative perspective on how conservationists view extinctions. From this perspective, near-time fossil data can be invaluable for identifying general patterns of extinction risk (McKinney 1997, Willis et al. 2007, Boyer 2010). The inclusion of fossil data has several advantages. First, unlike data on extant endangered species, fossil data provide direct information on the extinction process itself; rarity and extinction do not always result from the same processes. Second, fossil data represent an independent dataset of extinction probability on which to build predictive models. This avoids the circularity inherent in building a model and testing it on the same dataset (such as the IUCN Red List). Not only can paleoecological data provide a comparison of prehistoric (>500 years) and historic (past 500 years) extinctions, but such data may also aid in determining baseline conditions for conservation and restoration, and help predict future extinction risk.

As a conservation-oriented case study, we turn to the Holocene extinction of birds on Pacific islands. In the Hawaiian islands, the arrival of Polynesian colonists about 1200 years ago corresponded with the extinction of about 50% of indigenous land bird species (56/111 species; James 1982, 1991). In comparison, historic losses of Hawaiian birds amount to about 40% of the historically observed species (23/55 species; Boyer 2008). To put global Holocene avian extinctions in context, Pimm et al. (2006) estimated extinction rates for pre-European and historic timescales, and compared these to the generic background of ~1 extinction per million species per year (E/MSY). Historic rates were around 26 E/MSY, but after accounting for pre-European extinctions, the estimate rose to ~100 E/MSY. The vast majority of recorded extinctions before the 20th century were on islands. If the most vulnerable species were lost quickly after human colonization, we may expect rates on islands to slow down over time (Pimm et al. 1994). However, human impacts on island environments have intensified through time, so how do prehistoric and historic extinctions compare in Hawaii?

In the Hawaiian islands, the influence of humans on natural environments differed between the two time periods. Consequently, prehistoric and historic extinction waves may have had different causes resulting in contrasting patterns of extinction risk (see Boyer 2008). Prehistoric extinctions showed a strong bias toward larger body sizes and flightless and ground-nesting species, even after accounting for fossil preservation bias. Many small, specialized species also disappeared, implicating a wide suite of human activities including hunting and destruction of habitat. In contrast, the highest extinction rates in the historic period were in medium-sized nectarivorous and insectivorous birds. Although the most vulnerable species may have disappeared first, changing human activities led to continued extinctions through time. Currently endangered species are only the most recent victims of a human-caused biodiversity crisis that began thousands of years ago (Steadman 1995). Despite the crucial information the past can provide, paleoecological data are not always incorporated in studies of extinction risk (Blackburn et al. 2004, Trevino et al. 2007). To illustrate the difference this might make, we examined correlates of extinction risk for Hawaiian birds using decision tree models (Boyer 2008). We compared the results of two models: one including only extinctions that occurred since European colonization of Hawaii (ca. 1800 AD), and one incorporating all known prehistoric and historic extinctions (cumulative). The addition of older data to the model produced a substantial increase in explanatory value (7.3 Δ% DE; Fig. 3). When the recent extinctions alone were considered, extinction predictors included body mass and diet, but the cumulative tree expanded the list to include endemism and flightlessness as significant risk factors as well. While these traits may have been most important during the prehistoric extinction, they remain important for modern birds.
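As a rough illustration of this kind of trait-based comparison, the sketch below fits two classification trees, one on historic extinctions only and one on cumulative (prehistoric plus historic) extinctions, and reports which traits each tree leans on. Everything here is synthetic: the trait values, the simulated extinction outcomes, and the scikit-learn workflow are placeholders for, not a reproduction of, the decision tree analysis in Boyer (2008).

# Illustrative sketch (synthetic data, not the Hawaiian bird dataset): decision
# trees relating species traits to extinction, fit once with historic
# extinctions only and once with prehistoric + historic extinctions combined.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 120
traits = pd.DataFrame({
    "log_mass":    rng.normal(2.0, 0.6, n),
    "flightless":  rng.integers(0, 2, n),
    "ground_nest": rng.integers(0, 2, n),
    "endemic":     rng.integers(0, 2, n),
})

# Synthetic outcomes: prehistoric losses biased toward large, flightless,
# ground-nesting species; historic losses less strongly trait-structured.
p_prehist = 1 / (1 + np.exp(-(2 * traits.flightless + 1.5 * traits.ground_nest
                              + (traits.log_mass - 2))))
prehistoric_extinct = rng.random(n) < 0.5 * p_prehist
historic_extinct = (~prehistoric_extinct) & (rng.random(n) < 0.25)

historic_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
historic_tree.fit(traits[~prehistoric_extinct],
                  historic_extinct[~prehistoric_extinct])

cumulative_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
cumulative_tree.fit(traits, prehistoric_extinct | historic_extinct)

for name, tree in [("historic", historic_tree), ("cumulative", cumulative_tree)]:
    importance = dict(zip(traits.columns, tree.feature_importances_.round(2)))
    print(name, importance)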
As well as identifying traits associated with past extinctions, the two models were used to predict extinction risk for extant Hawaiian birds. Predictions from the cumulative model were a better match to current IUCN Red List status for extant Hawaiian birds than predictions from the historic model, but the difference between the two models was not significant (ANOVA, cumulative r² = 0.15, recent r² = 0.09; P > 0.85; df = 4, 56; F = 0.33). Extant Hawaiian birds have already been through a strong extinction filter (Pimm et al. 2006) and these past extinctions have relevance for the conservation of the remaining species. Although human environmental impacts on birds and their habitats have changed over time, modern endangered birds within the Pacific region share many of the same ecological characteristics as victims of previous extinctions (Boyer 2010). It seems logical that conservation and restoration policies should incorporate paleoecological information about bird species' ranges and island ecosystems. Fossil evidence suggests that many species currently limited to a single island were much more widespread before human contact (Steadman 2006). Thus, Steadman and Martin (2003) argued that future extinctions may be partially offset by selectively translocating birds to islands where they once occurred. The Marquesas Lorikeet (Vini ultramarina) and the Polynesian Megapode (Megapodius pritchardii) have been reintroduced to well-forested islands in their former range; in New Zealand, similar island sanctuaries have been quite successful (Birdlife International 2004). Steadman and Martin (2003) provided examples of five more species that could benefit from such translocations. Proactive conservation strategies present opportunities for paleoecology to step outside its traditionally retrospective role.

Ecological history and climate change

A better understanding of ecological history can enhance our understanding of how organisms respond to climate change in several ways. Fine-scaled paleoclimate reconstructions of the late Quaternary indicate that climate variability over the past 100 years does not adequately represent the full range of climate changes that occur in ecosystems and, further, that we tend to underestimate the degree of climatic 'teleconnections' (Schoonmaker and Foster 1991, MacDonald et al. 2008). Given that many researchers have employed forward-projected models to predict future climate and ecosystem responses, such models parameterized based only on modern conditions are likely to be misleading (Botkin et al. 2007, Williams and Jackson 2007). For some organisms the paleorecord provides us with multiple examples of responses to climate shifts of varying magnitude and frequency. For example, pollen records can provide well-resolved information on regional shifts in abundance, and distributional movements of plants over the late Quaternary (Schoonmaker and Foster 1991, Davis and Shaw 2001, Williams et al. 2001, Williams et al. 2002). Similarly, there are detailed records for animals (Graham 1986, Graham and Grimm 1990, Graham et al. 1996, Hadly 1996). Of particular interest is the woodrat paleomidden record, which allows fine-grained study of morphological adaptation to climate shifts of varying intensity. Woodrats (genus Neotoma) are small rodent herbivores found throughout much of North America. They are unique in creating middens or debris piles consisting of plant fragments, fecal pellets and other materials held together by evaporated urine ("amberat").
When sheltered in a rock crevice or cave, the contents form an indurated conglomerate, which can be preserved for thousands of years (Betancourt et al. 1990). Microscopic identification and radiocarbon dating of the materials yields estimates of diet and vegetation over time. Moreover, the width of the pellets is highly correlated with body mass, thus allowing estimates of morphological change of populations over time; ancient DNA can also be extracted (Smith et al. 1995, Smith and Betancourt 2006). Middens are ubiquitous across rocky arid regions of the western United States; a well-sampled mountain region may yield upwards of 50-100 discretely dated samples spanning 20,000 years or more. Thus, paleomidden analysis yields a fine-grained characterization of both morphological and genetic responses of woodrat populations to climate fluctuations over thousands of years. We have used the woodrat paleomidden record to investigate the response to late Quaternary climate change over the western United States. We find that in most instances woodrats readily adapted in situ, although there were intervals when temperature alterations apparently exceeded species' thermal tolerances (Smith and Betancourt 2006, Smith et al. 2009). Overall, woodrats follow Bergmann's rule: within a region the body size of woodrat populations was larger during cold temporal intervals and smaller during warmer episodes. As might be expected, sites located near modern range boundaries where animals approach physiological and ecological limits demonstrate more complicated responses. At range boundaries, elevation matters. Populations at higher elevations adapt, while those at low-elevation sites may become extirpated, depending on the severity of environmental shifts. Note that our interpretations of the ecological history of woodrats are firmly rooted in modern ecology. The robust relationship between the body sizes of woodrat populations and ambient environmental temperature is also seen spatially with modern populations of multiple woodrat species (Brown 1968, Brown and Lee 1969, Smith et al. 1998). We are able to attribute the underlying mechanism to physiology because both laboratory and field studies have demonstrated that maximum and lethal temperature scale inversely with body mass (Brown 1968, Brown and Lee 1969, Smith et al. 1995, Smith and Charnov 2001). Moreover, lab studies have yielded estimates of high heritability for woodrat body mass, suggesting that the morphological shifts observed probably include both a genetic and phenotypic component (Smith and Betancourt 2006). Does analysis of paleomiddens provide additional insights over studies of modern animals? Given the extreme sensitivity of woodrat body mass to temperature, could we predict how woodrats would respond to environmental change without recourse to the paleomidden record? We argue no. In recent work, we investigated the influence of late-Quaternary climate change along a steep elevational and environmental gradient in Death Valley, California. Today, this is the hottest and driest area in the Western Hemisphere, with temperatures of 57°C (134°F) recorded. During the late Quaternary, however, pluvial Lake Manly covered much of Death Valley and climate was probably 6-10°C cooler (Mensing 2001, Koehler et al. 2005). Two species of woodrats live in this area today: N. lepida, the desert woodrat, found from the valley floor to elevations of ~1800 m on the surrounding mountains, and N.
cinerea, the bushy-tailed woodrat, restricted to elevations above 1800-2000 m in the Panamint Mountains on the east side of the valley. Over the past few years, we have collected and analyzed a series of 74 paleomiddens recovered from a 1300 m elevational transect along the Grapevine Mountains on the west side of the valley. These span 24,000 years and indicate a complicated ecological history (Fig. 4). Although N. cinerea are currently extirpated on the east side of Death Valley, they were ubiquitous throughout this area from the late Pleistocene to the middle Holocene. They adapted to climate shifts by phenotypic changes in body mass; during colder episodes they were larger, and during warmer intervals, animals were smaller (Fig. 4b).

Figure 4. Open squares with dots: N. lepida (desert woodrat). The two species vary considerably in habitat requirements, body mass and thermal niche dimensions; see text for details. a) Elevational displacement over the past 24,000 years. Note that N. cinerea was widespread at lower elevations during much of the late Pleistocene, but retreated upslope as climate warmed during the early to middle Holocene; it was eventually completely extirpated on the east side of Death Valley. As the larger and behaviorally dominant N. cinerea retreated, N. lepida expanded its elevational range upwards until eventually reaching the limits of its cold tolerance at ~1800 m, a limit maintained into modern times. b) Body mass over time. Error bars are 95% confidence intervals. Note the remarkable and rapid dwarfing of body mass of N. cinerea populations from the full glacial (~21,000 calendar ybp) to the Holocene; for much of this time, the animal occupied the same elevational range, but adapted to climate changes in situ. Panel b redrawn from Smith et al. (2009).

Their presence may have been tied into a much more widespread historical distribution of juniper (Juniperus spp.); we document a downward displacement of ~1,000 m relative to juniper's modern extent in the Amargosa Range. These results suggest a cooler and more mesic habitat association persisting for longer and at lower elevations than previously reported. As climate warmed during the Holocene, N. cinerea adapted and retreated upslope; populations were eventually completely extirpated on the east side of Death Valley, despite the presence of what would appear to be enough high-elevation habitat (Fig. 4a; Smith et al. 2009). Moreover, the range retraction of the larger and behaviorally dominant N. cinerea led to a range expansion of N. lepida, which eventually reached the limits of its cold tolerance at ~1800 m, an upper elevational limit maintained into modern times. Of particular interest is the remarkable and rapid dwarfing of body mass of N. cinerea populations from the full glacial (~21,000 calendar ybp) to the Holocene (Fig. 4b); for much of this time, they occupied the same elevational range, but adapted to climate changes in situ. Similar patterns are seen in other parts of the range (e.g., Smith et al. 1995, Smith and Betancourt 2006). Note that a modern ecologist would detect the presence of only one species (N. lepida) in the area today, occupying an elevational range from −84 to 1800 m, with a maximum body mass of ~250 g. Analysis of present distributions provides a limited perspective when trying to evaluate the potential response of these species to anthropogenic warming; clearly considerable in situ evolutionary adaptation occurred along with extensive distributional/elevational migrations.
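The body-mass estimates behind records like Fig. 4b rest on the tight relationship between fecal-pellet width and body mass noted above. The sketch below shows the general shape of such a calibration, a power-law fit on modern animals applied to dated midden pellets; all numbers are invented for illustration, and the actual calibration is the one published in Smith et al. (1995).

# Illustrative sketch: calibrating woodrat body mass against fecal-pellet
# width from modern animals, then applying it to dated paleomidden pellets.
# The coefficients fitted here come from made-up numbers; the real
# calibration is the published one (Smith et al. 1995).
import numpy as np

# Hypothetical modern calibration data: pellet width (mm) and body mass (g).
pellet_width = np.array([2.8, 3.1, 3.4, 3.7, 4.0, 4.3, 4.6])
body_mass    = np.array([130, 160, 195, 235, 280, 330, 385])

# Power-law calibration, mass = a * width^b, fitted in log-log space.
b, log_a = np.polyfit(np.log(pellet_width), np.log(body_mass), 1)

def mass_from_pellet(width_mm):
    return np.exp(log_a) * width_mm ** b

# Apply to radiocarbon-dated midden samples (ages in calendar years BP).
midden = {21000: 4.5, 15000: 4.2, 9000: 3.6, 2000: 3.3}
for age, width in midden.items():
    print(f"{age:>6} yr BP: pellet {width} mm -> ~{mass_from_pellet(width):.0f} g")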
Our paleomidden work highlights just how dynamic and sensitive body size and range are to thermal shifts, and suggests that both are likely responses to future anthropogenic climate shifts.

Finding time

As Herrera (1992) stated, "Ecologists study thin temporal slices of historically dynamic systems." Certainly, the examples we have provided all underscore this point. The conclusions drawn by each study were markedly different without the incorporation of a longer temporal perspective in the analysis. In each of these instances, the "missing perspective" was biased in a way that compromised the results. Flightlessness, for example, did not come out as a factor predisposing insular birds to extinction in our analysis based on modern data, because the flightless birds had already gone extinct. Yet, if such an analysis were extended to predict extinction risk and direct conservation efforts in another archipelago, the absence of flight ability as a factor in the analysis could have very damaging real-world results. Similarly, given that macroecological studies are often dependent on large-scale distributional information, how much validity do we give studies aiming to tease apart factors influencing the structure and function of ecological systems if major components of the system are missing? Our message is not novel; a number of workers have emphasized the importance of incorporating a longer-term perspective in modern ecology (Schoonmaker and Foster 1991, Herrera 1992, Delcourt and Delcourt 1998, Botkin et al. 2007). And progress is ensuing. Certainly, ecologists increasingly recognize the importance of long-term studies; a number of important field projects have now been running for many decades (e.g., Paine 1994, Brown et al. 2001). Yet, as Tilman (1989) noted, non-linear dynamics and new equilibrium states can complicate interpretations from even the best-designed and longest-running ecological studies. Here, we have provided examples where the temporal scale required was much longer than that achieved by any ecological study. Our purpose in doing so was not to criticize modern ecology, but rather to clearly illustrate why paleoecology is relevant. By providing concrete examples, we hope we have clearly demonstrated the utility of incorporating a historical perspective. There are many resources available to do so; numerous comprehensive databases provide paleo distribution and abundance information for pollen, mammals, and other fossils, making it possible to embed modern ecological work into a deeper context. The extent to which time matters clearly depends on the questions ecologists ask. In some cases, a millennial-scale temporal perspective may not be relevant. But for many of the most pressing ecological issues facing society, an appreciation of past history is imperative. Along with earlier workers (e.g., Schoonmaker and Foster 1991, Herrera 1992, Delcourt and Delcourt 1998, Botkin et al. 2007), we encourage better integration between paleoecologists and ecologists, who are often interested in the same questions. Effective communication between these disciplines remains complicated by the traditional structuring of universities and funding agencies into the physical and natural sciences, which have physically and philosophically segregated paleoecology from ecological or evolutionary disciplines.
Yet, progress on some of the most topical issues requires that we integrate across both macro- and micro-evolutionary and ecological theory and combine both theoretical and empirical perspectives. In much the same way that understanding the specialized running abilities of pronghorn antelope requires an understanding of the context in which they evolved, there may be many more ecological features of extant animals and plants that are due in some part to now-extinct components of ecosystems.
2018-12-19T14:38:35.858Z
2012-01-01T00:00:00.000
{ "year": 2012, "sha1": "4f6a9d7342cbbc6f8037146e22a4e8b7b030fe8e", "oa_license": "CCBY", "oa_url": "https://cloudfront.escholarship.org/dist/prd/content/qt3bg6583c/qt3bg6583c.pdf?t=pfzyfz", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "bba93da2b76bae35e60ee694c63062dde265f6a2", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Geography" ] }
54598905
pes2o/s2orc
v3-fos-license
LARGE-WAVE SIMULATION OF TURBULENT FLOW INDUCED BY WAVE PROPAGATION AND BREAKING OVER CONSTANT SLOPE BED

In the present study, the three-dimensional, incompressible, turbulent, free-surface flow, developing by the propagation of nonlinear breaking waves over a rigid bed of constant slope, is numerically simulated. The main objective is to investigate the process of spilling wave breaking and the characteristics of the developing undertow current employing the large-wave simulation (LWS) method. According to LWS methodology, large velocity and free-surface scales are fully resolved, and subgrid scales are treated by an eddy viscosity model, similar to large-eddy simulation (LES) methodology. The simulations are based on the numerical solution of the unsteady, three-dimensional, Navier-Stokes equations subject to the fully-nonlinear free-surface boundary conditions and the appropriate bottom, inflow and outflow boundary conditions. The case of incoming second-order Stokes waves, normal to the shore, with wavelength to inflow depth ratio λ/d_I ≈ 6.6, wave steepness H/λ ≈ 0.025, bed slope tanβ = 1/35 and Reynolds number (based on inflow water depth) Re_d = 250,000 is investigated. The predictions of the LWS model for the incipient wave breaking parameters, breaking depth and height, are in very good agreement with published experimental measurements. Profiles of the time-averaged horizontal velocity in the surf zone are also in good agreement with the corresponding measured ones, verifying the ability of the model to capture adequately the undertow current.

INTRODUCTION

Wave breaking, one of the major near-shore processes, is responsible for the development of wave-generated currents that result in the initiation and development of sediment transport. Therefore, the investigation of wave breaking, as a phenomenon interconnected with a series of significant coastal processes, is of great interest. Wave breaking takes place when wave height and steepness become very large as the water depth becomes shallower. For the case of spilling breaking, a vortex structure, usually called "surface roller", is formed under the collapsing wavefront just after breaking.

As mentioned above, closely related to wave breaking is the generation of wave-induced currents, i.e., the longshore current, which develops only when the direction of the breaking wave is oblique to the shoreline, and the cross-shore current, which is known as the undertow current. Both of them develop in the surf zone, i.e., the coastal area where wave energy dissipation occurs after breaking. The undertow current owes its existence to the mean shear stress field, developing in the surf zone, in order to balance the pressure gradient and the momentum fluxes due to the wave set-up and the wave height dissipation. Close to the bottom, the current is offshore directed, while near the free surface it is oriented towards the shore, ensuring that the total cross-shore water flux is zero.
The wave propagation and breaking in the coastal zone has been investigated by several researchers for many years, by use of numerical and physical models. As for the numerical simulation of spilling wave breaking, there are two broad categories of models based on the treatment of the surface roller: those that incorporate empirical models, often called surface roller (SR) models, and those that simulate the surface roller as part of the turbulent flow induced by the wave propagation, utilizing one of the turbulence modeling methods, i.e., the Reynolds-Averaged Navier-Stokes (RANS) equations or the large-eddy simulation (LES) method.

The SR model, in which incipient breaking is defined by empirical criteria, may be coupled with Boussinesq (Briganti et al. 2004; Madsen et al. 1997; Schäffer et al. 1993; Veeramony and Svendsen 2000) or Euler (Dimas and Dimakopoulos 2009) equations, giving very good results for wave breaking normal to the shore, but it cannot be easily extended to the case of wave breaking oblique to the shore. RANS models, where all turbulent scales are treated by a closure model, have been applied to the case of two-dimensional turbulent flow during spilling breaking (Bradford 2000; Lin and Liu 1998; Torres-Freyermuth et al. 2007). LES models, where only the large scales of turbulence are resolved while the small ones are modeled, have been used for two-dimensional (Hieu et al. 2004; Zhao et al. 2004) and three-dimensional (Christensen and Deigaard 2001; Christensen 2006) flows. The LES method requires more computational resources than RANS, but generally achieves better results for the dynamics of spilling breakers. Both methods require the use of a scheme for capturing the free-surface elevation, such as the Volume of Fluid (VOF) method and the Marker and Cell (MAC) method.

Recently, Dimakopoulos and Dimas (2011), in order to investigate the turbulent flow induced close to the free surface by oblique wave breaking over a constant slope bed, employed the Large-Wave Simulation (LWS) method coupled with their inviscid flow solver (Dimas and Dimakopoulos 2009). Their model was validated by comparison of their numerical results to corresponding experimental measurements presented in Ting and Kirby (1994; 1996), which were also used for the calibration of the LWS model.
The novelty of the present study is the coupling of the LWS methodology (Dimas and Fialkowski 2000) with a numerical solver of a three-dimensional viscous flow, in order to calculate the wave-induced currents in the surf zone, the generation of which is substantially affected by the presence of the bed shear stress. Specifically, numerical simulations of the three-dimensional, free-surface flow, induced by the propagation of nonlinear breaking waves, normal to the shoreline, over a constant slope, rigid bed, are presented. The LWS methodology is based on the decomposition of the flow variable scales (velocity, pressure and free-surface elevation) into resolved (large) and subgrid (small) scales. One of the main objectives of this work is to overcome some of the limitations of the aforementioned studies, mainly arising from the use of inviscid (Euler) and depth-averaged (Boussinesq) models. For example, inviscid models are not able to capture correctly the profile of the undertow current, since they ignore viscous effects close to the bed, while Boussinesq models are incapable of calculating flow variable profiles over the depth. In the following sections, the flow equations, the main features of the LWS methodology, the numerical method, the simulation results and the main conclusions are presented.

FLOW EQUATIONS

The three-dimensional, incompressible free-surface flow, for a fluid of constant viscosity, is governed by the continuity and the Navier-Stokes equations

∂u_j/∂x_j = 0 (1)

∂u_i/∂t + u_j ∂u_i/∂x_j = -∂p/∂x_i + (1/Re_d) ∂²u_i/(∂x_j ∂x_j) (2)

where i, j = 1, 2, 3, t is time, x_1, x_2 are the horizontal coordinates, x_3 is the vertical coordinate, positive in the direction opposite to gravity, u_1, u_2 and u_3 are the corresponding velocity components, p is the dynamic pressure and Re_d is the Reynolds number. Equations (1) and (2) are expressed in dimensionless form with respect to the inflow depth d_I, the gravity acceleration g and the water density ρ, therefore Re_d = (g d_I)^(1/2) d_I/ν, where ν is the kinematic water viscosity. For viscous flow, the kinematic boundary condition at the free surface, x_3 = η, is

∂η/∂t + u_1 ∂η/∂x_1 + u_2 ∂η/∂x_2 = u_3 (3)

while the normal stress dynamic boundary condition at the free surface, Eq. (4), involves the free-surface elevation η and the Froude number Fr, which under the present dimensionless formulation is equal to one; in Eq. (4) the atmospheric pressure is considered equal to zero. The shear stress dynamic boundary conditions at the free surface, Eqs. (5) and (6), are expressed for each of the horizontal coordinates x_1 and x_2, respectively. In addition, the no-slip and non-penetration boundary conditions at the bottom, x_3 = -d, are, respectively,

u_1 = u_2 = 0 (7)

u_3 = 0 (8)

where d is the bottom depth measured from the still free-surface level.

Given that the free surface is time-dependent, the Cartesian coordinates are transformed, in order for the computational domain to become time-independent, according to the expressions

s_k = x_k and s_3 = 2 (x_3 + d)/(η + d) - 1 (9)

where -1 ≤ s_3 ≤ 1. In the transformed domain, s_3 = 1 corresponds to the free surface and s_3 = -1 to the bottom. By application of Eq. (9), the continuity and Navier-Stokes equations (1) and (2) are transformed into Eqs. (10) and (11), respectively, where k = 1, 2 hereafter.

LWS methodology

As aforementioned, the large-wave simulation (LWS) method is based on the application of a volume filter to the velocity components and pressure, as in LES, and an exclusive surface filtering operation for the free-surface elevation. Therefore, each flow variable, f, is decomposed into resolved (large) scales, denoted by an overbar, and subgrid (small) scales, denoted by a prime, in a manner illustrated in Fig. 1 for the decomposition of the free-surface elevation, η.
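Two dimensionless ingredients of the formulation above, the depth-based Reynolds number Re_d = (g d_I)^(1/2) d_I/ν and the free-surface-following vertical coordinate s_3 in [-1, 1], are easy to sanity-check numerically. The sketch below does so in Python with illustrative values only; the mapping coded for s_3 is the standard sigma-type transformation of Eq. (9), and none of this is the authors' code.

# Minimal sketch: the inflow-depth Reynolds number and the sigma-type mapping
# from physical elevation x3 to the transformed coordinate s3 in [-1, 1],
# with s3 = 1 at the free surface and s3 = -1 at the bottom.
import numpy as np

g, nu = 9.81, 1.0e-6          # gravity (m/s^2), kinematic viscosity (m^2/s)
d_inflow = 0.2                # illustrative inflow depth (m), not the test case
re_d = np.sqrt(g * d_inflow) * d_inflow / nu
print(f"Re_d = {re_d:.3g}")

def to_sigma(x3, eta, depth):
    """Map x3 (bottom -depth .. free surface eta) to s3 = 2(x3+d)/(eta+d) - 1."""
    return 2.0 * (x3 + depth) / (eta + depth) - 1.0

eta, depth = 0.02, 0.4        # instantaneous surface elevation and local depth
for x3 in (-0.4, -0.2, 0.0, 0.02):
    print(f"x3 = {x3:+.2f} m  ->  s3 = {to_sigma(x3, eta, depth):+.3f}")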
The filtering operation is applied on Eqs. (10) and (11), resulting in the continuity and Navier-Stokes equations for the resolved scales of the flow; the resolved-scale momentum equation, Eq. (16), includes a term that collects the subgrid-scale (SGS) contributions, namely the eddy SGS stresses and the wave SGS stresses. The eddy SGS stresses appear both in the LES and the LWS method, while the wave SGS stresses appear exclusively in the LWS method. The transformation, given by Eq. (9), and the filtering procedure are also applied successively to the boundary conditions (3)-(8) at the free surface (s_3 = 1) and the bottom (s_3 = −1), resulting in the transformed boundary conditions for the resolved scales.

It must be noted that the filtering procedure takes into account some simplifying assumptions, which are specified analytically in Dimakopoulos and Dimas (2011). In addition to this, the SGS terms that result from the filtering of the viscous terms of Eq. (11) are considered to be negligible compared to the rest of the terms of the same equation.

In the present study, the eddy and wave SGS stresses, appearing in Eq. (16), are computed by use of Smagorinsky eddy-viscosity models (Rogallo and Moin 1984). Specifically, the model for the eddy SGS stresses is of the form

$$\tau_{ij} = -2\,(C\Delta)^2\,|\bar{S}|\,\bar{S}_{ij},$$

where C = 0.1 is the model parameter, set according to the usual practice in the LES method, Δ = (Δ_1 Δ_2 Δ_3)^{1/3} is the smallest resolved scale based on the grid size, S̄_ij = (1/2)(∂ū_i/∂x_j + ∂ū_j/∂x_i) is the strain-rate tensor of the resolved scales, and |S̄| is its magnitude. The model for the wave SGS stresses, based on the one presented in Dimas and Fialkowski (2000), is of analogous eddy-viscosity form, with model parameter C_η and a modified strain-rate tensor of the resolved scales, S_ij^η, in whose definition δ_ij is the Kronecker delta.
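As a rough illustration of the eddy-viscosity closure above, the sketch below evaluates a Smagorinsky-type SGS stress from a resolved velocity field sampled on a uniform grid; the function name, the finite-difference gradients and the sign convention are illustrative assumptions rather than the paper's implementation, which uses the hybrid finite-difference/spectral discretization described in the next section.

```python
import numpy as np

def smagorinsky_sgs_stress(u, dx, C=0.1):
    """Eddy SGS stresses from a resolved velocity field (Smagorinsky-type closure).

    u  : array of shape (3, N1, N2, N3), resolved velocity components on a uniform grid
    dx : tuple (d1, d2, d3) of grid spacings; Delta = (d1*d2*d3)**(1/3)
    C  : model parameter (0.1 in the paper)

    Returns tau with tau[i, j, ...] = -2 * (C*Delta)**2 * |S| * S_ij
    (a common sign convention, assumed here).
    """
    delta = (dx[0] * dx[1] * dx[2]) ** (1.0 / 3.0)
    # grad[i, j, ...] = d u_i / d x_j, via second-order finite differences
    grad = np.array([np.gradient(u[i], *dx) for i in range(3)])
    S = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))             # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.einsum('ij...,ij...->...', S, S))   # |S| = (2 S_ij S_ij)**0.5
    return -2.0 * (C * delta) ** 2 * S_mag * S

# tiny usage example on a coarse random field
u = np.random.rand(3, 8, 16, 9)
tau = smagorinsky_sgs_stress(u, dx=(0.04, 0.02, 0.05))
print(tau.shape)  # (3, 3, 8, 16, 9)
```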
NUMERICAL METHOD

The flow simulations are based on the numerical solution of the transformed Navier-Stokes equations, which is achieved by use of a fractional time-step scheme for the temporal discretization and a hybrid scheme for the spatial discretization. The hybrid scheme includes central finite differences, on a uniform grid with size Δs_1, for the discretization along the streamwise direction s_1, a pseudo-spectral approximation method with Fourier modes along the spanwise direction s_2, and a pseudo-spectral approximation method with Chebyshev polynomials along the vertical direction s_3.

The transformed Navier-Stokes equations (16) can be written in the compact form of Eq. (24), which groups the nonlinear term, A_i, the SGS term, T_i, the gradient of the transformed pressure head, and the viscous term, V_i, the latter scaled by 1/Re_d through the transformed Laplacian operator. The time-splitting scheme for the temporal discretization, in which each time-step consists of three stages, achieves the calculation of the velocity field at the next time step successively, by adding the corresponding corrections of each of the three stages to the field of the previous time-step. Moreover, the dynamic pressure field is obtained in the second stage of each time step, while the free-surface elevation is calculated from the kinematic boundary condition at the end of each time step.

At the first stage of each time-step, the nonlinear term, A_i, the SGS term, T_i, and the viscous term, V_i, of the transformed equations of motion (24) are treated explicitly by an Euler scheme. At the second stage, an implicit Euler scheme is used for the treatment of the pressure head term of Eq. (24), which results in a generalized Poisson equation for the pressure head by also satisfying the transformed continuity equation. The transformed dynamic (normal stress) free-surface condition and the non-penetration bottom condition are imposed at this stage. At the third stage, the remaining viscous terms of Eq. (24) are treated by an implicit Euler scheme, satisfying the transformed dynamic (tangential stress) free-surface and bottom conditions. According to the hybrid scheme for the spatial discretization, each flow variable f (velocities and pressure) is approximated by the expression

$$\bar f(s_1,s_2,s_3,t)\;=\;\sum_{m}\;\sum_{n=0}^{N}\hat f_{mn}(s_1,t)\,T_n(s_3)\,e^{\,i 2\pi m s_2/L_2}, \qquad (25)$$

where f̂_mn is the Chebyshev-Fourier transform of f, M is the number of Fourier modes, L_2 = M·Δ_2 is the length of the computational domain in s_2, T_n is the Chebyshev polynomial of order n, and N is the highest order of the Chebyshev polynomials. The transformations between physical and spectral space are performed by a Fast Fourier Transform algorithm (Press et al. 1992).

Application of Eq. (25) for the discretization of Eq. (24) leads to the formation of a system of algebraic equations for each of the transformed flow variables. This system may be divided into M independent subsystems, one for each Fourier mode, which can be solved in parallel due to the decoupling of the Fourier modes. Each subsystem is solved at each time-step using an iterative generalized Gauss-Seidel method. The matrix of coefficients, A_m, of each subsystem is band diagonal and is decomposed once at the beginning of the computation by using the LU-decomposition method.

In the present work, the propagation, transformation and spilling breaking of incoming second-order Stokes waves over a constant slope bed is simulated. As shown in the sketch of the computational domain (Fig. 2), a flat bed region of length L_I and constant depth d_I, which ensures the development of the incoming waves, is followed by the inclined region of the bed. Then, a flat region of length L_E and constant depth d_E << d_I (the formulation allows the outflow depth d_E to be small but nonzero) is considered in order to simulate the swash zone of a coastal area, where the waves are completely damped. For this reason, two overlapping zones are placed in the outflow region: a wave absorption zone of length L_A ≈ L_E, which ensures that waves are not reflected by the outflow boundary (Dimas and Dimakopoulos 2009), and a velocity attenuation (slowdown) zone of length L_D. For the numerical solution of Eq. (24), a reduced value of Re_d is used within the slowdown zone, which corresponds to an increased value of the kinematic viscosity.
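As a small illustration of the hybrid spectral representation of Eq. (25), the sketch below expands a field sampled on Fourier (spanwise) and Chebyshev-Gauss-Lobatto (vertical) points into Fourier modes and Chebyshev coefficients with NumPy; the ordering of the transforms, the normalizations and the names are assumptions for illustration only, and the finite-difference direction s_1 is omitted.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

M, N = 32, 16                                  # Fourier modes (spanwise), highest Chebyshev order (vertical)
s2 = np.arange(M) / M                          # periodic spanwise points on a unit-length domain
s3 = np.cos(np.pi * np.arange(N + 1) / N)      # Chebyshev-Gauss-Lobatto points in [-1, 1]

# sample field f(s2, s3): one spanwise harmonic times a smooth vertical profile
f = np.cos(2 * np.pi * 3 * s2)[:, None] * (1.0 - s3**2)[None, :]

# Chebyshev coefficients along the vertical direction, one series per spanwise point
cheb = np.array([C.chebfit(s3, f[j, :], deg=N) for j in range(M)])   # shape (M, N+1)

# Fourier transform of the Chebyshev coefficients along the spanwise direction
f_hat = np.fft.rfft(cheb, axis=0) / M                                # shape (M//2+1, N+1)
print(f_hat.shape)

# round trip back to physical space as a consistency check
cheb_back = np.fft.irfft(f_hat * M, n=M, axis=0)
f_back = np.array([C.chebval(s3, cheb_back[j]) for j in range(M)])
print(np.allclose(f_back, f))                                        # True (up to round-off)
```

Because the Fourier modes decouple, each column of such a coefficient array can be advanced independently, which is what allows the M subsystems mentioned above to be solved in parallel.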
RESULTS

The validation of the inviscid version of the LWS methodology, coupled with the Euler equations, was performed by Dimakopoulos and Dimas (2011), and one of the main results of their work was the calibration of the parameter C_η used in the model for the wave SGS stresses. The chosen value, C_η = 0.4, resulted from the comparison of their numerical results to corresponding experimental measurements, presented in Ting and Kirby (1994; 1996), for the case of spilling wave breakers propagating normal to the shoreline over a beach of constant slope tanβ = 1/35.

In the present study, the accuracy and efficiency of the viscous version of the LWS methodology, coupled with the Navier-Stokes equations, is investigated by performing numerical simulations of the normal to the shoreline propagation, transformation and spilling breaking of incoming second-order Stokes waves over a bed of constant slope tanβ = 1/35. Our numerical results are also compared to the experimental measurements conducted by Ting and Kirby (1994), while the suggested value of C_η = 0.4 (Dimakopoulos and Dimas 2011) is adopted. It must be noted that all of the numerical results presented in this study are spanwise averaged.

The experimental flow parameters (Ting and Kirby 1994) for the case of spilling breaking are summarized in the following: wave inflow depth, d_I = 0.4 m, wave height and period, H_I = 0.125 m and T = 2 s, respectively, which correspond to wave height and wavelength at deep water, H_o = 0.127 m and λ_o = 6.245 m, respectively. In our case, the wave parameters at deep water are identical to those of Ting and Kirby (1994), but a larger inflow depth d_I = 0.7 m is considered, since the Stokes wave theory is utilized for the incoming waves, which are of height H_I = 0.118 m. The parameters of the incoming waves are rendered dimensionless by d_I and g, and the resulting values are H_I = 0.168, T = 7.487 and λ = 6.605. The Iribarren number is ξ_o = tanβ(λ_o/H_o)^{1/2} = 0.2, which corresponds to a spilling breaker of medium strength, while a value of Re_d = 250,000 is considered. In the slowdown zone, a value of Re_d divided by 100 is utilized. The total length of the computational domain is L = 60, the flat inflow region has length L_I = 15, while the swash zone is of length L_E = 11.05 and depth d_E = 0.03. The wave absorption zone and the slowdown zone have lengths L_A = 11 and L_D = 2, respectively. The numerical parameters are: Δ_1 = 0.04, N = 128, M = 32, Δ_2 = 0.02 and Δt = 10^{-4}.
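The nondimensionalization of the wave parameters quoted above can be checked with a few lines of Python; the deep-water wavelength from linear theory and the Iribarren number follow from the stated laboratory values (the script below is an illustrative check, not part of the solver).

```python
import numpy as np

g = 9.81                       # m/s^2
d_I, H_I, T = 0.7, 0.118, 2.0  # inflow depth (m), incoming wave height (m), period (s)
H_o, tan_beta = 0.127, 1.0 / 35.0

# scales: length d_I, velocity (g*d_I)**0.5, time (d_I/g)**0.5
H_star = H_I / d_I                       # ~0.17 (0.168 quoted in the paper)
T_star = T / np.sqrt(d_I / g)            # -> 7.487
lam_o = g * T**2 / (2.0 * np.pi)         # deep-water wavelength (linear theory) -> 6.245 m
xi_o = tan_beta * np.sqrt(lam_o / H_o)   # Iribarren number -> 0.20 (spilling breaker)
Re_d = np.sqrt(g * d_I) * d_I / 1.0e-6   # with nu = 1e-6 m^2/s -> about 1.8e6
print(round(H_star, 3), round(T_star, 3), round(lam_o, 3), round(xi_o, 2), f"{Re_d:.2e}")
```

The physical Re_d is of order 1.8 × 10^6, i.e., about 7 times the reduced value of 250,000 used in the simulations, which is the difference invoked below when comparing the undertow profiles with the measurements.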
In Fig. 3, snapshots of the resolved free-surface elevation, η̄, at several time instants after 20 wave periods are presented and compared to the experimental measurements of maximum (wave crest) and minimum (wave trough) values of the free-surface elevation (Ting and Kirby 1994). The numerical model predicts accurately the breaking depth d_b = 0.28, which corresponds to the position x_1 = 40.2, but underestimates the breaking height, as indicated by the deviation of about 9% of the breaking free-surface elevation, which is predicted at about 0.176. This prediction is still better than the ones in Lin and Liu (1998), Bradford (2000) and Christensen (2006). At the outer surf zone (40 < x_1 < 44), the numerical model underestimates the wave height dissipation, as opposed to the inner surf zone (x_1 > 44), where the prediction of the model for the height dissipation is very good. Generally, the numerical results for the minimum free-surface elevation are in very good agreement with the trough envelope of the experimental data. In the outer coastal zone, the numerical results for wave shoaling agree adequately with the corresponding experimental data; however, LWS predicts a monotonic wave height increase during shoaling. In the surf zone, the numerical results predict very well the wave setup.

In Fig. 4, three typical snapshots of the spanwise vorticity distribution, ω_2, in the surf zone are presented at the time that incipient wave breaking occurs, and at time instants t = 0.5T and 0.8T after incipient breaking. Negative vorticity is generated at the breaking wavefront during incipient wave breaking (wave crest at x_1 = 40.2), which corresponds to clockwise recirculation of the fluid. The strength of the surface roller increases, as the spilling breaker propagates towards the inner surf zone, until it reaches its full development (Fig. 4b), and subsequently, as indicated in Fig. 4c, vorticity is advected and diffused in the roller wake. The vorticity distribution close to the sloping bed, due to the bed shear stress, is also shown in Fig. 4, with its amplitude being considerably larger (up to 10 times) than the one developing in the surface roller.

According to the LWS methodology, wave breaking and dissipation in the surf zone are coupled with the generation and combined action of the eddy and wave SGS stresses. The distribution of the wave stress τ_13, which is the most significant SGS stress in terms of its magnitude, in the surf zone at two time instants (at the time of incipient breaking and 0.5T later) is presented in Fig. 5. It is indicated that the development of the surface roller (see Fig. 4) is connected to the continuous increase of the magnitude of the wave stress τ_13 at the breaking wavefront, which takes place during a time interval Δt = 0.5T after the incipient breaking, corresponding to the region 1 ≥ d/d_b ≥ 0.78. For times greater than 0.5T after breaking, its strength attenuates gradually before it vanishes in the inner surf zone (x_1 > 45). Similar behavior is also exhibited by the other SGS stresses on the x_1-x_3 plane, while stresses in the x_2 direction are an order of magnitude smaller.

As mentioned in the introduction, the development of the undertow current, resulting from wave breaking, is constrained by the fact that the period-averaged cross-shore water flux is zero. Fig. 6 presents a typical period-averaged velocity field in the surf zone, where it is indicated that the numerical model is able to capture the occurrence of the undertow current. The period-averaged velocity distribution indicates the presence of an onshore-directed current, which is due to the mechanisms of the Eulerian drift and the surface roller, in the upper layer of the water depth, and an offshore-directed current, i.e., the undertow current, near the bed, which balances the onshore flux. Very close to the bed, a steady current of weak strength, the so-called wave boundary layer streaming, exists offshore of the breaking region and over part of the outer surf zone, and is directed towards the shoreline. In Fig. 7, LWS-predicted profiles of the undertow current at four positions in the surf zone are compared to corresponding experimental measurements presented in figure 5 of Ting and Kirby (1994) for the case of a spilling breaker. The period-averaged horizontal velocity, U_1, is normalized with respect to the breaking depth, d_b. Overall, the LWS prediction is deemed adequate, since the order of magnitude as well as the gradient of the numerical profiles agree well with the experimental ones. A significant deviation related to the depth of the minimum of the velocity is observed between numerical and measured profiles, as indicated in Figs. 7(c) and 7(d), which may be attributed to the large difference between our Re_d value and the one of the experiments, which is about 7 times larger.
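The period- and spanwise-averaged quantities discussed above (mean velocity field, undertow profiles) amount to simple averaging operations on the stored flow fields; a minimal sketch follows, with array shapes and names chosen here purely for illustration.

```python
import numpy as np

def period_and_span_average(u1, dt, T):
    """Period-averaged, spanwise-averaged horizontal velocity.

    u1 : array of shape (n_t, N1, N2, N3) with the resolved streamwise velocity,
         sampled every dt over (at least) one wave period T
    Returns an array of shape (N1, N3): the mean over one period and over the spanwise (x2) direction.
    """
    n_per_period = int(round(T / dt))
    u1_last_period = u1[-n_per_period:]      # keep (approximately) one wave period of samples
    u1_bar = u1_last_period.mean(axis=0)     # period average
    return u1_bar.mean(axis=1)               # spanwise average

# toy usage: a synthetic oscillating field whose mean is a weak offshore current
dt, T = 1.0e-2, 7.487
t = np.arange(0, 3 * T, dt)
u1 = -0.05 + 0.3 * np.sin(2 * np.pi * t / T)[:, None, None, None] * np.ones((1, 10, 4, 6))
U1 = period_and_span_average(u1, dt, T)
print(U1.shape, float(U1.mean()))   # (10, 6), close to -0.05
```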
Finally, in Fig. 8, the evolution of the resolved free-surface elevation and the bed shear stress, τ_b, distribution at several time instants during a wave period are presented. The amplitude of the bed stress variation is found to be substantially increased over the sloping bed, especially in the surf zone, becoming up to six times larger (in the region around x_1 = 42.5) than the corresponding amplitude in the flat region (x_1 < 15). The magnitude of τ_b decreases in the inner surf zone, following the wave height attenuation, while the decrease of the wavelength with decreasing water depth is indicated by the bold-lined snapshot. The position of maximum bed stress does not coincide with the breaking position, where the maximum free-surface elevation is located, presenting a phase difference of about 0.5T.

CONCLUSIONS

A numerical model for the simulation of wave propagation and spilling breaking over a constant slope bed is presented. The model is formed by the coupling of the large-wave simulation (LWS) method with a numerical solver of the three-dimensional Navier-Stokes equations. According to the LWS methodology, the wave and eddy subgrid (SGS) stresses are modeled, by use of eddy-viscosity models, and then applied to the viscous flow solver, in order to capture wave breaking and wave energy dissipation in the surf zone. In general, the LWS model predictions related to the characteristics of incipient breaking (breaking depth and height) are very good. The validation of our results is based on the comparison with corresponding experimental measurements conducted by Ting and Kirby (1994). The development of the surface roller in the breaking wavefront is connected to the increase of the strength of the SGS stresses in the outer surf zone and their successive decrease at shallower depths close to the shore. The period-averaged velocity field in the surf zone reproduces very well the qualitative characteristics of the undertow current generated along the cross-shore direction by the wave breaking, while the quantitative comparison to the corresponding experimental data (Ting and Kirby 1994) is good. Finally, it is found that the magnitude of the bed shear stress increases substantially in the surf zone, becoming up to six times larger than the corresponding one in the inflow flat region.

Figure 1. Decomposition of the free-surface elevation for the case of a spilling breaker.

Figure 2. Sketch of the computational flow domain.

Figure 3. Snapshots of the resolved free-surface elevation during shoaling and in the surf zone, over a bed of constant slope (tanβ = 1/35). Symbols correspond to the experimental free-surface envelope presented in Ting and Kirby (1994).

Figure 4. Vorticity field in the surf zone, at three time instants during the wave period, T. Snapshot (a) corresponds to incipient breaking, (b) to time instant t = 0.5T and (c) to t = 0.8T after incipient breaking. Dashed contour lines correspond to negative vorticity, solid lines to positive.

Figure 5. Snapshots of the wave SGS stress, τ_13, that correspond to (a) and (b) of Fig. 4. Note that the first one includes two wave crests (at x_1 ≈ 40 and 44.5), while contour lines are plotted at equal intervals from 0 to 0.001 with a spacing of 0.0002.

Figure 6. Period-averaged and spanwise-averaged velocity distribution in the surf zone. Breaking occurs at x_1 ≈ 40.
Figure 7. Normalized period-averaged horizontal velocity profiles at four positions in the surf zone, compared to the corresponding experimental data (symbols) presented in figure 5 (c)-(e) of Ting and Kirby (1994).

Figure 8. Snapshots of the resolved free-surface elevation and the bed shear stress distribution. The bold lines correspond to the same time instant, while the bed slope starts at x_1 = 15.
Alzheimer's Disease: Treatment Strategies and Their Limitations

Alzheimer's disease (AD) is the most frequent neurodegenerative disease and is becoming a major public health problem all over the world. Many therapeutic strategies have been explored for several decades; however, there is still no curative treatment, and the priority remains prevention. In this review, we present an update on the clinical and physiological phases of the AD spectrum, modifiable and non-modifiable risk factors, and AD treatment with a focus on prevention strategies, then the research models used in AD, followed by a discussion of treatment limitations. The prevention methods can significantly slow AD evolution and are currently the best strategy possible before the advanced stages of the disease. Indeed, current drug treatments have only symptomatic effects, and disease-modifying treatments are not yet available. Drug delivery to the central nervous system remains a complex process and represents a challenge for developing therapeutic and preventive strategies. Studies are underway to test new techniques to facilitate the bioavailability of molecules to the brain. After a deep study of the literature, we find the use of soft nanoparticles, in particular nanoliposomes and exosomes, to be an innovative approach for preventive and therapeutic strategies in reducing the risk of AD and solving problems of brain bioavailability. Studies show the promising role of nanoliposomes and exosomes as smart drug delivery systems able to penetrate the blood-brain barrier and target brain tissues. Finally, the different drug administration techniques for neurological disorders are discussed. One of the promising therapeutic methods is the intranasal administration strategy, which should be used for preclinical and clinical studies of neurodegenerative diseases.

Introduction

Neurodegenerative diseases (ND) represent a major current health challenge due to the aging and the lifestyle of the population, the number of people affected, the impact of these diseases on the life of the patients and their caretakers, and the financial impact that these entail [1,2]. Worldwide, more than 50 million people are affected by neurodegenerative diseases, and this number will almost triple to 152 million in 2050 if no effective preventive or therapeutic solutions are found. Alzheimer's disease (AD) is considered the most frequent type of neurodegenerative disease, occurring in 60% to 80% of all cases [3]. AD, discovered in 1907, has multiple etiologies, but the exact causes of the disease have not yet been clearly established. In addition, no curative treatment has been developed, even more than a century later. There are two forms of AD: (1) the genetic form, or autosomal-dominant AD (ADAD), which occurs before the age of 65, representing less than 1% of cases; (2) the sporadic form, or sporadic Alzheimer's disease (SAD), which usually occurs later in life and accounts for the vast majority of cases. Drug delivery is a further challenge, since drugs taken by mouth undergo degradation by enzymes in the mouth and stomach, digestive juices, bile acid, and intestinal microorganisms [41,42]. In this work, we review the clinical features and physiopathology of SAD, including the research models of AD, followed by a discussion of current treatments focusing on prevention strategies and the use of soft nanoparticles (NP), such as nanoliposomes (NL) and exosomes, as innovative approaches for a preventive strategy in reducing the risk of AD. AD is defined as a neurodegenerative disease that develops over several years, leading to insidious-onset cognitive disorders.
It is characterized by the appearance of multiple cognitive deficits progressively increasing with time, including memory deterioration in the acquisition of new information and in the recovery of information [13], as well as the association of one or more of the following dysfunctions: aphasia, apraxia, agnosia, or dysexecutive syndrome. AD can be considered an amnestic syndrome of the hippocampal type. These neuropsychological disorders cause impairment in activities of daily living and represent a cognitive and functional decline compared to the previous levels of the individual [16]. Understanding Clinical Researchers have, in recent years, shown a growing interest in neuropsychiatric symptoms and behavioral disorders such as psychotic symptoms, depression, apathy, aggression, and sleep disturbances [43][44][45]. As a result, in 1996, the concept of behavioral and psychological symptoms of dementia was presented by the International Psychogeriatric Association to designate symptoms of disturbance of perception, the content of thought, mood, and behavior frequently appearing in subjects with ND [46]. AD can be viewed as a process of chemical, physiological and anatomical changes in the brain that can be identified many years before the onset of clinically noticeable cognitive-behavioral syndromes (CBS) [47] (Figure 1). Pathophysiology AD results from significant structural and functional damage in the CNS. Two distinct histological lesions have been identified in AD etiology: amyloid plaques and NFT. NFT formation begins in the internal part of the temporal lobe. The lesions may even be present in these hippocampal structures while the person demonstrates no symptoms of cognitive decline. The NFT then evolves in the external part of the temporal lobe before spreading to the posterior cortical associative areas and the entire cortex [6]. This topography of the lesions corresponds to the evolution of the symptoms of AD [48]. On the other hand, unlike the topography of NFT, the distribution of amyloid deposits is more diffuse. Indeed, these are found first in the neocortex, followed by the hippocampal region, the subcortical nuclei, and finally, in the cerebellum [5]. Amyloid plaques result from the aggregation and abnormal accumulation of the Aβ peptide in the extracellular medium outside the neurons ( Figure 1A). Aβ peptide is produced by the amyloidogenic pathway following the sequential proteolytic cleavage of amyloid precursor protein (APP) by βand γ-secretases [49]. Oligomers of soluble Aβ could interact with the cell surface potentially by direct membrane interaction or by binding a putative receptor leading to impairment of signal-transduction cascades, modified neuronal activities, and release of neurotoxic mediators by microglia, leading to early altered synaptic functions and plasticity [50]. Indeed, the initial oligomerization of soluble Aβ has been found to instigate synapse deterioration, inhibit axonal transport, impact astrocytes and microglia, plasticity dysfunction, oxidative stress, insulin resistance, aberrant tau phosphorylation, cholinergic impairment, selective neuron death [51,52]. Other factors, including abnormal lipid and glucose metabolism, neuroinflammation, cerebrovascular abnormalities [53], and endosomal pathway blockade [54], can also contribute to AD pathology in the brain [55]. 
The vascular system becomes impaired and fails to deliver sufficient blood and nutrients to the brain and clear away the debris of metabolic products, leading to chronic inflammation by the activation of astrocytes and microglia. The E4 isoform of the lipid-carrier protein apolipoprotein (apo)E, which is a significant AD risk factor, has been associated with increased Aβ production and impaired Aβ clearance. ApoE4 itself can be cleaved into toxic fragments that affect the cytoskeleton and impair mitochondrial functions [56], which may have a direct consequence on ApoE-mediated clearance of Aβ. Hyperphosphorylation of Tau proteins (tubulin-associated unit, Tau) in neurons of the CNS leads to the abnormal conformation of Tau into pairs of helical filaments, which, in contrast to amyloid plaques, aggregate inside the neurons to form NFT. Tau proteins detach from microtubules, which disrupts intracellular transport, causing dysfunction of neurons and impaired brain activity, which can even lead to macroscopic brain atrophy and death [12]. Interestingly, unlike NFT, no correspondence was observed between the distribution of amyloid deposits and the symptoms of AD patients [48]. The amyloid cascade hypothesis suggests that the anomalous accumulation of Aβ peptide and the formation of amyloid plaques induce NFT formation and make neurons more sensitive to neurotoxic effects and neuronal death [57]. Timeline models have been proposed to indicate the process of events carried out during the different stages of SAD [47,58]. For several years, research has shown the role of pathological oligomers in the pathogenesis of AD. This recent understanding of oligomers is supplanting the amyloid cascade hypothesis. While abnormal metabolism of Aβ and Tau proteins is a hallmark of AD and among the most trusted identifiers and predictors of AD, a recent paradigm shift has occurred that emphasizes the initial and central role of AβO in AD pathogenesis [11]. Alternative mechanistic models propose that anomalous accumulations of Aβ protein are not necessarily responsible for neurodegeneration. Indeed, amyloid pathology and tauopathy have been shown to appear independently under the influence of genetic and environmental factors [59,60]. Although the sequence of events remains unclear, the presence of both Aβ plaques and NFT processes undoubtedly accelerates AD-related neurodegenerative processes.

Initially, before the appearance of Aβ plaques and NFT, the presence of soluble aggregates of Aβ leads to the destruction of synapses, dendritic spines and neurons, and to dysfunction of the major neurotransmitter systems of the central nervous system, glutamate and acetylcholine [61]. This includes a progressive loss of cholinergic innervation caused by dendritic, synaptic, and axonal degeneration [53,62]. These aggregates cause glutamatergic excitotoxicity and modify glutamatergic synapses and the plasticity process. Cholinergic and glutamatergic systems are important for processing memory, learning, and other aspects of cognition and play a key role in neuronal plasticity. Indeed, glutamatergic and cholinergic deficits are strongly correlated with cognitive deterioration in AD [63]. The cholinergic deficit sets in at an early histopathologic stage of AD, before the presence of clinical symptoms [64]. Because pathogenic soluble aggregates of Aβ appear early in the sequence of AD, they can represent interesting targets for therapeutics and diagnostics [51].
Figure 1. (B) Aging and metabolic diseases alter brain metabolism, progressively leading to AD (reprinted from [65], Copyright 2017, Yonsei University College of Medicine). (C) Differential diagnosis of AD using neuroimaging biomarkers (reprinted with permission [66], Copyright 2009, Elsevier).

Diagnostic Criteria for AD

2.2.1. Criteria of the National Institute of Aging and the Alzheimer's Association (NIA-AA)

In 1984, the diagnostic criteria of the National Institute on Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association were published. They were based mainly on clinical-pathologic criteria, particularly memory disorders [67]. However, these criteria did not reflect the clinical reality of all patients. Individuals may exhibit evidence of AD biomarkers without cognitive impairment and vice versa. In 2011, new diagnostic criteria for mild cognitive impairment (MCI) and AD were established by the National Institute of Aging and the Alzheimer's Association [68,69], with the diagnosis of MCI currently based on clinical, functional, and cognitive examinations. The most typical MCI associated with AD pathology is amnestic MCI (single- or multi-domain amnestic MCI). In amnestic MCI (aMCI), a prodromal stage of AD, there is a memory disorder where the cognitive abilities of the person are inferior in comparison to their age group, gender, and level of education. Nevertheless, this cognitive deficit does not fulfill the dementia criteria. AD can be diagnosed before the onset of dementia if other elements are detected, including amnestic hippocampal syndrome and specific AD biomarkers (Figure 1C).

Specific AD Biomarkers

The NIA-AA and the International Working Group (IWG) consider AD as a slowly progressive neurological disease that begins before the onset of clinical symptoms. Indeed, AD is represented as a continuum that evolves in three stages, asymptomatic (preclinical AD), predementia (MCI due to AD), and dementia (due to AD) [22,70,71]. Although the diagnosis of AD is essentially clinical, the certainty is dependent on evidence of biomarkers indicative of AD-related physiopathological processes.
Indeed, the new diagnostic criteria require the use of cerebrospinal fluid (CSF) biomarkers, including total and hyperphosphorylated Tau protein as well as Aβ42 or Aβ42/Aβ40 ratio and positron emission tomography (PET) of τau and amyloid to attribute a probability (high, medium, or low) of the underlying AD-related neurodegenerative processes contributing to the clinical observations [1,22,72]. In 2018, a new AD biological framework and model of pathologic AD biomarkers conceptualized a progressive sequence of neurophysiological, biochemical, and neuroanatomical abnormalities that can be identified years before noticeable CBS. While not the sole factors, abnormal deposits of protein Aβ and Tau remain hallmarks of AD pathology and make it possible to differentiate AD from other neurodegenerative diseases [71]. Among the pathophysiological markers of AD [58] include the specific presence of amyloid pathology (decreased Aβ1-42 peptide CSF levels or accumulation of the amyloid tracer in PET imaging) and Tau pathology (elevated Tau and phosphorylated Tau protein CSF levels or accumulation of the Tau tracer in PET imaging). Moreover, topographical markers for AD include volume modifications in the brain (temporoparietal, hippocampal atrophy, cortical thickness) assessed by magnetic resonance imaging (MRI) and glucose hypometabolism measured by fluorodeoxyglucose (FDG)-PET [73]. The classification by NIA-AA define and differentiate the Alzheimer's spectrum of these biomarker criteria using these three amyloid, tau, and neurodegeneration biomarkers. The first neuropathological signs of AD can occur 15 or 20 years before the disease begins [74]. Early Aβ modifications, including oligomerization in the brain, provoke dysfunction in dendrites, axonal processes, and synapses. Nevertheless, the origin of abnormal Aβ and oligomer formation is still unclear [14]. The lesions form slowly without clinical expression (no complaints from patients about troubles in everyday life) [12,14,15]. Preclinical AD is defined as Aβ biomarker evidence of AD pathological changes (PET amyloid retention, low CSF Aβ42) in cognitively healthy individuals or with subtle cognitive changes [71]. Several studies highlight the concept of cognitive reserve in AD, demonstrating that cognition remains stable despite brain Aβ lesions due to compensatory mechanisms (particularly linked with education level) until the early symptomatic stage (MCI). AD can develop at an advanced or very advanced age in people whose cognitive reserve is high. Indeed, these patients are able to compensate longer for the consequences of AD, thanks to the activation of a more extensive and effective neural network, the cognitive symptoms appearing at a more advanced stage [75][76][77][78]. For the asymptomatic stage, these criteria are used primarily for clinical research protocols rather than for diagnostic purposes. The Early Symptomatic Stage: Amnesic Mild Cognitive Impairment In the aMCI, people have cognitive complaints or deficits detected by their entourage. However, there is no significant impact on activities of daily living. An accurate and timely diagnosis of AD is needed, which would allow non-pharmacological and/or drug therapies in the MCI stage or even in the preclinical stage. Many clinical studies are in progress to address this issue [79,80]. In the early symptomatic stage, tests reveal a positive sign of the amyloid and Tau pathology biomarkers [71] without neurodegenerative syndromes. 
Con-versely, the absence of these biomarkers is associated with a low probability. The presence of the two CSF biomarkers, amyloidopathy (low CSF Aβ levels) and neuronal degeneration (CSF Tau and P-Tau levels), are considered indicators of a high risk of conversion to AD [71]. AD In the typical form of SAD, the patient exhibits all AD symptoms, including a progressive and significant disorder of episodic memory associated with other cognitive disorders (executive functions, apraxia, aphasia, and agnosia). In many cases, people also have neuropsychiatric disorders such as apathy (49%), depression (42%), aggression (40%), anxiety (39%), and sleep disorders (39%) [81]. These disorders have a significant impact on autonomy requiring external aid to perform the acts of everyday life. At the stage of dementia, the diagnosis is made based on clinical behavioral tests, where biomarkers are used only to increase the threshold of certainty of the diagnosis for atypical forms or young subjects [22]. The presence of biomarkers can be used to indicate the severity of the AD [7,71]: decreased CSF Aβ levels, increased CSF Tau and/or P-Tau levels, cortical thinning and hippocampal atrophy based on MRI, hypometabolism or hypoperfusion of posterior cingulate and temporoparietal cortex (FDG-PET), and detection of cortical amyloid fixation (PET), adding to neurodegeneration syndromes [71,82]. Finally, the certainty of AD diagnosis is evaluated by a level of probability, with definitive evidence provided only by biopsy or autopsy. Risk Factors of SAD Based on current research, AD etiology is multifactorial genetic and environmental risk factors that can be categorized as modifiable and non-modifiable factors [83][84][85][86][87]. Non-Modifiable Risk Factors Several studies suggest that the greatest non-modifiable risk factors for SAD are age, the APOE-ε4 allele, and gender [4,88]. Age The main risk factor for SAD is age. Indeed, increased life expectancy is correlated with a higher probability of developing neurodegenerative diseases, including AD [89,90]. In normal aging, the structure of the brain changes in membrane fluidity and lipid composition, regional brain volume, density, cortical thickness, and microstructure of the white and grey matter. There is a progressive loss of neuronal synapses, leading to a neuronal density decrease. Genetic Risk Factor While ADAD is caused by the mutation of one of the genes involved in amyloid metabolism, including amyloid precursor protein (APP), presenilin 1 (PSEN1), or presenilin 2 (PSEN2), the main genetic risk factor in SAD is the APOE gene [91][92][93][94]. APOE is associated with the transport of lipids, including cholesterol, in peripheral tissues and in the central nervous system. By virtue of its role in astrocyte-derived cholesterol transport to neurons, it ensures lipid delivery to neurons and, thus, membrane homeostasis, a critical process in neuron and brain lesion repair. The APOE gene has three alleles: ε2, ε3, and ε4. The ε4 allele is a genetic risk factor of SAD involved with a high risk of AD, found to be associated with atrophic hippocampal volume, abnormal accumulation of Aβ protein and increased amyloid deposits, and cerebral hypometabolism [95]. The ε4 has been linked to changes in neurotoxic and neuroprotective mechanisms, including Aβ peptide metabolism, aggregation, toxicity, tauopathy, synaptic plasticity, lipid transport, vascular integrity, and neuroinflammation [96]. 
It has been shown that having the ε4 allele increases four-fold the risk of developing AD, whereas the ε2 allele decreases AD risk; the ε3 allele has no effect on AD risk. However, some ε4 allele carriers never manifest AD, which indicates that other as yet to be identified determinants (genetic or otherwise) may be involved in AD development [88]. Gender AD disease prevalence and symptom progression are disproportionately higher in women [97][98][99]. Moreover, there are different risk factors for women and men (APOE genotype, cardiovascular disease, depression, hormonal depletion, sociocultural factors, sex-specific risk factors for women, and performance in verbal memory), which may contribute to this difference. To understand the influence of gender differences, studies are needed to determine gender influence on biomarker evolution across the life span, including cognitive abilities, neuroimaging, CSF, and blood-based biomarkers of AD, particularly at earlier ages [98]. In addition, preclinical and clinical studies in the development of AD therapeutics for both genders are needed. Modifiable Risk Factors Modifiable risk factors are of considerable interest since they are a lever for the action of preventive strategies. Cardiovascular damage is a risk factor for neurodegenerative diseases, and since the brain is supplied by a large network of blood vessels, a healthy cardiovascular system can be considered neuroprotective [100]. This could explain why part of the risk factors for cardiovascular disease are in common with AD, including hypertension, dyslipidemia, diabetes, obesity, dietary factors, smoking, and physical activity ( Figure 1B). Thus, lifestyle is also an important risk factor, where intellectual, physical, and social activities, as well as diet, may help to prevent AD [1,22,[100][101][102][103]. Metabolic Disorders and Dyslipidemia Although the brain represents 2% of the total body weight, 20% of body oxygen consumption and 25% of the glucose consumption can be attributed to this organ [101,104]. The brain is the most lipid-rich organ next to the adipose tissue. Indeed, lipids are part of gray matter, white matter, and nerve nuclei and are needed for neuronal growth and synaptogenesis. Lipids in the brain are composed of 50% phospholipids, 40% glycolipids, 10% cholesterol, cholesterol esters, and trace amounts of triglycerides [105]. Long-chain polyunsaturated fatty acids (LC-PUFAs) represent 25-30% of the total fatty acids (FAs) in CNS, in particular, docosahexaenoic acid (DHA) and arachidonic acid (AA). Cholesterol and long-chain omega-3 FAs, and especially DHA, play major roles in brain function. Research shows that the imbalance of lipid homeostasis is associated with a high risk of AD [105,106]. The brain is the organ that is the richest in cholesterol, containing 25% of all cholesterol in the human body. The cholesterol used in the brain is synthesized within the CNS. Altered cerebral cholesterol homeostasis may promote neurite pathology, Tau hyperphosphorylation, and the amyloidogenic pathway [49]. Increased brain cholesterol levels and dyslipidemias overall have been linked to AD incidence [49,55,100,107]. It could be assumed that similar to the etiology of cardiovascular disease, diabetes, and obesity, these changes in lipid homeostasis can increase the risk of age-related neurodegeneration and AD with time. Dyslipidemias are also associated with obesity, which has been linked to insulin resistance/hyperinsulinemia in the development of AD [108,109]. 
Indeed, brain insulin resistance has been shown to be involved in cellular and molecular mechanisms of neurofibrillary tangles formation and amyloid plaques [110], leading to AD being referred to as type III diabetes. These studies clearly indicate that preventive strategies aimed toward maintaining optimal brain lipid status may be useful in maintaining neuronal functions and synaptic plasticity, thereby reducing AD risk. Lipids provide a further advantage in that dietary intervention represents a fairly straightforward manner to achieve proper lipid homeostasis [111][112][113]. Other Risk Factors Low levels of cognitive, social, and physical activity may be linked to a greater risk of developing neurodegenerative diseases [3,83]. An enriched environment appears to favor the establishment of a cognitive reserve that includes the level of education (level of study, profession), the quality of social interactions, the variety of leisure activities, and the practice of physical exercise. Nevertheless, these cognitive and physical factors are not alone in affecting the reserve capacity. Other factors are involved, including nutrition and other environmental parameters that can protect cardiovascular pathologies, thereby reducing AD risk [87]. Symptoms of depression, anxiety, stress, and chronic psychological distress have also been associated with an increased risk of MCI and AD [114,115]. Moreover, excessive consumption of tobacco and alcohol increases cognitive impairments [20]. A history of head trauma and hearing loss may favor the onset of AD [3,102,116]. Recently studies have shown that air pollution may be linked to an increased risk of neurodegenerative diseases [117]. A recent report highlighted 12 modifiable risk factors representing about 40% of dementias in the world: a low level of education, hypertension, hearing impairment, smoking, obesity, depression, sedentary lifestyle, diabetes and poor social contact, excessive alcohol consumption, history of head trauma and air pollution [117]. These last three factors have been updated recently [3]. Research Models Used for AD In order to both study the underlying mechanisms and to test and identify possible preventative strategies, AD models [118] range from in vitro cell culture models (twodimensional (2D) or three-dimensional (3D)) to animal models. Generally, 2D cell culture models lack cues provided by the extracellular matrix (ECM). These models are easy to maintain and cost-effective but do not allow the study of glia-neuron communication and crosstalk [119]. 3D models include neurospheroids [120], organ-on-a-chip devices [121], and engineered brain tissue [122], mimicking the brain's complexity; however, they are more difficult to engineer and maintain [123]. Animal models are required for studying both physiological and behavioral mechanisms of AD, but interspecies differences may result in unexpected results in clinical trials [123,124]. The various AD models presented here are summarized in Table 1. 2D In Vitro Models of Alzheimer's Disease Researchers have attempted to simulate or induce the clinically observed increased Aβ42/40 ratio to study AD in vitro. Among the 2D models of interest, the most common human cell types used include human embryonal stem cells (hESC), induced pluripotent stem cells (iPSC) from AD patients, or neurons from AD patients with relevant mutations [118]. For example, Koch et al. 
observed an increased Aβ42/40 ratio due to a decrease in Aβ40 hESC-derived neurons overexpressing PSEN1 [125], also found by Mertens et al. [126]. Increased Aβ42 was only found in neurons derived from patients with a specific APP K724N mutation. Neurons are not alone in playing an important role in the Aβ42/40 ratio, as shown by Liao et al., who demonstrated secretion of Aβ42 by hiPSC-derived astrocytes [127]. Oksanen et al. used PSEN1 ∆E9 mutated hiPSC-derived astrocytes to show an increased Aβ42/40 ratio, as well as increased reactive oxygen species (ROS) and increased cytokine release [128], suggesting an increase in the pro-inflammatory response. Jones et al. used the PSEN1 M146L mutation to show disturbed astrocyte marker expression [129]. PSEN1 is not only responsible for the increased Aβ42/40 ratio but is also involved in mitochondrial impairment. Martin-Maestro et al. used hiPSC-derived neurons with a PSEN1 A246E mutation to show the role of mitochondrial dysfunction in AD [130]. 2D in vitro models hESC-derived neurons overexpressing PSEN1 Increased Aβ42/40 ratio due to depletion of Aβ40 [125] APP K724N mutated neurons from AD patients Increased Aβ42/40 ratio due to depletion of Aβ40 and increased secretion of Aβ42 [126] hiPSC-derived astrocytes Increased Aβ42/40 ratio in astrocytes is an important regulator of AD [127] PSEN1 ∆E9 mutated hiPSC-derived astrocytes Increased Aβ42/40 ratio, ROS, increased cytokine release PSEN1 M146L mutated hiPSC-derived astrocytes Disturbed expression of astrocyte markers [129] hiPSCs-derived neurons with PSEN1 A246E mutation Defective mitochondria have a key role in AD [130] ReN immortalized stem cell line Mutations in APP gene show accumulation of Aβ and phosphorylated tau [131] PC12 immortalized cell line GLP-1 neuroprotection and findings of Aβ toxicity [132] 3D in vitro models PSEN1 A246E iPSC-derived neurons Aβ aggregation without synthetic Aβ exposure or mutation induction [133] iPSC-derived NPCs encapsulated in wet electrospun PLGA Enhanced expression of Aβ42 and p-Tau [134] NSCs encapsulated in starPEG-heparin-based hydrogels Increased Aβ42 causes loss of neuroplasticity. System could allow for identification of therapeutic targets [135] Induced NSCs in silk protein scaffold with HSV-1-induced AD Aβ plaque formation, neuroinflammation, decreased functionality [136] iPSCs-derived neuro-spheroids Aβ aggregation; platform for testing of AD drugs [137] 3D human neural progenitor cells Show the importance of reducing the Aβ42/40 ratio for amelioration of AD; accurate tau pathology [138] Acoustofluidic platform for assembly of neurospheroids and Aβ plaques High throughput screening platform to test drugs against Aβ plaques [139] 3D triculture of neurons, astrocytes, and microglial cells Aβ aggregation, accumulation of p-tau, cytokine secretion [140] In vivo models APP overexpressing mice Aβ plaque formation, learning, and cognitive deficits after 6 months [141] Aβ-GFP transgenic mice Aβ is only able to form oligomers, thereby representing AD. Mice showed loss of memory, spine alterations, and increased p-tau levels. [142] hTauP301L transgenic mice Increased levels of phosphorylated tau, increased tau aggregation, neuronal loss [143] T40PL-GFP transgenic mice, with the P301L 2N4R tau mutation Increased levels of tau aggregation and tau pathology after 3 months [144] ICV injection of Aβ oligomers Memory loss in an ERK1/2-mediated fashion [145] Interestingly, Perez et al. 
used human iPSC with a loss of PITRM1 function (Pitrilysin metallopeptidase 1), an enzyme involved in mitochondrial degradation associated with AD, as a 2D model and to form 3D organoids [146]. They showed that only the organoids were able to provide the increased Aβ42/40 ratio and higher p-Tau levels, suggesting the need for cell-cell and cell-matrix interactions to fully simulate AD in vitro. Lastly, primary murine neurons and cell lines are often used to model AD. The ReN immortalized neural stem cell line contains various APP mutations and can differentiate toward neurons or glia cells, rendering it an ideal cell line for AD modeling [131]. Additionally, PC12 is an immortalized clonal cell line showing GLP-1 neuroprotection and Aβ plaque formation [132]. However, 2D models lack the proper cellular environment and support cells to model AD properly; hence, 3D culture models provide a solution. 3D Models of Alzheimer's Disease 3D models of AD offer the complexity of the brain without the ethical constraints of animal models [147]. Organoids, or other forms of 3D cell culture, can simulate AD pathology, as they are able to secrete sufficient levels of Aβ42 to form Aβ plaques and form NFT, unlike 2D cultures [148]. Hernandez-Sapiens et al. have used iPSC-derived neurons with the PSEN1 A246E mutation, as seen before, to simulate AD in vitro [133]. They were able to generate Aβ oligomers, representative of AD in vivo, by culturing the cells on a Matrigel platform. Ranjan et al. formulated poly(lactic-co-glycolic acid) (PLGA) scaffolds using wet electrospinning to encapsulate iPSC-derived neural progenitor cells (NPCs) and mimic the brain structure (Figure 2A) [134]. They were able to acquire pathogenic levels of Aβ42 and p-Tau using AD patient-derived iPSCs. Papadimitriou et al. employed a starPEG heparin-based hydrogel, which could incorporate both neural stem cells and pathogenic levels of Aβ42 to simulate AD-like physiopathology [135]. They demonstrated that Aβ42 was responsible for the loss of neural plasticity, similar to that observed in AD ( Figure 2B). Therefore, this system could allow for the identification of therapeutic targets to reduce the loss of neural plasticity. Moreover, Cairns et al. induced AD in human-induced neural stem cells in a silk protein scaffold using the herpes simplex virus type I (HSV-1) [136]. They were able to mimic the plaque formation, neuroinflammation, and decreased functionality of the cells without the use of exogenous AD mediators. It should be noted that after plaque formation, the model can no longer be used to evaluate preventative strategies. Other researchers have assembled spheroids to obtain a 3D environment. For example, Lee et al. used iPSCs from various AD patients' blood to form Aβ oligomers in neuro-spheroids, which provide a platform for high throughput testing of AD drugs [137]. Human NPCs were used to form differentiated 3D neurospheroids with Aβ aggregation, modeling AD [149]. Kwak et al. continued this research and assembled 3D spheroids expressing different levels and ratios of Aβ42/40 ( Figure 2C) [138]. They were able to show p-Tau pathology and the importance of lowering the Aβ42/40 ratio to reduce AD-related neurodegenerative processes. The use of microfluidics to create neurospheroids or brain-on-a-chip devices has been extensively used by researchers as well. For example, Cai et al. used an acoustofluidic platform to aid in the high-throughput formation of homogeneous neurospheroids [139]. 
Acoustic soundwaves allow for the rapid formation of spheroids, and the addition of Aβ aggregates into the spheroids resembles AD pathology (Figure 2D). Moreover, Park et al. used a microfluidic system for a 3D triculture of astrocytes, neurons, and microglia [140]. The model showed Aβ aggregation, p-tau accumulation, and secretion of inflammatory cytokines, and can be used to study the pathogenesis of AD.

In Vivo Animal Models of AD

The genetic mutations found in ADAD and risk factors associated with SAD have provided useful information to develop animal AD models, particularly in mice [150]. This allows not only targeted genetic modifications related to AD pathology but also the evaluation of associated cognitive deficits [151]. Many transgenic mouse models are based on APP mutations, which can also be used in 2D in vitro cell-based models. However, the results have not been entirely conclusive. Chen et al. demonstrated the relationship between a type of cognitive performance and β-amyloid plaque deposition. APP overexpression in mice showed increased Aβ plaque formation, as well as a spatial learning decline, but not in all forms of learning and memory [141]. More recently, Ochiishi et al. used Aβ tagged with Green Fluorescent Protein (GFP) in mice to observe the cascade of events leading from Aβ oligomeric formation to synaptic dysfunction in vivo [142]. These mice showed elevated levels of p-Tau, impaired recognition memory, and altered spine morphology. Additionally, injection mouse models are often used to model AD in vivo. Injection in tauP301L transgenic mice resulted in elevated p-tau levels close to the site of injection [143]. Similarly, T40PL-GFP transgenic mice, with the P301L 2N4R tau mutation, developed p-tau pathology after three months [144]. Moreover, intracerebroventricular (ICV) administration of Aβ oligomers into mice shows a failure to induce glucose intolerance, indicating Aβ oligomers target metabolic control [152]. Similarly, intra-hippocampal injections of Aβ oligomers showed loss of memory, correlated with ERK1/2 pathway activation, which is involved in memory function [145]. The interest in these injection models is that they are not genetic forms of AD and are more similar to the SAD model. By exposing these models to certain risk factors, we get closer to a model more representative of human pathology and epidemiological data.

Researchers have been able to simulate AD lesions, including amyloid plaque formation and Tau pathology, in various models, leading to the identification of therapeutic targets as well as drug testing. However, understanding the molecular mechanisms involved in the early onset of AD will help to develop strategies to reduce AD risk or even prevent the disease altogether. More research is required in order to develop an adequate model of the very early stages of AD, especially in models with sporadic forms, which represent the majority of AD cases.
Non-human primates (NHP) can be used as models of sporadic age-associated brain β-amyloid deposition and pathological changes in AD. Recently, Latimer et al. showed that vervets and other NHP are promising models for exploring early-stage disease mechanisms and biomarkers and testing new therapeutic strategies [153]. These monkeys have the propensity to develop diseases relevant to humans during aging without genetic handling [154]. However, there are several limitations; vervets show amyloid deposits but do not have neurofibrillary tangles or tauopathy; therefore, they do not present generalized neurodegeneration. Vervets are best considered a pattern of early amyloid pathology and corresponding behavioral and biomarker changes, making them important for the study of the early stages of AD [153]. Drug Treatments Description Current drug treatments for AD are symptomatic-based rather than curative to limit the progression of cognitive symptoms and behavioral and psychological symptoms of dementia (BPSD). Four drugs (donepezil, memantine, galantamine, rivastigmine) are approved on the market and belong to two families: anticholinesterase inhibitors and anti-glutaminergics. These treatments are delivered through the oral or transdermal route [155,156]. Anticholinesterase inhibitors are molecules designed to increase acetylcholine levels in the brain, which is a molecule that allows the transmission of information between certain neurons and plays a role in memory. These treatments are intended to correct the acetylcholine deficiency that is observed in the CNS of persons with AD. Antiglutaminergics are used to regulate glutamate levels through a noncompetitive antagonist effect of N-methyl-D-aspartate (NMDA) receptors. Glutamate is a neurotransmitter that has a role in the brain functions of learning and memorization. High levels of glutamate are likely to cause pathological effects causing the death of neurons. These drug treatments are used in order to delay the evolution of the disease, to stabilize or to improve, albeit temporarily, the cognitive functions, and to control the disorders of the behavior. Although not curative, these treatments nevertheless help to maintain independence and improve the quality of life for AD people and for their caregivers. However, these treatments, whose effectiveness is only partial and temporary at best, affect only the consequences of AD rather than the cause [1,17,157,158]. These drug therapies may be more beneficial in the early asymptomatic stage before the process of neurodegeneration occurs. Other reasons also contribute to the modest effectiveness of these treatments, including the difficulty in brain drug targeting due to restricted passage from the circulation to the CNS through the BBB [159]. Indeed, many drug trials fail because of permeability issues at the BBB in AD. Because of this, the increased dosage is necessary, which could also increase the possibility of secondary undesired effects [160,161]. The BBB represents a challenge for CNS drug delivery, and many strategies have been developed to address this challenge [162]. Drug efficacy may also be reduced due to age-related modifications in neuronal membranes and membrane receptors, which is not necessarily considered in pre-clinical studies. Indeed, a recent study showed changes in the microdomains of synaptosomes isolated from aged mice, which increased their response to amyloid stress and inhibited the neuroprotective effects of the ciliary neurotrophic factor [163]. 
Another limitation of treatments may be their administration during the late stages of AD. For example, studies of mice with genetic mutations of ADAD causing early and rapid accumulation of amyloid plaques have allowed testing of anti-amyloid immunization to remove amyloid plaques [164]. However, in numerous human clinical trials using this approach, there was a decrease in amyloid load but no significant clinical improvement or reduction in disease progression [165]. It is, therefore, possible that the treatments are administered at a time when AD is already in the advanced stages and are thus less effective. Timely intervention may be important and emphasizes the need for better diagnosis of the early stages of AD using additional biomarkers [158]. Indeed, rapid and accurate diagnosis should take into account target populations with risk factors, including family history (genetic factors including the ε4 allele) and isolated memory complaints. Drug development has targeted amyloid plaque formation, but other novel targets need to be explored [155]. The absence of effective curative treatments and the difficulty of accurately diagnosing early-stage AD clearly demonstrate the need for implementing preventive and neuroprotective strategies to slow down the neurodegenerative process (neuronal dysfunctions: axons, dendrites, synapses), thereby reducing AD risk [166].
Non-Pharmacological Therapies
Non-pharmacological therapies, in addition to drug treatments, represent an alternative for the treatment of neurodegenerative diseases [156]. Several studies and international trials have been completed or are in progress to investigate multidomain interventions in AD [166][167][168][169][170][171][172], an approach combining multiple activities. Indeed, research shows a positive correlation between increased physical activity, cognitive training, and improved nutrition on the one hand and a slowing of cognitive and functional decline and a reduced intensity of BPSD on the other. Nevertheless, these activities have been carried out over short periods, with little information available from long-term studies.
Prevention Strategies for Alzheimer's Disease
A review of the current literature highlights the interest in prevention and non-drug therapies, specifically for MCI or the preclinical stage. As discussed above, AD abnormalities such as Aβ-induced synaptic dysfunctions or endosomal pathway blockade occur early, progressing with time, even over decades, by which time the available treatments have only modest effects. Cognitive performance begins to decline around midlife, from about 45 years of age [173], and the decline increases with age due to structural and functional changes in the brain (for instance, in regional brain volumes and in the integrity of the white matter, a reduction in the fluidity of brain membranes, and changes in lipid composition) [163,[174][175][176]. Identification of modifiable risk factors is, therefore, essential in order to define preventive actions against this insidious disease. Here, because of the importance of lipids in brain structure and function and the relative ease with which lipid status can be optimized by dietary intervention, we focus on this as a modifiable risk factor to target, using diet as a preventive strategy for reducing age-related cognitive impairment and the risk of the preclinical stage of AD.
Dietary Intervention
Fatty Acids: Omega 3, 6
PUFAs have a key role in the production and storage of energy, the synthesis and fluidity of cell membranes, and enzymatic activities, among others [23].
Two specific PUFAs, linoleic acid (LA, C18:2n-6) and alpha-linolenic acid (ALA, C18:3n-3), are essential since the body is unable to synthesize these fatty acids, and the only way to obtain them is from dietary sources [177]. LA and ALA are precursors of arachidonic acid (AA, C20:4n-6), eicosapentaenoic acid (EPA, C20:5n-3) and DHA (C22:6n-3). PUFAs are required for brain development, integrity, and function. Omega-3 (n-3) and omega-6 (n-6) fatty acids are important components of biomembranes and have a key role in neuronal integrity, development, maintenance, and function, including synaptic processes, neuronal differentiation, and neuronal growth [178][179][180].
Docosahexaenoic Acid
The brain has a high level of the n-3 fatty acid DHA, mainly in photoreceptor and synaptic membranes. It is present particularly in the membrane phospholipids (PL) phosphatidylethanolamine (PE) and phosphatidylserine (PS), with smaller amounts also found in phosphatidylcholine (PC). DHA is known for its neuroprotective effects and its role in synaptic plasticity, and it has a key role in aging, memory, vision, and corneal nerve regeneration [181]. DHA accounts for more than 90% of n-3 PUFAs in the brain and 10% to 20% of total lipids and is found at particularly high levels in gray matter. The total volume of gray matter decreases with age, which is accompanied by a loss of DHA [182]. DHA can influence cellular and physiological processes, including membrane fluidity, the release of neurotransmitters, myelination, neuroinflammation, and neuronal differentiation and growth [183]. Long-chain n-3 FA supplementation in the early stages of AD has shown promising potential [111]. Studies show a relationship between a diet rich in fish-derived n-3 FA and cognitive performance during aging [184]. Fish oil is an excellent source of DHA and has been studied as a potential preventive food supplement for AD. However, it seems that only patients with mild cognitive impairment who do not carry the ε4 allele of the APOE gene had better cognitive outcomes after treatment with fish oil [185]. This would suggest the importance of n-3 FA supplementation before the onset of AD symptoms, especially in people at risk. In fact, DHA is able to attenuate molecular mechanisms that are deleterious to the CNS in the early stages of AD. Changes in the fluidity of neuronal membranes are involved in brain aging and may play an important role in AD [106]. DHA supplementation in the diet could prevent age-related neuronal membrane changes and the associated impairments. Furthermore, DHA supplementation can support reactivity to molecular therapeutic targets impaired in AD, such as the ciliary neurotrophic factor (CNTF), suggesting that DHA may be of more value in combination with other treatments, such as neuroprotective molecules, than alone [163]. Various in vitro and in vivo studies have described that DHA can have neuroprotective effects against neurotoxicity induced by Aβ [186]. DHA has been shown to lower Aβ production in various AD models [187] and to improve blood flow and decrease inflammation [188], further demonstrating its potential to reduce AD risk. Currently, several clinical trials are testing the neuroprotective effects of n-3 PUFA administration in patients with AD. Despite these numerous studies, the results in AD patients have not been consistent in terms of cognitive or neuronal improvement. In addition, the effects observed are generally short-term.
These discrepancies can perhaps be explained by numerous factors, including the administration of DHA only in late-stage AD, insufficient age-influenced bioavailability, or the formulation of DHA used. In AD, delivery of bioactive substances to the CNS is a very complex process, and the development of prevention and therapeutic strategies is challenging. Research is being carried out to test new strategies to improve the penetration of molecules into the CNS.
BBB and Soft Nanoparticles
The BBB's main role is to separate the blood circulating in the body, which might contain toxic foreign molecules, from the brain's extracellular fluid in the CNS [189]. The German bacteriologist Paul Ehrlich discovered the BBB in 1885 when he successfully stained all animal organs except the brain following injection of an aniline dye solution into the peripheral circulation. This discovery was later confirmed by his student Edwin Goldman in 1913 [190]. However, only once scanning electron microscopy (SEM) was invented in 1937 could the actual BBB membrane be observed. Anatomically, the BBB's structure includes endothelial cells, pericytes, astrocytes, neurons, and microglia (Figure 3) [191]. Due to the high electrical resistance and the tight junctions between endothelial cells, which are ensheathed by the pericytes and astrocytes, only water and other small molecules can penetrate the BBB without restriction by passive transcellular diffusion [189,192]. Conversely, molecules such as drugs, amino acids, and glucose that are polar, hydrophilic, or highly charged must cross the BBB through active transport routes that depend on specific proteins [193]. Therefore, with the rise in neurodegenerative diseases, the challenge of delivering and releasing drugs or bioactive compounds into the brain is attracting much attention. Soft nanoparticles, such as liposomes and exosomes, are nanovesicles that are able to deliver drugs and genes across the BBB (Figure 3) [33,37].
Liposomes
In the 1960s, Dr. Bangham discovered liposomes when he noticed that phospholipids, when surrounded by an aqueous medium, formed a closed bilayer [194,195].
Because phospholipids are amphiphilic, their hydrophobic acyl chains lead to the thermodynamically favorable formation of lipid spheres upon contact with water [42,196]. Noncovalent interactions, such as van der Waals forces and hydrogen bonding, further enhance this formation [197,198]. Liposomes generally consist of a lipid bilayer encircling an aqueous core [199,200]. Liposomes are able to encapsulate hydrophobic drugs in their bilayer and hydrophilic bioactive molecules in their core [201]. Because of the lower volume of hydration in the liposome core, hydrophilic molecules are entrapped with a lower efficiency than hydrophobic molecules [42]. Liposomes can be classified as conventional, PEGylated/stealth, or ligand-targeted based on the characteristics of their surface [202]. Liposomes have been researched for more than five decades, to the point where they are well-established drug delivery vectors, resulting in the marketing authorization of several clinically approved liposomal-based products [203]. Indeed, they offer better biocompatibility and safety than polymeric and metal-based nanoparticles due to their resemblance to biomembranes [204,205]. AD drugs fail to generate therapeutic effects in part because they cannot pass through the BBB to enter the CNS. Even though conventional liposomes cannot cross the BBB, modifying their surface can enable them to pass through and unload their cargo directly into the CNS [206]. Receptors for proteins, peptides, and antibodies found on the BBB's surface can be used to mediate the translocation of liposomes via receptor-mediated transcytosis. Transferrin (Tf)-functionalized liposomes have been used for BBB targeting, since the transferrin receptor (TfR), a transmembrane glycoprotein overexpressed on brain endothelial cells, is one of the most commonly targeted receptors. The problem of inhibition by endogenous Tf binding to the TfR is usually resolved by avoiding ligand competition, for instance by using specific antibodies against the TfR [207][208][209][210]. Likewise, the mammalian cationic iron-binding glycoprotein lactoferrin (Lf) is overexpressed on the BBB. Lf-modified liposomes have also been created to cross the BBB via receptor-mediated transcytosis [211,212]. At the same time, cationic liposomes can penetrate the BBB via adsorptive-mediated transcytosis, taking advantage of the BBB's negative charge and consequently triggering, through electrostatic interactions, the cell internalization processes [213][214][215]. However, binding to serum proteins and the nonspecific uptake of cationic liposomes by peripheral tissues are major drawbacks that require the administration of highly toxic doses of liposomes to achieve therapeutic efficacy [32]. Another strategy is to bind nutrients, such as glucose and glutathione, to the surface of liposomes to facilitate their translocation via carrier-mediated transcytosis [216]. Nutrients are normally transported to the brain from the blood by selective transporters overexpressed on the BBB's surface, such as amino acid, hexose, or monocarboxylate transporters. To this end, glucose-functionalized liposomes have been developed to improve their transcytosis through the BBB [217,218]. G-Technology® targets liposomes across the BBB by modifying them with glutathione, which targets the glutathione transporters highly expressed on the BBB's surface [209,219,220]. Moreover, developing liposomes with more than one targeting ligand has been successfully used as a new strategy to deliver therapeutics to the brain.
These bifunctional liposomal delivery carriers increase BBB targeting efficiency, most likely by overcoming the receptor or transporter saturation limitation of monofunctional liposomes [221]. Small molecules that present high affinity towards amyloid peptides were successfully loaded into bifunctional liposomes to create enhanced multifunctional carriers [208][209][210][222][223][224]. With regard to regulatory aspects, liposomes are one of the most popular nanocarrier systems available for the loading and delivery of drugs and genes, as can be seen from the increasing number of Investigational New Drug (IND) application submissions [32].
Exosomes
Many similarities exist between exosomes and liposomes, as both of them range in size from 40 to 120 nm and are composed of a lipid bilayer. Nevertheless, they have major differences as well, such as the exosomes' complex surface composition. The unique lipid composition of exosomes differentiates them from other nanovesicles and dictates their in vivo fate through its role in interactions with serum proteins. Tetraspanins and other membrane proteins increase the efficiency of the exosomes' targeting ability by facilitating their cellular uptake. Moreover, exosomes are more biocompatible, can evade phagocytosis, and have an extended blood half-life compared to liposomes, micelles, and other synthetic soft nanovesicles [225][226][227]. The smart targeting behavior of exosomes towards specific receptors is governed by the donor cells through the lipids and cellular adhesion molecules found on the exosome surface [228]. Furthermore, their high biocompatibility and low immunogenicity enhance their uptake profile, their stability in the systemic circulation, and their in vitro and in vivo therapeutic efficacy [229,230]. The exosomal content depends on the originating cell or organism, but in general, all exosomes contain non-coding RNAs, microRNAs, mRNAs, small-molecule metabolites, proteins, and lipids [231]. In addition, their surface contains valuable receptors responsible for the identification of exosomes and the transportation of encapsulated materials to recipient cells [232]. Exosomes can be isolated using filtration, polymer-based precipitation, chromatography, differential centrifugation, ultracentrifugation, and immunological separation. Research is actively ongoing with the goal of developing a gold-standard universal method that is efficient for isolating exosomes (with a high yield) and does not compromise their biological function [233]. Exosomes and their cargo have previously been shown to have a central role in normal CNS communication, immune responses, synaptic function, plasticity, nerve regeneration, and the propagation of neurodegenerative diseases [232]. The important role of exosomes in normal and diseased brain states indicates that they may also play a significant role in the pathogenesis of mental disorders [234]. One important finding in biomarker and drug delivery research was that the content of exosomes can remain active after they cross the BBB. The effective brain delivery of siRNA-loaded exosomes by systemic injection in mice has been successfully demonstrated by Alvarez-Erviti et al. [235]. To reduce immunogenicity and to target exosomes to the brain, they isolated exosomes from self-derived dendritic cells, targeted them with lysosome-associated membrane protein 2 (Lamp2b) fused to the neuron-specific rabies virus glycoprotein (RVG) peptide, and loaded them with siRNA via electroporation.
These engineered targeted exosomes caused a specific gene knockdown exclusively in the brain following the delivery of GAPDH siRNA. Other researchers used intranasal injections to successfully deliver exosomes to the brain of mice [236]. Recently, evidence of brain-body communication via exosomes was identified when Gómez-Molina et al. recovered exosomes expressing a fluorescently tagged protein that is only found in the brain from the blood of rats [237]. Even though the exact mechanism of exosome transport across the BBB is not fully characterized, these studies prove that exosomes in the circulation can access the brain and vice versa. The transport of exosomes across the BBB has been hypothesized to occur through [33]: (1) nonspecific/lipid-raft-mediated uptake; (2) macropinocytosis; (3) cell signaling induced by the adhesion and fusion of exosomes to the cell surface, which causes the release of the exosomal content; (4) a signaling cascade induced by the association of exosomes with a G-protein-coupled receptor found on the cell surface; or (5) receptor-mediated transcytosis. The use of exosomes as drug-delivery vesicles to the CNS has become a topic of interest since their capacity to penetrate the BBB was discovered, and they have already been used as drug-delivery nanocarriers in cancer and neurodegenerative diseases. Yang and coworkers reported that the delivery of anticancer drugs encapsulated in exosomes across the BBB significantly decreased the fluorescence intensity of xenotransplanted cancer cells and tumor growth markers [238]. A formulation of catalase loaded in exosomes accumulated in the brain of a Parkinson's disease mouse model and provided significant neuroprotection [239]. Liu et al. treated morphine addiction using exosomes expressing the RVG peptide loaded with opioid receptor mu (MOR) siRNA [240]. The siRNA-loaded exosomes strongly inhibited morphine relapse after delivering the MOR siRNA efficiently and specifically into the mouse brain, which significantly reduced MOR mRNA and protein levels. These studies showcase the promising role of nanoliposomes and exosomes as smart drug delivery systems able to cross the BBB and target brain tissues. The different drug administration techniques are presented below.
Oral Administration
Administration of therapeutic molecules by the oral route, including prevention strategies based on dietary intervention, is the simplest to apply and is used both in clinical trials with patients and in studies using animal models. It nevertheless has drawbacks due to the limited passage through the intestinal barrier and the BBB, as well as clearance by other organs, including the liver and kidney [241]. Nanocarriers, including nanoliposomes or exosomes, can be used to overcome these different obstacles and to enhance drug bioavailability and pharmacokinetic behavior [242,243], as discussed above.
Intravenous and Intracerebral Administration
Therapeutics can be injected directly into the circulation (intravenously) to overcome the intestinal barrier, but the BBB remains an obstacle. Other methods of administration that avoid the BBB consist of directly injecting the molecule, either intraventricularly or intrathecally, into the CSF or into the brain parenchyma. Intracerebral or intracerebroventricular injections using stereotaxy are a common procedure for in vivo animal models. In humans, this is invasive and costly, since it requires surgical intervention, and it can be painful (intrathecal injections) [241,244].
Because of this, these options are only considered in severe conditions, when the patient is most likely already hospitalized. Nevertheless, this approach is of interest for introducing slow-release implants or a colony of stem cells into the CNS for the timed release of therapeutic molecules [245].
Intranasal Administration
Intranasal (IN) administration provides an attractive and promising alternative for drug delivery to the CNS. It is non-invasive, painless, non-stressful, and relatively easy to perform without requiring a medical specialist [246][247][248]. The IN route bypasses the BBB, enhancing drug bioavailability by avoiding first-pass metabolism and intestinal degradation. Drugs can be administrated directly to the CNS through the IN route via the olfactory mucosa [249]. The small volumes that can be applied intranasally are a limiting factor and, as such, require concentrated solutions. Nevertheless, the nasal mucosa provides a large surface area and rich vasculature for very efficient drug absorption. Recent studies have analyzed the direct transport of proteins and peptides via the IN route [250]. Currently, no nanosystem has passed the clinical development stage. There are several in vivo studies, but conclusions are not yet clear due to differences in nasal anatomy between humans and animals. This approach is increasingly used in clinical studies for the treatment of neurological diseases [251] like AD [32,252], brain injuries [253], and autism [254]. Currently, more than a hundred clinical trials are ongoing and investigating intranasal drug delivery, especially in CNS-related diseases, with the greatest promise appearing to be the IN administration of insulin or oxytocin. Even if the intranasal mechanisms of drug transport to the brain are not yet completely understood, IN delivery represents a promising pathway of administration and should be investigated in future pre-clinical and clinical studies for the treatment of neurological diseases such as AD [255].
Novel Administration Strategies
Technological advances have allowed the emergence of novel strategies in recent years to potentiate the delivery of therapeutic substances.
Ultrasound and Electromagnetism
Several studies have investigated the effects of ultrasound on BBB permeability. Ultrasound can be useful to temporarily open the BBB without damaging healthy tissues. The tight junctions of the BBB can be opened reversibly by focused ultrasound-induced microbubble oscillation. Lin et al. used focused ultrasound to reversibly open the BBB to deliver cationic liposomes loaded with doxorubicin to optimize glioma targeting capabilities [256]. A study in people with brain tumors has shown that the targeted application of ultrasound can improve the porosity of the BBB and the transport of molecules to the targeted areas. Disruption of the lipid organization of membranes by ultrasound increases membrane permeability and allows greater penetration of the therapeutic molecules [257]. Other research shows that the application of a resonant magnetic field gradient (RMFG) allows the detection of nanometer movements associated with ultrasound. Magnetic resonance guidance can be a good solution to focus ultrasound in combination with intravenously injected microbubbles in order to transiently open the BBB and reduce Aβ and tau pathology in animal models of AD [258]. This method is presented as a safe, reversible, repeatable, and non-invasive method for gaining access across the BBB, and thus, it is a promising method for patients with AD [259][260][261].
Transdermal Delivery Systems via Microneedles
One of the emerging minimally invasive drug delivery tools is microneedles [262][263][264]. Microneedles are capable of delivering therapeutics and nanocarriers in an active or passive fashion. Such tools can be paired with smart wearable electronics to control the dosing of drugs and ensure patient compliance. However, substantial research is needed to ensure the effectiveness of therapeutics delivered subcutaneously. This technology is promising and can be adapted to a range of drug-delivery applications [265]. Transdermal administration of Alzheimer's drugs is an interesting and promising topic, which should be further elaborated on and studied [266,267].
Conclusions and Future Perspectives
AD is a progressive cortical neurodegenerative disease with an insidious onset of multiple cognitive disorders and a gradual evolution over time. Existing therapeutic strategies are aimed at slowing the progression of AD but are not curative approaches. Prevention strategies play an important role in the primary prevention of cognitive symptoms and are called for in this disease, where the initial lesions form at an early preclinical stage and progress insidiously for years. Drug delivery to the CNS is a very complex process, and it is challenging to ensure that the brain receives the full benefit of the drugs. In this review, new strategies to improve the access of therapeutic drugs to the CNS have been analyzed and discussed, including liposomes and exosomes, which have been shown to be effective drug delivery systems for the brain. Choosing the best route for delivering medications, liposomes, and exosomes to the brain is very important in order to achieve the best efficiency as well as the best targeting zone in the brain. Among the different methods used in the literature, the IN method shows a better absorption efficiency because of the rapid flow of drugs or nanoparticles to the brain. IN administration of liposomes is promising for the treatment of AD by virtue of its potential to facilitate molecule penetration across the BBB, better bioavailability, and efficacy through protection of the drug from peripheral degradation. Other applications that also need to be considered include the use of gene editing for AD treatment [267,268], as well as antibody therapy targeting Aβ and tau proteins [269]. These topics have been discussed in other reviews, as cited. Amyloid and aggregate clearance systems in non-human organisms such as yeast may also bring new insight into potential anti-Alzheimer therapeutics [270][271][272]. The combined use of strategies to reduce the amyloid burden and tau-protein aggregation with efficient delivery to brain tissues could be of potential benefit to AD patients.
Acknowledgments: K.E. acknowledges financial support from the Ministry of Higher Education, Research and Innovation. The authors acknowledge financial support from the "Impact Biomolecules" project of the "Lorraine Université d'Excellence" (in the context of the «Investissements d'avenir» program implemented by the French National Research Agency (ANR)).
Conflicts of Interest: The authors declare no conflict of interest.
2022-11-16T16:31:49.413Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "9dbe6b562aec1dc05309ecc0b7a89b81e6fd01bf", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/22/13954/pdf?version=1668244956", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f8e500aa64a363f6674e0511985905a0b1d20448", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
46867187
pes2o/s2orc
v3-fos-license
New Relations for Gauge-Theory and Gravity Amplitudes at Loop Level
In this letter, we extend the tree-level Kawai--Lewellen--Tye (KLT) and Bern--Carrasco--Johansson (BCJ) amplitude relations to loop integrands of gauge theory and gravity. By rearranging the propagators of gauge and gravity loop integrands, we propose the first manifestly gauge- and diffeomorphism-invariant formulation of their double-copy relations. The one-loop KLT formula expresses gravity integrands in terms of more basic gauge invariant building blocks for gauge-theory amplitudes, dubbed partial integrands. The latter obey a one-loop analogue of the BCJ relations, and both KLT and BCJ relations are universal to bosons and fermions in any number of spacetime dimensions and independent of the amount of supersymmetry. Also, one-loop integrands of Einstein--Yang--Mills (EYM) theory are related to partial integrands of pure gauge theories.
INTRODUCTION
A unified perspective on fundamental forces intertwining gravity with gauge interactions is suggested by string theory: Gravitons arise as the massless vibration modes of closed strings which are in turn formed by joining the endpoints of open strings with gauge bosons among their ground-state excitations. A prominent perturbative manifestation of gravity's resulting double-copy structure was revealed by Kawai, Lewellen and Tye (KLT) in 1985: The KLT formula [1] assembles the tree-level S-matrix of closed strings from squares of color-stripped open-string amplitudes. Accordingly, its point-particle limit relates tree amplitudes of Einstein gravity (and its supersymmetric extensions) to squares of gauge-theory partial amplitudes. The structure of the KLT formula turned out to apply universally to tree amplitudes in a variety of nongravitational theories, opening up a double-copy perspective on Born-Infeld theory and special Galileons [2], as well as, surprisingly, even the open string [3][4][5]. It is remarkable that the double-copy structure appears to extend to the quantum regime [6], as supported by impressive constructions of multiloop supergravity amplitudes from gauge-theory input such as [7,8]. The unprecedented efficiency of this method gives rise to hope that it allows one to pinpoint the onset of ultraviolet divergences in various supergravity theories, see e.g. [8]. So far, such double-copy constructions have been carried out at the level of cubic diagrams which have to be represented in a particular gauge of the spin-one constituents and thereby obscure the diffeomorphism invariance of gravity amplitudes. In this letter, we close this gap and give the first manifestly gauge- and diffeomorphism-invariant "KLT-like" double-copy formula for their loop integrands. It does not depend on any particular gauge in arranging the cubic-diagram representation of the gauge-theory integrands, and it takes a universal form for bosons and fermions, regardless of the number of spacetime dimensions and supersymmetries. The double-copy approach to perturbative gravity relies on a hidden symmetry of the gauge-theory S-matrix: the duality between color and kinematics due to Bern, Carrasco and Johansson (BCJ) [9]. Similar to the KLT formula, the manifestly gauge invariant tree-level incarnation of this duality, known as the BCJ relations among color-stripped amplitudes, was initially derived from string theory [10] and turns out to also apply to effective scalar theories including non-linear sigma models (NLSM) [5,11].
Generalizations of BCJ relations to one-loop integrands have already been given in a field-theory [12] and string-theory [13] context, and we will provide an alternative formulation which is tailored to play out with a KLT formula for one-loop gravity integrands. A convenient framework to complement the string-theory perspective on double copies as well as KLT and BCJ relations is the CHY formalism due to Cachazo, Yuan and one of the current authors [14]. The CHY prescription for loop amplitudes [15,16] manifests their relation with forward limits of tree-level building blocks [17] where one sums over the polarization and color degrees of freedom of two extra legs with back-to-back momenta [18]. Indeed, the implementation of forward limits in [16,17] led us to identify the main results of this letter, as will be detailed in [19]. By the same argument, we find similar results for Einstein-Yang-Mills (EYM) theory. Given the wide range of double-copy theories, our results should extend to the NLSM and its double copies and reveal universal structural insights into the quantum regime of perturbative field and string theory.
A. Definition of loop integrands: The key ingredient in our construction is a new representation of loop integrands in gauge and gravity theories, first considered in [16] and [20]. The new representation can be obtained by first rearranging any Feynman loop integrand via partial-fraction relations and then shifting the loop momentum, which will not change the integrated result in dimensional regularization. At one-loop level, this procedure converts an n-gon integral into a sum of n terms, where all the inverse "propagators" but one become linear in the loop momentum. For example, the n-gon scalar integral in figure 1 can be written in the partial-fractioned form (1), with inverse propagators $s_{ij\ldots} \equiv \tfrac{1}{2}(k_i+k_j+\ldots)^2$ and $s_{ij\ldots,\ell} \equiv \ell\cdot(k_i+k_j+\ldots)$. The procedure can be applied to any one-loop amplitude with local propagators, and the result can be identified as tree diagrams involving two off-shell legs with momenta $\pm\ell$ [17]. For non-supersymmetric theories, tree amplitudes diverge in the forward limit, but the divergences can be regulated using the prescription of [20], or that of [17] in the CHY representation. In this way, one can rewrite one-loop n-point gravity amplitudes $M_n$ in the new representation as in (3). We will refer to $m_n(\ell)$ as the integrand for one-loop gravity amplitudes: its n−1 propagators are linear in $\ell$ after stripping off the overall $1/\ell^2$, and it can be obtained from a KLT formula to be spelt out below.
B. Partial integrands: The backbone of our KLT formula for $m_n(\ell)$ is a novel refinement of color-stripped gauge-theory amplitudes $A(1,2,\ldots,n)$. Given the rearrangement of loop integrals in (1), it is natural to collect all terms with the loop momentum flowing from leg i to i+1 (cf. figure 1) in the more basic building block $a(1,2,\ldots,i,-,+,i{+}1,\ldots,n)$. Identifying $k_\pm \equiv \pm\ell$, this partial integrand can be viewed as a tree amplitude with cyclic ordering $(1,2,\ldots,i,-,+,i{+}1,\ldots,n)$ in the forward limit of two off-shell legs $-$ and $+$. Hence, partial integrands are individually gauge invariant, assuming that forward-limit divergences are suitably regulated.
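As an illustration of the partial-fraction rearrangement underlying (1), one may consider a generic product of quadratic propagators; the momenta $K_i$ below are generic labels introduced here for illustration and are not the conventions of (1):
\[
\frac{1}{D_1 D_2 \cdots D_n} \;=\; \sum_{i=1}^{n} \frac{1}{D_i}\prod_{j\neq i}\frac{1}{D_j - D_i}\,,
\qquad D_i \equiv (\ell + K_i)^2 .
\]
Each difference $D_j - D_i = 2\,\ell\cdot(K_j-K_i) + K_j^2 - K_i^2$ is linear in $\ell$, and shifting $\ell \to \ell - K_i$ in the i-th term turns $D_i$ into the overall $1/\ell^2$ while keeping the remaining factors linear in the shifted loop momentum, which is exactly the structure of the n terms described above.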
By definition, the sum of n such partial integrands gives the complete integrand of one-loop color-ordered amplitudes, as expressed in (4). The notion of partial integrands is naturally motivated by the one-loop formula in [16], but we emphasize that the definition here is general and independent of any particular representation. Moreover, the decomposition (4) of color-ordered amplitudes differs significantly from the Q-cut approach in eq. (6) of [20]. In the following we propose BCJ relations among these a's, and KLT relations that combine them into the gravity integrands $m_n$ in (3).
C. One-loop BCJ relations: Having defined partial integrands of the form $a(\pi(1,2,\ldots,n),-,+)$ with permutations $\pi \in S_n$, one defines $a(\alpha,-,\beta,+)$ for non-adjacent legs $-$ and $+$ (with multiparticle labels such as $\alpha = \{\alpha_1,\alpha_2,\ldots,\alpha_p\}$) via Kleiss-Kuijf relations of the underlying (n+2)-point trees [21]. We conjecture that they further satisfy universal BCJ relations like (n+2)-point trees [9], leaving at most (n−1)! independent partial integrands, even though additional relations may exist for special theories. These relations can be generated by fundamental BCJ relations which involve changing the position of one leg only. For instance, choosing + or 1 and defining $k_{12\ldots i} \equiv k_1+k_2+\cdots+k_i$, one obtains (5) and, by a similar use of momentum conservation, (6). The BCJ relations should hold universally in D spacetime dimensions, for external bosons and fermions in the adjoint representation of the gauge group and independently of the extent of supersymmetry in $a$, $\tilde a$.
EXAMPLES
In this section, we provide evidence for one-loop BCJ and KLT relations, using examples of (partial) integrands a and $m_n$ in gauge theories and supergravities with maximal or half-maximal supersymmetry. Supergravity integrands with 24 supercharges can be obtained from the double copy of $a^{1/2}$ and $\tilde a^{\rm max}$. We exemplify how the BCJ basis degenerates below (n−1)! elements in these supersymmetric cases and leave explicit checks for theories without supersymmetry to the future.
B. 3pt half-maximal: The infrared regularization prescription of [25] for SYM amplitudes in D ≤ 6 with 8 supercharges gives rise to three-point partial integrands where the bubble numerators $s_{ij}(e_i\cdot e_j)(k_i\cdot e_p)$ compensate for the divergent propagators $s_{ij}^{-1}$, and the triangle numerator vanishes upon $\ell^m \to k_j^m$. Kleiss-Kuijf relations [21] and momentum conservation then yield relations such that the KLT formula with one factor of $a^{1/2}(\rho(1,2),-,3,+)$ in each term identifies a vanishing supergravity integrand $m^{1/2}_3$. In the full single-trace integrand (4), the propagators $(\ell-k_{12\ldots j})^2$ quadratic in $\ell$ can be recovered from (16) by a property of the vector pentagon. The supergravity integrand $m^{\rm max}_5$ following from the KLT relation (7) is checked to reproduce the 5! pentagon diagrams and 10 · 4! box diagrams expected from the BCJ duality and double copy [6,26,28]. Note that the naively 24-dimensional BCJ basis of partial integrands is again degenerate by maximal supersymmetry; the gauge invariant organization of [26] leaves three linearly independent kinematic factors, two permutations of $A^{\rm tree}_{\rm YM}(1,2,3,4,5)$ and one $\ell$-dependent invariant.
D. 4pt half-maximal: In the four-point SYM integrands of [25], all the non-trivial tree-level diagrams involving leg 1 are absorbed into the gauge invariant quantities $C_{1|ijk}$, $C^m_{1|ij,k}$, $C^{mn}_{1|i,j,k}$ defined in the reference, cf. (20). This expression is valid in D ≤ 6 and arises solely from the hypermultiplet running in the loop.
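For orientation, the fundamental BCJ relation among tree-level color-ordered amplitudes [9], which the one-loop relations (5) and (6) generalize, can be quoted in its standard form (this is the known tree-level statement, not an equation of the present letter):
\[
\sum_{j=2}^{n-1}\Big(k_1\cdot\sum_{m=2}^{j}k_m\Big)\,A^{\rm tree}(2,\ldots,j,1,j+1,\ldots,n)=0 \,,
\]
i.e. a single leg (here leg 1) is moved through all positions of a fixed ordering with kinematic coefficients built from Mandelstam invariants; the one-loop relations above are stated to act in the same way on a single leg (+ or 1) of the partial integrands.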
Contributions from additional vector multiplets can be obtained by linear combinations of (20) and its maximally supersymmetric counterpart (10). BCJ relations and reflection properties can be checked through permutations of the labels 1|23,4; these leave several linearly independent partial integrands, and the role of the anomaly from $\ell^m\ell^n C^{mn}_{1|2,3,4}$ in the KLT formula for $m^{1/2}_4$ will be investigated in the future [19].
E. MHV maximal: BCJ numerators for one-loop MHV amplitudes in N = 4 SYM have been given at all multiplicities [30] in terms of $X_{i,j} \equiv \langle 1|\,k_i k_j\,|1\rangle$ [31].
IMPLICATIONS FOR EINSTEIN-YANG-MILLS
Given the one-loop KLT relations (7) between amplitudes of pure (super-)gravity and pure gauge theories, it is natural to investigate their minimal coupling within EYM theories. At tree level, EYM amplitudes were related to pure gauge-theory amplitudes [32,33], and we will give one-loop extensions of such relations in this section. Any propagator in the second line of (24) is rendered linear in $\ell$ via (1). With the convention that (24) only tracks the propagation of gauge multiplets in the loop, any $a^{\rm EYM}$ can be obtained from the forward limit of a tree-level EYM amplitude with single-trace ordering $(1,2,\ldots,i,-,+,i{+}1,\ldots,n)$. Extensions of (24) to incorporate graviton propagators are related to trees with additional $p_j$ and go beyond the scope of this work.
B. One-loop amplitude relations: The forward limit of the tree-level amplitude relations of [32] for $A^{\rm tree}_{\rm EYM}(1,2,\ldots,n;p)$ yields relations among partial integrands whose gauge invariance follows from the BCJ relation (6). The maximally supersymmetric four-point instance can be easily checked to descend from three box integrals. A similar identification of legs $(1,n)\to(+,-)$ can be performed to promote the results of [33] with additional graviton insertions to loop level, resulting for instance in the two-graviton example (28) in the appendix. Such amplitude relations take a universal form, irrespective of the supersymmetries preserved by $a$ or $a^{\rm EYM}$, and a CHY derivation will be given in [34]. It would be interesting to convert the multitrace results of [33] into one-loop contributions to EYM amplitudes with graviton propagators.
CONCLUSIONS AND FURTHER DIRECTIONS
In this letter, we have identified partial integrands as basic gauge invariant building blocks for one-loop gauge-theory amplitudes; they arise naturally from the new representation of one-loop amplitudes [16,20], such as the representation (1) of n-gon integrals, and can be derived using forward limits of tree amplitudes, or CHY representations [17]. These partial integrands inherit BCJ relations (5), (6) from tree level, and similar relations hold for partial integrands of one-loop EYM amplitudes (25). Most importantly, they are suitable for constructing one-loop gravity integrands through our main result (7), a one-loop generalization of the KLT formula [1] valid for the gravitational double copies of a wide range of gauge theories. Furthermore, the notion of partial integrands and their one-loop BCJ relations directly carry over to NLSM amplitudes. Parallel to the tree-level case [2,35], the one-loop KLT formula with two copies of the NLSM gives the one-loop integrand of special Galileons, and that with the NLSM and (super-)Yang-Mills theory yields integrands of Born-Infeld theory along with supersymmetric extensions to Dirac-Born-Infeld-Volkov-Akulov theories [36].
The one-loop KLT formula is manifestly gauge- and diffeomorphism invariant, and it holds regardless of any particular representation of gauge-theory partial integrands. Once a gauge-theory amplitude has been expressed in accordance with the duality between color and kinematics [9], one can view the formula (7) as reorganizing the cubic diagrams in the double-copy representations of [6] into gauge invariant building blocks. In the absence of duality-satisfying representations, however, (7) yields supergravity integrands which have previously been out of reach. In order to take maximal advantage of the KLT formula, it remains to systematically develop integration routines for rearranged loop integrals in $m_n$ (see [20] for an example of four-point one-loop integration relevant to both partial integrands and Q-cuts). It should be feasible to reinstate the standard propagators quadratic in the loop momenta through an algorithmic procedure. On the other hand, it is highly desirable to directly extract physical information, such as unitarity cuts or ultraviolet divergences of one-loop amplitudes, from the new representation of the integrands. Although the discussion has been adapted to the one-loop case, we expect that the notion of partial integrands and their KLT composition into permutation invariant gravity integrands extends to any loop order. For instance, we have identified all two-loop four-point partial integrands in maximal SYM, obtained from double forward limits of eight-point tree amplitudes [37]. These naturally lead to a new proposal for maximal supergravity integrands through the corresponding KLT formula, which is similar to that of eight-point trees. We leave it to future work to verify the proposal and to gather more evidence for higher loops and multiplicities.
2017-06-14T14:10:56.000Z
2016-12-01T00:00:00.000
{ "year": 2017, "sha1": "2ca813f374722912d09a879e7d4a95566d670e65", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1612.00417", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "2ca813f374722912d09a879e7d4a95566d670e65", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
234342453
pes2o/s2orc
v3-fos-license
Investigation on the Convergence of the Genetic Algorithm of an Aerodynamic Feeding System Due to the Enlargement of the Solution Space . To meet the demands for flexible assembly technology, an aerodynamic feeding system has been developed. The system autonomously finds the optimal configuration of four parameters - two angles of inclination, nozzle pressure and component speed - using a genetic algorithm, which has been presented in earlier work. To increase the flexibility of the feeding system, an actuator was implemented, that enables the variation of the nozzle position orthogonally to the moving direction of the components. This paper investigates the effects of the more flexible flow against the components on their behavior when passing the nozzle. Additionally, the nozzle position was implemented into the genetic algo-rithm as a fifth parameter. Therefore, the impact of the enlargement of the solu-tion space of the genetic algorithm due to the implementation of a fifth parameter is investigated in this paper as well. Introduction The buyer's market is changing, which places new demands on products. These demands include individual design, a high standard of quality and a minimum price. Added to this is the shortening of the product's lifespan [1]. Production must adapt to these demands while the industry is pursuing cost reduction in order to maximize profits. Secondary processes that do not make a direct contribution to assembly must therefore be kept lean, reliable and inexpensive. Apart from organizational and constructive measures, automation is one way to rationalize assembly processes [2]. The costs of an automated production line are largely generated by feeding and transport systems. The actual assembly process is responsible for about 20% of the costs [3]. Feeding plays an important role, as the objects are transported as bulk material for cost reasons. Bulk material is cheaper and easier to handle [4]. For the following process, however, the objects are required in a defined position. For this reason, a targeted orientation from the bulk material must take place so that the next process can be performed [5]. The feeding process can be divided into four subtasks [3]. • Separation: The objects are sorted from the bulk material. • Transport: The ordered objects must now be transported to the next process. • Orientation: After the ordering of the objects, each part has an arbitrary orientation. The orientation process aligns the objects into a defined orientation. • Positioning: The objects are now designed for the next process so that direct processing is possible. Often, a vibratory bowl feeder is used to perform these tasks. It has a simple design, can be used for a wide range of geometries and is robust in operation [2,6]. Objects that are not oriented correctly are returned to the process [7]. The configuration of the vibratory bowl feeder depends on the geometry of the objects and takes place experimentally, which is time intensive [8]. One reason for the high amount of time required is that it is not possible to make general statements about the behavior of objects in a vibratory bowl feeder [9]. Therefore, feeding technology has a high potential for optimization. To meet the demands for a highly flexible and simultaneously efficient feeding technology, an aerodynamic feeding system has been developed at the Leibniz University of Hanover [10][11][12][13]. The system uses a constant air jet to exert a force on the components passing the nozzle. 
Using a genetic algorithm, the system is designed to parameterize itself for an optimal output rate. The principle of aerodynamic orientation as well as the genetic algorithm will be elucidated in the following.
The Aerodynamic Feeding System
Basic Principle. The aerodynamic feeding system presented and used in this work operates with only one air jet, which every component passes. In other work, systems have been presented that use multiple nozzles or air cushions to orient and transport parts [14,15]. Figure 1 shows the process of aerodynamic orientation in the described feeding system. It becomes clear that the component behaves differently depending on the orientation it has when arriving at the nozzle. If the workpiece arrives in the wrong orientation, it is turned over by the air jet, as can be seen in Fig. 1a), whereas it keeps its orientation if it already arrives in the correct orientation (Fig. 1b)). The reason for the different behaviors of the component depending on the initial orientation lies in the shape and the mass distribution of the workpiece. The exemplary workpiece in Fig. 1 has a varying projected area against the airflow. Therefore, the wider part of the component experiences a higher drag force than the thinner part, which results in a moment that generates the rotation of the component. In the example, the angle of inclination α promotes clockwise rotation and hinders counterclockwise rotation, resulting in the same output orientation regardless of the input orientation. Apart from the angle of inclination α, the orientation process is primarily influenced by three additional parameters, seen in Fig. 1: The angle β influences, on the one hand, the force of gravity acting on the component and, on the other, the impact of the friction between the component and the guiding plane. The nozzle pressure p directly affects the magnitude of the drag force acting on the workpiece. If it is set too low, the component might not rotate at all, whereas a higher pressure can lead to multiple and unpredictable rotations. Lastly, the component speed v determines how fast a workpiece passes the air jet and therefore how long it is affected by the drag forces. The parameter can be controlled by adjusting the speed of a conveyor located ahead of the nozzle.
Fig. 1. Illustration of the aerodynamic orientation process [10]
After the orientation process, each component's orientation is determined using a line scan camera. By dividing the number of components in the right orientation by the number of all components measured, an orientation rate between 0 and 100% is calculated. In various experiments, it was shown that the nozzle pressure p has the highest impact on the orientation rate, followed by the interaction between p and v as well as p and β [16]. The identified main effects and interactions are shown in Fig. 2. Even though the effects of parameter changes on the orientation process are known, the parametrization of the feeding system for new components takes a lot of time and expertise with the equipment. To tackle this disadvantage, a genetic algorithm has been implemented in the system's control, which will be presented in the following section.
Fig. 2. Values of the main effects and interactions between parameters on the orientation rate [16]
Genetic Algorithm. Finding a set of parameters inducing a satisfactory orientation rate (e.g. >95%) constitutes a non-linear optimization problem.
Additionally, the interrelation between the input (the parameters) and the output (orientation rate) is not necessarily a continuous function. Therefore, a genetic algorithm (GA) is used as an optimizer [10,11,16]. The structure of the genetic algorithm is shown in Fig. 3. One generation contains 4 individuals whose fitness is evaluated by the orientation rate. The parameters of the GA were optimized in previous studies carried out by Busch [16]. The best individual is automatically taken over as a parent individual in the next generation, while the second parent individual is determined by roulette selection. Recombination is done via uniform crossover and the mutation rate is 55%. With the range and increments of the four "old" parameters, as shown in Table 1, a large solution space with up to 14,214,771 possible configurations is spanned. Nevertheless, the genetic algorithm has proven to be very effective and time-saving in adjusting the feeding system to new workpieces [16]. Taking into account the fifth parameter, the solution space would grow to up to 440,657,901 possible configurations. This shows why it is important to investigate the effect of a fifth parameter on the system and on the convergence of the algorithm.
Implementation of the Nozzle Position as Fifth Parameter
Previously, a fixed nozzle position had to be manually selected for each component, which could be set via a manual positioning table. In the case of rotationally symmetrical components, positioning the center of the nozzle half the diameter of the component away from the guiding plane seems reasonable. This way, the workpiece should receive the maximum amount of drag force, which would minimize the required pressure. In practice, experiments show that, depending on the dimensions and geometry of the part, a centered air jet can cause an inflow paradox, where the component is aspirated and in consequence slowed down by the air jet. The reason for this lies in Bernoulli's principle, which states that increasing the speed of a flowing fluid is accompanied by a decrease of the pressure [17]. This effect can occur in the gap between the nozzle and the component passing it. Preliminary experiments show that this effect can be significantly reduced by moving the nozzle orthogonally to the moving direction of the components. Another problem occurs when adapting the feeding system to more complex components that have irregular shapes. Manually adjusting the nozzle position can easily become an optimization problem of its own. In order to expand the spectrum of components the feeding system can handle and to reduce the effects of the inflow paradox, a linear actuator that can vary the position of the nozzle orthogonally to the moving direction of the components was implemented in the feeding system. This parameter, called z, is shown in Fig. 4. The magnitude of z (Table 1) is defined as the distance between the center of the nozzle and the guiding plane. To automatically control parameter z, a motorized linear positioning table with a preloaded spindle drive was chosen. With this hardware, a positioning accuracy of 0.01 mm can be reached. The stroke is 75 mm. The high precision and stroke are chosen to ensure that the actuator can continue to be used even in future modifications of the feeding system. The position of the nozzle is controlled using an analog output with an output range of 0-10 V DC and a resolution of 16 bit. The trim range of the linear actuator can be specified via setup software.
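To make the search loop described above concrete, the following is a minimal Python sketch of one GA generation with the five-gene chromosome (α, β, p, v, z). It is an illustration only: the parameter bounds and step sizes are placeholders standing in for Table 1 (only the p and z ranges used later in this paper are known), the fitness function is a random stand-in for the measured orientation rate, and the 55% mutation rate is interpreted here as a per-gene probability, which the paper does not specify. Incidentally, the two quoted solution-space sizes differ by a factor of exactly 31 (14,214,771 × 31 = 440,657,901), which would correspond to 31 admissible nozzle positions if the solution space is a plain product of the per-parameter grids.

import random

# Placeholder parameter grids (lower bound, upper bound, increment); the real
# values are listed in Table 1 of the paper and are NOT reproduced here.
PARAM_GRID = {
    "alpha": (10.0, 60.0, 1.0),   # first angle of inclination [deg], assumed
    "beta":  (0.0, 10.0, 0.5),    # second angle of inclination [deg], assumed
    "p":     (0.10, 0.30, 0.01),  # nozzle pressure [bar], range used in this paper
    "v":     (50.0, 200.0, 5.0),  # component speed, assumed units and range
    "z":     (1.0, 9.0, 0.5),     # nozzle position [mm], range used in this paper
}

MUTATION_RATE = 0.55    # stated mutation rate, interpreted per gene (assumption)
POPULATION_SIZE = 4     # individuals per generation, as stated in the paper


def random_gene(name):
    lo, hi, step = PARAM_GRID[name]
    return lo + step * random.randint(0, int(round((hi - lo) / step)))


def random_individual():
    return {name: random_gene(name) for name in PARAM_GRID}


def orientation_rate(individual):
    """Stub fitness: in the real system the feeder is configured with the
    individual's parameters, ~100 parts are fed, and the fraction arriving in
    the correct orientation (measured by the line scan camera) is returned.
    A random value stands in here so the sketch runs without the hardware."""
    return random.random()


def roulette_select(population, fitnesses):
    total = sum(fitnesses)
    pick = random.uniform(0.0, total) if total > 0 else 0.0
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]


def uniform_crossover(parent_a, parent_b):
    # Uniform crossover: each gene is taken from either parent with equal chance.
    return {name: random.choice((parent_a[name], parent_b[name]))
            for name in PARAM_GRID}


def mutate(individual):
    # Each gene is replaced by a fresh random value with probability MUTATION_RATE.
    return {name: (random_gene(name) if random.random() < MUTATION_RATE else value)
            for name, value in individual.items()}


def next_generation(population):
    fitnesses = [orientation_rate(ind) for ind in population]
    # The best individual always serves as the first parent; the second parent
    # is drawn by roulette selection, as described above.
    best = max(zip(population, fitnesses), key=lambda pair: pair[1])[0]
    children = []
    while len(children) < POPULATION_SIZE:
        partner = roulette_select(population, fitnesses)
        children.append(mutate(uniform_crossover(best, partner)))
    return children


if __name__ == "__main__":
    population = [random_individual() for _ in range(POPULATION_SIZE)]
    for _ in range(25):                 # arbitrary generation cap for the sketch
        population = next_generation(population)

A full run on the real system would simply iterate next_generation, with orientation_rate replaced by a measurement, until one individual reaches the target orientation rate (e.g. 95%).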
The position of the nozzle can be set either manually by the user or autonomously by the genetic algorithm. The implementation of the nozzle position into the genetic algorithm was achieved by expanding the chromosomes from four to five alleles. The processes of selection, recombination and mutation also had to be adapted to the extended chromosomes, while the principles (e.g. one-point, two-point and uniform crossover) remained unchanged.
Effect of the Nozzle Position on the Orientation Process
To assess the effect of the variation of the nozzle position on the orientation process, the behavior of the workpieces at a varying inflow is evaluated. The entire orientation process of one workpiece, from the first contact with the air jet to the impact on the chute, takes about 0.2 s. To allow for the analysis of the orientation process, it is filmed with a frame rate of 240 fps. This way, the behavior of the workpieces can be reviewed properly. In the following, two exemplary components are examined for their behavior under different inflow conditions.
Pneumatic Plug. As the first exemplary part, a plug for pneumatic pipes is used. The part can be seen in Fig. 4. The workpiece is well suited for first experiments, as it has a simple geometry due to the rotational symmetry. In addition, due to the strongly varying projected area, it is a component that is generally very well suited for aerodynamic orientation. The dimensions of the component are shown in Fig. 5. Since the nozzle pressure has the strongest influence on the orientation process, only the parameters p and z are varied to reduce the testing effort. The step size of the pressure p is chosen relatively high, at 0.05 bar, to reduce the testing effort. Usually, the system controls p with a resolution of 0.01 bar, because the workpieces have a low weight and the orientation process is sensitive to pressure changes. The resulting experimental plan is shown in Table 2. For each measurement, five workpieces were delivered to the nozzle in the wrong orientation and five workpieces were delivered in the right orientation. The orientation process of each workpiece is then evaluated to determine the orientation rate presented in Table 2. Entries with a dash indicate that no orientation process takes place, which means that neither the workpieces arriving at the nozzle in the right orientation nor those arriving in the wrong orientation are rotated by the air stream. A value of 0.9 means, for example, that 9 of 10 workpieces leave the orientation process in the right orientation.
In addition to the evaluation of the orientation process via the orientation rate, a qualitative evaluation of the process is also carried out in the following by considering the trajectory of the components. Figure 6 shows the trajectories of four components during the orientation process. They differ in the set of system parameters and the incoming orientation, as described in the subframes. It becomes clear that the position of the nozzle has a decisive influence on the trajectory of the workpieces. The comparison of Fig. 6a) and b) shows that a very stable reorientation of the component can be achieved even with a non-centered nozzle position. The fact that the component in Fig. 6a) does not lift off the chute is to be seen as a major advantage. When the component hits the chute out of flight, as seen in Fig. 6b), the impact impulse can lead to uncontrolled jumping of the component on the chute, thus preventing optimal exploitation of the orientation process. Particularly noteworthy is the stable behavior of those components that already arrive at the nozzle in the correct orientation. It was observed in all tests, for which Fig. 6c) and d) are exemplary, that the components exhibit a much more predictable and reproducible behavior when the nozzle position is not centered. With a centered nozzle position, a small pressure range must be found in which the incorrectly arriving components are still reoriented but the correctly arriving components are not yet reoriented. With a non-centered nozzle position, on the other hand, the varying projected area of the component against the inflow can be utilized much better. Therefore, a wider range of nozzle pressures can be harnessed, which has a positive effect on the convergence of the genetic algorithm.

Printed Sleeve. In addition to the pneumatic plugs, the effect of a flexible inflow was also investigated on plastic sleeves. The sleeves are rotationally symmetrical parts as well. However, in contrast to the pneumatic plugs, the sleeves have a completely homogeneous projected inflow surface. Because of these characteristics and the larger diameter, it was expected that the inflow paradox caused by Bernoulli's principle would have an impact on the orientation process. This assumption was confirmed during the evaluation of the tests. The dimensions of the sleeves are shown in Fig. 7. The sleeves were manufactured using a 3D printer and the eccentricity is 10%. The trajectories of the components during the orientation processes with different parameter settings are shown in Fig. 8. To better illustrate the orientation of the cylindrical sleeves, the end with the center of mass has been digitally marked with a + symbol. Considering Fig. 8a), it becomes clear that a nozzle pressure of 0.2 bar is enough to reorient the plastic sleeve with z = 2 mm. Nevertheless, with z = 6 mm (centered) no reorientation takes place (Fig. 8b). The different amounts of lift acting on the components also become clear by comparing Fig. 8c) and d): when the sleeve arrives at the nozzle positioned 2 mm from the guiding plane, it is slightly lifted but does not rotate more than a few degrees. The component arriving with the nozzle centered (z = 6 mm) passes about half of its length over the nozzle without getting any lift. This circumstance is attributed to the Bernoulli effect. When the sleeve passes over the nozzle, it creates a gap between itself and the nozzle carrier. Therefore, the flow path of the air jet is narrowed, which results in a higher velocity of the fluid.
This, according to Bernoulli's principle, leads to a decrease of pressure between the sleeve and the carrier and results in the part being dragged down. This is contrasted by the behavior of the component when the nozzle position is not centered. On the one hand, this increases the distance between the nozzle and the workpiece. On the other hand, the air jet does not hit the workpiece inside the narrow gap between workpiece and nozzle carrier, which prevents the acceleration of the air flow and therefore the decrease of pressure. Analysis of all trajectories of the experiments with the pneumatic plugs and the plastic sleeves shows the advantages of the variable nozzle position even with geometrically simple components. Essentially, four findings can be derived:

1. Even at higher pressures, the trajectory of the workpiece is lower when the nozzle position is not centered. This is an advantage, because the impulse at the impact on the slide is lower. This in turn leads to less jumping of the components on the slide and thus, finally, to a more stable and reliable feeding process.

2. Components that already arrive at the nozzle in the right orientation are easily reoriented (i.e. flipped unintentionally) when the nozzle position is aligned to their centerline. When the nozzle position is not centered, components arriving in the right orientation have a much lower risk of being inadvertently reoriented. The reason is that the varying projected area of the component can be exploited much better when the core of the air jet is not aligned with the centerline of the component. This way, much more momentum is generated during the passing of the thicker part than during the passing of the thinner part.

3. With the nozzle position at extreme values (z = 0 mm or z = 10 mm), very little lift is generated. Therefore, it is concluded that the nozzle bore must be positioned within the range of the dimensions of the fed component.

4. The unwanted effect of Bernoulli's principle can be significantly reduced by varying the nozzle position. Reducing this effect leads to a more stable orientation process that can be achieved with lower nozzle pressures.

Convergence of the Genetic Algorithm

In order to investigate and evaluate the impact of the fifth parameter on the convergence and setting time of the genetic algorithm, additional trials needed to be carried out. To do so, the genetic algorithm was run five times with and five times without a variable nozzle position. The tests were carried out alternately to compensate for environmental influences such as changes in ambient pressure, or for non-measurable variables such as contamination of the slide by dust or abrasion of the components. To determine the orientation rate of one individual, the orientation of 100 components is measured. With a feeding rate of about 200 parts per minute for the experimental feeding system (limited by the centrifugal feeder), two individuals can be tested per minute. As exemplary component, the pneumatic plug from the previous testing was chosen. The range of the parameters α, β and v was chosen according to Table 1. Based on the preliminary tests in Sect. 4, the minimum and maximum values of p were set to 0.1 and 0.3 bar, respectively. Also, the range of the nozzle position was set from 1 to 9 mm in accordance with the aforementioned preliminary testing. Figure 9 shows the distribution of the number of individuals needed by the GA to reach an orientation rate of 95% or higher.
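Because 100 parts are measured per individual at roughly 200 parts per minute, the setting time follows directly from the number of individuals the GA evaluates. The small sketch below makes this conversion explicit; the baseline individual count and the 64% reduction applied to it are purely illustrative placeholders, chosen only to mirror the average reduction discussed in the results that follow.

```python
# Convert the number of GA individuals evaluated until convergence into
# setting time, using the throughput quoted in the text: 100 parts per
# individual at ~200 parts/min, i.e. ~2 individuals per minute.
PARTS_PER_INDIVIDUAL = 100
FEED_RATE_PARTS_PER_MIN = 200

def setting_time_minutes(n_individuals: int) -> float:
    return n_individuals * PARTS_PER_INDIVIDUAL / FEED_RATE_PARTS_PER_MIN

if __name__ == "__main__":
    # Illustrative numbers only (not the measured results of the study):
    baseline = 50                            # individuals with a fixed nozzle position
    reduced = round(baseline * (1 - 0.64))   # ~64 % fewer individuals with variable z
    for label, n in (("fixed z", baseline), ("variable z", reduced)):
        print(f"{label}: {n} individuals -> {setting_time_minutes(n):.1f} min")
```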
From Fig. 9 it becomes clear that, with a variable nozzle position, the genetic algorithm needs far fewer individuals to find a satisfactory solution. The longest setting time with the variable nozzle position is about as long as the shortest setting time with a fixed nozzle position. Additionally, the deviation of the maximum and minimum setting time from the average setting time is much smaller with a variable nozzle position. The advantages of the variable nozzle position as fifth parameter also become clear when looking at Table 3. The average number of individuals, which corresponds directly to the setting time, is reduced by 64% with a variable nozzle position compared to a fixed nozzle position. Also, the maximum number of individuals over five runs with a variable nozzle position corresponds to approximately one third of the maximum number of individuals over five runs with a fixed nozzle position. This is a huge advantage considering that the setting time is directly dependent on the number of individuals and that during the setting process the system is not productive. All in all, the experiments clearly show that adding the fifth setting parameter does not impair the convergence of the GA and therefore does not increase the setting time of the feeding system. On the contrary, the average setting time is significantly reduced. Figure 10 shows the distribution of the system parameters at the end of each test run, when convergence (orientation rate ≥ 95%) was reached. In each plot, the left box shows the distribution for a fixed nozzle position, whereas the right box shows the distribution for a variable nozzle position. While α and β show no significant differences, the nozzle pressure p (Fig. 10c)) is generally higher with a variable nozzle position. At the same time, the range of p is also wider with the variable nozzle position. Considering that, for a system configuration with only four parameters, the nozzle pressure p has the highest effect on the orientation rate (cf. Fig. 2), it is assumed that the wider acceptable range of the pressure p contributes significantly to the shorter setting time. Figure 10e) shows that all values for z are between 1 mm and 3 mm with a median of 2 mm. This shows that the fixed nozzle position of 4 mm was not the optimal position and that, using the fifth parameter, the system is now able to determine the optimal nozzle position autonomously, which in turn reduces the setting time. The higher median and range of the orientation rate at convergence (Fig. 10f)) are an indication of the higher process stability that can be achieved with a non-centered nozzle position.

Conclusion and Outlook

In this work, the extension of an aerodynamic feeding system was presented. In order to increase the flexibility of the system, the position of the nozzle perpendicular to the direction of movement of the components was introduced as a fifth adjustment parameter, in addition to two angles, the nozzle pressure, and the feeding speed. As a result of the new parameter, the number of possible configurations of the system increased significantly. In order to investigate the effects of the nozzle position on the autonomous adjustment algorithm (GA) of the aerodynamic feeding system, the behavior of the components in the orientation process was examined in detail. It was found that, even with simple components, a flexible inflow can lead to an increased resilience against variations of nozzle and ambient pressure.
Since the pressure has been identified as the main factor determining the orientation rate, this higher resilience induces an elevated process reliability. In addition, the disturbing influence of the Bernoulli effect could be reduced by means of a displaced inflow. Subsequently, it was investigated how the setting time of the aerodynamic feeding system changes due to the enlarged solution space of the genetic algorithm. It was found that the adjustment time with a variable nozzle position can be reduced by more than 60% on average compared to a fixed nozzle position, despite the larger solution space. The reasons for this are the wider range of nozzle pressures that generate a high orientation rate and the higher process stability mentioned above. Further experiments on the convergence of the GA are to be carried out in future work. The component spectrum and complexity will be varied, which is expected to show further advantages of the variable nozzle position. In addition, the analysis of the parameter sets at convergence (Fig. 10) shows that the effects of the parameters on the orientation rate have shifted. For example, the system's sensitivity to pressure changes seems to be lower, while the nozzle position seems to have a high impact on the orientation process. It is therefore necessary to determine the effects of the system parameters on the orientation rate again, using Design of Experiments methods.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
On the upper bounds for the constants of the Hardy-Littlewood inequality

The best known upper estimates for the constants of the Hardy--Littlewood inequality for $m$-linear forms on $\ell_{p}$ spaces are of the form $\left(\sqrt{2}\right)^{m-1}.$ We present better estimates which depend on $p$ and $m$. An interesting consequence is that if $p\geq m^{2}$ then the constants have a subpolynomial growth as $m$ tends to infinity.

Introduction

Let $\mathbb{K}$ be $\mathbb{R}$ or $\mathbb{C}$. Given an integer $m \geq 2$, the Hardy--Littlewood inequality (see [1,8,12]) asserts that for $2m \leq p \leq \infty$ there exists a constant $C_{m,p}^{\mathbb{K}} \geq 1$ such that, for all continuous $m$-linear forms $T : \ell_{p}^{n} \times \cdots \times \ell_{p}^{n} \to \mathbb{K}$ and all positive integers $n$,
$$\left( \sum_{j_{1},\dots,j_{m}=1}^{n} \left|T(e_{j_{1}},\dots,e_{j_{m}})\right|^{\frac{2mp}{mp+p-2m}} \right)^{\frac{mp+p-2m}{2mp}} \leq C_{m,p}^{\mathbb{K}} \left\Vert T\right\Vert. \tag{1}$$
Using the generalized Kahane--Salem--Zygmund inequality (see [1]) one can easily verify that the exponents $\frac{2mp}{mp+p-2m}$ are optimal. The case $p = \infty$ recovers the classical Bohnenblust--Hille inequality (see [4]). More precisely, it asserts that there exists a constant $B_{\mathbb{K},m}^{\mathrm{mult}}$ such that for all continuous $m$-linear forms $T : \ell_{\infty}^{n} \times \cdots \times \ell_{\infty}^{n} \to \mathbb{K}$ and all positive integers $n$,
$$\left( \sum_{j_{1},\dots,j_{m}=1}^{n} \left|T(e_{j_{1}},\dots,e_{j_{m}})\right|^{\frac{2m}{m+1}} \right)^{\frac{m+1}{2m}} \leq B_{\mathbb{K},m}^{\mathrm{mult}} \left\Vert T\right\Vert.$$
From [3,11] we know that $B_{\mathbb{K},m}^{\mathrm{mult}}$ has a subpolynomial growth. On the other hand, the best known upper bounds for the constants in (1) are $\left(\sqrt{2}\right)^{m-1}$ (see [1,2,6]). In this paper we show that $\left(\sqrt{2}\right)^{m-1}$ can be improved to estimates that depend on $p$ and $m$, for real and for complex scalars. These estimates are considerably better than $\left(\sqrt{2}\right)^{m-1}$ because $B_{\mathbb{K},m}^{\mathrm{mult}}$ has a subpolynomial growth. Moreover, our estimates depend on $p$ and $m$ and catch more subtle information. For instance, if $p \geq m^{2}$ we conclude that $\left(C_{m,p}^{\mathbb{K}}\right)_{m=1}^{\infty}$ has a subpolynomial growth. Our main result is the following:

Theorem 1.1. Let $m \geq 2$ be a positive integer and $2m \leq p \leq \infty$. Then, for all continuous $m$-linear forms $T : \ell_{p}^{n} \times \cdots \times \ell_{p}^{n} \to \mathbb{K}$ and all positive integers $n$, the inequality (1) holds with constants $C_{m,p}^{\mathbb{K}}$ depending explicitly on $p$ and $m$ (better than $\left(\sqrt{2}\right)^{m-1}$, and with subpolynomial growth in $m$ whenever $p \geq m^{2}$).

(The authors are supported by CNPq Grant 313797/2013-7 - PVE - Linha 2.)

The proof

We recall that the Khinchin inequality (see [5]) asserts that for any $0 < q < \infty$, there are positive constants $A_{q}$, $B_{q}$ such that, regardless of the scalar sequence $(a_{j})_{j=1}^{\infty}$ in $\ell_{2}$, we have
$$A_{q}\left( \sum_{j=1}^{\infty} |a_{j}|^{2} \right)^{\frac{1}{2}} \leq \left( \int_{0}^{1} \Big| \sum_{j=1}^{\infty} a_{j} r_{j}(t) \Big|^{q} \, dt \right)^{\frac{1}{q}} \leq B_{q}\left( \sum_{j=1}^{\infty} |a_{j}|^{2} \right)^{\frac{1}{2}},$$
where $r_{j}$ are the Rademacher functions. More generally, from the above inequality together with the Minkowski inequality, the corresponding $m$-fold inequality holds for $I = [0,1]^{m}$ and all $(a_{j_{1}\dots j_{m}})_{j_{1},\dots,j_{m}=1}^{\infty}$ in $\ell_{2}$. The notation of the constant $A_{q}$ above will be used throughout this paper. Let $1 \leq s \leq 2$. From the generalized Bohnenblust--Hille inequality (see [1]) we know that there is a constant $C_{m} \geq 1$ such that the corresponding mixed-norm estimate holds for all $m$-linear forms $T$. Above, $\sum_{j_{i}=1}^{n}$ means the sum over all $j_{k}$ for all $k \neq i$. The multiple exponent $(\lambda_{0}, s, s, \dots, s)$ can be obtained by interpolating the multiple exponents $(1, 2, \dots, 2)$ and $\left(\frac{2m}{m+1}, \dots, \frac{2m}{m+1}\right)$, in the sense of [1]. It is important to note that $\lambda_{k-1} < \lambda_{k} \leq s$. Using Hölder's inequality twice, the sum $\sum \left|T(e_{j_{1}},\dots,e_{j_{m}})\right|^{s}$ is estimated; the case $i = k$ handles one of the factors, while the first factor in (7) follows from Hölder's inequality and (6). Replacing (8) and (9) in (7) we finally conclude the argument, and since $\lambda_{m} = s$ the proof is done.

Constants with subpolynomial growth

The optimal constants of the Khinchin inequality (these constants are due to U.
Haagerup [7]) are
$$A_{q} = \sqrt{2}\left( \frac{\Gamma\left(\frac{q+1}{2}\right)}{\sqrt{\pi}} \right)^{\frac{1}{q}}$$
for $q > q_{0} \cong 1.847$ and $A_{q} = 2^{\frac{1}{2}-\frac{1}{q}}$ for $q \leq q_{0}$, where $q_{0} \in (0,2)$ is the unique real number satisfying
$$\Gamma\left( \frac{q_{0}+1}{2} \right) = \frac{\sqrt{\pi}}{2}.$$
For complex scalars, if we use the Khinchin inequality for Steinhaus variables we have $A_{q} = \Gamma\left(\frac{q+2}{2}\right)^{\frac{1}{q}}$ for all $1 \leq q < 2$ (see [9]). The best known upper estimates for $B_{\mathbb{R},m}^{\mathrm{mult}}$ and $B_{\mathbb{C},m}^{\mathrm{mult}}$ (from [3]) are then combined with these results to obtain the claimed upper bounds for $C_{m,p}^{\mathbb{K}}$.
Studying Growth and Vigor as Quantitative Traits in Grapevine Populations

Vigor is considered as a propensity to assimilate, store, and/or use nonstructural carbohydrates for producing large canopies, and it is associated with high metabolism and fast growth. Growth involves cell expansion and cell division. Cell division depends on hormonal and metabolic processes. Cell expansion occurs because cell walls are extensible, meaning they deform under the action of tensile forces, generally caused by turgor. There is increasing interest in understanding the genetic basis of vigor and biomass production. It is well established that growth and vigor are quantitative traits and that their genetic architecture consists of a large number of genes with small individual effects. The search for groups of genes with small individual effects, which control a specific quantitative trait, is performed by QTL analysis and genetic mapping. Today, several linkage maps are available, such as "Syrah" × "Grenache," "Riesling" × "Cabernet Sauvignon," and "Ramsey" × Vitis riparia. This last progeny segregates for vigor and constitutes an interesting tool for our genetic studies on growth.

Introduction

In 1865, Mendelian studies gave birth to genetics as a science. The Mendelian model accurately explains inheritance for qualitative traits, with discontinuous distributions. But what happens with quantitative, continuous traits, like growth or vigor? These quantitative, polygenic, complex traits reveal the expression of many genes with small but additive effects. The regions of the chromosome where these genes are clustered are called quantitative trait loci (QTLs). The main traits of economic interest, like production, growth, and vigor, have quantitative distributions and respond to QTLs. In addition, as they are controlled by many genes, similar phenotypes may have different allelic variations, and plants with the same QTLs may have very different phenotypes in different environments. Additionally, the epistatic effect, caused by allelic combinations of different genes (meaning that the expression of a certain gene may affect the expression of another), adds variation to the final expression of the phenotype. Sax [1] was the first to describe the theory of QTL mapping. Later, Thoday [2] suggested that it was possible to apply the well-known concept of segregation of simple genes to linked QTL detection. The vital contribution of the molecular markers that have been developed through the years allowed improving the technique, permitting, in many cases, the identification of a certain gene or a few genes responsible for the quantitative phenotypic variation [3]. In a very elegant thesis, Donoso Contreras [4] adopts the "needle in the hay" analogy to picture the difficulty of finding, in a whole genome, one gene with a quantitative effect. QTL analysis allows dividing the hay into several "bunches of hay" and systematically looking for the "needle." QTL analysis links two types of information, phenotypic data (measurements) and genetic data (molecular markers), in an attempt to explain the genetic bases of variation in complex traits [5,6]. This analysis allows linking certain complex phenotypes to certain regions of the chromosomes. The original premise is to discover loci by co-segregation of the phenotypes with the markers. Two things are essential for QTL mapping. In the first place, two parents contrasting for a certain trait are crossed, and a segregating population must be obtained.
Later, genetic markers that distinguish the two parental lines are involved in the mapping. In this sense, molecular markers are preferred, as they will rarely affect the studied trait. The markers linked to a QTL that influences the character or trait of interest will segregate with the trait (at high frequency, i.e. with a low recombination rate), while the non-linked markers will segregate separately (high recombination). For highly heterozygous species like grapevines, obtaining pure homozygous lines is almost impossible, and the F1 progenies that do segregate are feasible to study. These progenies are called pseudo F1 progenies. There are three statistical methodologies for the detection of a QTL: single marker analysis, simple interval mapping (SIM), and composite interval mapping (CIM). In the first case, single marker analysis, the technique is based on ANOVA and simple linear regression. It is simple and easy to apply, not requiring a genetic map, as it analyzes the relation between each marker and the phenotype. On the other hand, SIM uses a genetic map to define the interval among adjacent pairs of linked markers [7]. Finally, CIM combines SIM for a single QTL in a given interval with multiple regression analysis of markers associated with other QTLs, including additional genetic markers or cofactors that control the genetic background. This is the most efficient and effective approach [8]. The results of QTL analysis are presented in terms of logarithm of the odds (LOD) scores or probabilities [9]. Strictly, a QTL is considered significant when its LOD score is higher than the LOD score calculated by permutation tests [10]. After localizing the QTL, the explained variability is calculated by means of the average values of the phenotypes of the genetic groups of the QTL, at the position of the map with the maximum LOD score [3].

Vigor as a quantitative trait

Vigor is considered the genotype's propensity to assimilate, store, and/or use nonstructural carbohydrates for producing large canopies, and it is associated with intense metabolism and fast shoot growth [11,12]. Carbon assimilation (A) turns out to be the vital mechanism that makes growth possible. For A to occur, CO2 must diffuse into the leaf mesophyll through opened stomata. The trade-off of C assimilation is loss of water from the leaf to the atmosphere. This inevitable water loss through opened stomata (plus the negligible diffusion through the cuticle) constitutes transpiration (E). This means that A and stomatal conductance (g_s) are tightly correlated [13] and that stomata are directly responsible for optimizing E vs. A [14]. Growth involves cell expansion and cell division [15]. Cell expansion takes place when cell walls deform under the action of tensile forces, generally caused by turgor [16]. The plant water uptake capacity is influenced by the hydraulic conductance (k_H) of the roots, which in turn confers different hydration and turgor to the canopy [17,18], resulting in different growth levels by cellular extension [19]. Keller [20] found that k_H adapts to support canopy growth and carbon partitioning but may limit shoot vigor in grapevines. These differences in k_H that account for variation in growth among genotypes have a genetic correlate. Marguerit et al. [21] detected quantitative trait loci (QTL) for E, soil water extraction capacity, and water use efficiency (WUE) when studying the water stress response of a Vitis vinifera cv. Cabernet Sauvignon × Vitis riparia cv. Gloire de Montpellier progeny.
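To make the single-marker approach and the permutation-based significance threshold described above concrete, the following is a minimal sketch. The marker genotypes, phenotypes, effect sizes, and number of permutations are synthetic placeholders; real analyses would typically rely on dedicated packages and on SIM/CIM rather than this bare regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def lod_single_marker(genotype, phenotype):
    """LOD score of a single-marker linear regression (phenotype ~ genotype)."""
    n = len(phenotype)
    rss0 = np.sum((phenotype - phenotype.mean()) ** 2)        # null model
    X = np.column_stack([np.ones(n), genotype])
    beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
    rss1 = np.sum((phenotype - X @ beta) ** 2)                # marker model
    return (n / 2.0) * np.log10(rss0 / rss1)

def permutation_threshold(genotypes, phenotype, n_perm=200, alpha=0.05):
    """Genome-wide LOD threshold: (1-alpha) quantile of the maximum LOD
    obtained when the phenotype is shuffled relative to the genotypes."""
    max_lods = []
    for _ in range(n_perm):
        shuffled = rng.permutation(phenotype)
        max_lods.append(max(lod_single_marker(g, shuffled) for g in genotypes))
    return float(np.quantile(max_lods, 1.0 - alpha))

if __name__ == "__main__":
    # Synthetic pseudo-F1 progeny: 138 genotypes scored at 5 markers (0/1 alleles).
    n_ind, n_mark = 138, 5
    genotypes = [rng.integers(0, 2, n_ind).astype(float) for _ in range(n_mark)]
    phenotype = 10 + 2.0 * genotypes[2] + rng.normal(0, 2, n_ind)  # QTL at marker 3
    threshold = permutation_threshold(genotypes, phenotype)
    for i, g in enumerate(genotypes, start=1):
        lod = lod_single_marker(g, phenotype)
        flag = "significant" if lod > threshold else "ns"
        print(f"marker {i}: LOD = {lod:5.2f} ({flag}); threshold = {threshold:.2f}")
```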
Marguerit et al. [21] observed that their QTLs co-localized with genes involved in hydraulic regulation and aquaporin activity that directly affect the plant k_H, as previously proposed [18]. There is increasing interest in deepening our understanding of the genetic basis of vigor and biomass production. It is well established that growth and vigor are quantitative traits and that their genetic architecture consists of multiple genes with small individual effects. Today, several linkage maps are available, such as Syrah × Grenache, Cabernet Sauvignon × Riesling, and Ramsey × Vitis riparia [22][23][24]. Lowe and Walker concluded that the Ramsey × V. riparia linkage map was a valuable tool with which to examine and map traits like biotic resistance, drought tolerance, and vigor. This map was used to study vigor and to map QTLs in relation to this trait.

Physiological component of vigor

In 1997, under code 9715, at the University of California, Davis, a cross between Ramsey (Vitis champinii) and Vitis riparia Gloire de Montpellier (Figure 1) was made. The purpose of this cross was to study biotic resistances. Later, it was observed that the population also segregated for vigor and vegetative growth, among other quantitative traits [24]. This provided the opportunity to inquire into the genetic and mechanistic bases of this characteristic. This population is a pseudo F1 cross of Ramsey and V. riparia GM. In grapevine, the high heterozygosity makes it impossible to recover pure homozygous lines and obtain F2 crosses or backcrosses. Segregation is possible in pseudo F1 populations. In this way, our F1 from Ramsey and V. riparia GM was obtained with the intention of studying biotic and abiotic resistances and vigor. One hundred thirty-eight genotypes from an F1 progeny between Ramsey and V. riparia GM were evaluated at UC Davis, California, in the summers of 2014 and 2015. Shoot growth rate (b); leaf area (LA); leaf, shoot, and root dry biomasses (DWL, DWS, DWR); plant hydraulic conductance (k_H); root hydraulic conductance (Lp_r); stomatal conductance (g_s); and water potential (Ψ) were measured as vigor-related traits. Specific leaf area (SLA: LA/leaf biomass) was calculated, and QTL mapping and detection were performed on both parental and consensus maps. A complete description of the techniques and methods used to measure and assess the variables studied is published in Hugalde et al. [25]. Hydraulic variables were not mapped, as they were measured in a smaller number of genotypes given the time-consuming nature of the methods that assess them. However, significant statistics evidenced an important role of root hydraulics in vigor definition [25]. A principal component analysis (PCA) of a subset of 50 genotypes explained 80% of the variability (Figure 2). Component 1 showed strong positive effects of LA, growth rate (b), and root dry weight (DWR), while a strong negative effect was found for specific root hydraulic conductance (Lp_r, hydraulic conductance per gram of dry biomass). This negative effect indicates that higher vigor corresponds to lower Lp_r, meaning that smaller plants with smaller root systems tend to be, per unit of biomass, more effective in water absorption than vigorous plants. This was also observed by Lovisolo et al. [26] in olive dwarfing rootstocks, by Herralde et al. [17] when studying grapevine rootstocks under water stress, and by Kaldenhoff et al. [27] in Arabidopsis thaliana carrying an antisense construct targeted to the PIP1b aquaporin gene.
Later, similar results were observed in kiwi plants, where leaf-area-specific conductance and g_s were both higher in the low-vigor rootstocks [28]. Finally, one more study, with two chickpea progenies, showed the same type of behavior, with the low-vigor plants being the ones with higher root hydraulic conductivity and higher transpiration rates [29]. This higher Lp_r in the small root systems of low-vigor plants seems to compensate for the low biomass production, while vigorous plants, which may be less efficient per unit of biomass, have bigger root systems, with more biomass accumulation and, consequently, a higher total root hydraulic conductance. For component 2, positive effects were explained by specific leaf area (SLA) and by the partitioning index constituted by leaf area (LA) and total biomass. SLA is an important parameter of growth rate because the larger the SLA, the larger the area for capturing light per unit of previously captured mass. These indices indicate that genotypes with different vigor also have different partitioning pathways: for vigorous plants, more LA relative to total biomass can be expected, while for smaller plants the opposite is expected. However, when comparing dry weights (biomass), low-vigor plants tend to have small canopies and also small root systems. This clearly shows how LA, which depends on leaf biomass and on the hydraulic situation (turgor that allows cell expansion), differs between opposite genotypes. Big plants with higher total plant hydraulic conductance have more leaf area, with respect to their biomasses, than small plants [25].

Genetic component of vigor: QTL mapping in a grapevine population

The Ramsey × V. riparia GM progeny showed transgressive segregation and significant differences between small, intermediate, and big plants. Figure 3 shows vigor (canopy biomass, B) for the complete progeny and the parents for 2014. Data for 2015 (not shown) showed similar results [25]. For V. riparia GM, during the first year of study, 16 QTLs significant at the chromosome level were found (LOD scores higher than the threshold value calculated after 1000 permutations, for α = 0.05), but only three were significant genome-wide (LOD scores higher than the threshold calculated for the genome). The partitioning indices related to canopy vs. root biomass were significant at the group level and considered putative (Table 1). For LA vs. total plant biomass and for SLA, QTLs explaining 11.4 and 9% of the variance were found in chromosome 1, next to a putative QTL for LA. For LA, another QTL, explaining 12% of total variance, was found in chromosome 4. During the second year of study and mapping, the parental map of V. riparia GM showed five QTLs significant at the chromosome level (Table 2). This time, chromosomes 4 and 16 once more showed QTLs for traits related to biomass partitioning and LA. This result gave us good confidence in these QTLs, previously considered putative but found in two independent mapping processes. On the other hand, for variables like SLA and growth rate, new QTLs were found during 2015. For the parental Ramsey map (Table 3), during 2014, the first year of mapping, seven putative QTLs were found. LA/total biomass, SLA, and partitioning indices were mapped. No QTLs for LA, growth rate, canopy, or total biomass could be detected.
During the second mapping, in 2015, Ramsey showed 21 QTLs (Table 4), among which four were genome-wide significant, with all the rest considered putative (significant at the chromosome level). Among these putative QTLs, it is worth mentioning that the mapped traits were LA, growth rate, canopy, and total biomass, also found in the V. riparia GM map. In addition, one of the putative QTLs corresponded to shoot biomass (DWS), also found in chromosome 14 in 2014. The four genome-wide significant QTLs were found in chromosomes 1 and 19 of the Ramsey map, corresponding to partitioning variables like DWR/DWL, DWR/total biomass, canopy/total biomass, and LA/total biomass. This last trait, which explains 11% of the phenotypic variance, has almost the same biological meaning as SLA, as it represents the capacity of the plant to transform biomass from its whole body into a sunlight-receiving screen for photosynthesis. The consensus map of 2014 (Table 5) showed QTLs significant at the chromosome level, but not genome-wide. There was positive interaction in chromosomes 5 and 7 for leaf density and in chromosomes 5, 4, and 13 for LA, variables that were not mapped in the parents. In these consensus maps, significant QTLs were also mapped in chromosomes 3, 10, and 11 for canopy biomass (what we consider vigor), LA, and biomass partitioning (canopy/DWR). Negative interaction was also found in chromosome 13 of Ramsey. LA/total biomass, LA/DWR, SLA, and DWS/total biomass were mapped in the parental map but were not found in the consensus map. With regard to the consensus map of 2015 (Table 6), many QTLs that were not mapped in 2014 were mapped this time. Six QTLs were found to be significant at the chromosome level, while only one was significant genome-wide. In chromosome 19, one QTL for LA/total biomass, also found in Ramsey, explained 15% of total variance. As observed in 2014, negative interaction was also found in 2015. This time, DWS, canopy, leaf number, growth rate, total biomass, canopy/DWR, DWR/DWS, SLA, and DWS/DWL were mapped in the parental map of Ramsey but were not found in the consensus map. The same happened for SLA and growth rate with reference to the V. riparia GM parental map, which showed QTLs for these traits that were not found in the consensus map.

Identifying other quantitative traits in grapevine: QTL maps and underlying phenotypes

One major purpose in grapevine genetics is to identify quantitative trait loci, and the underlying genes, that explain the natural genetic variation of specific traits. The frequently quantitative nature of genetic variation in grapevine requires the use of QTL mapping to understand the genetic architecture of traits. Several maps have been created and studied in grapevine for these purposes. Crosses between contrasting varieties have given rise to several progenies that constitute the basis for QTL/genetic mapping. Agronomically interesting traits such as resistance to powdery and downy mildew, Phylloxera, Pierce's disease, and Xiphinema were studied in V. vinifera complex hybrids, V. cinerea, V. rupestris, and V. arizonica [30][31][32][33][34][35][36][37]. QTLs related to growth and development were found in progenies like Picovine × Ugni blanc [38], Riesling × Gewurztraminer [39], and Syrah × Grenache [40]. Also, in V. vinifera complex hybrids and V. cinerea, V. rupestris, and V.
arizonica, traits related to plant physiology were studied: flowering and ripening dates, flower sex, and mineral deficiencies [21, 30-32, 41, 42]. Additionally, in Syrah × Pinot Noir, Grzeskowiak et al. [43] detected QTLs related to budburst, the beginning of flowering, the onset of ripening (véraison), and total fertility, while Bayo Canha [44] studied Monastrell × Syrah in search of QTLs related to phenology, enology-related traits, and productive and morphological traits. Breeding purposes include a wide spectrum of objectives. Classic breeding programs have searched for biotic and abiotic resistances, as well as for production, quality, growth, and developmental characteristics. Genomic studies and genetic mapping can significantly speed up the selection of seedlings with desired traits. Early identification of individuals carrying the desired allele combinations results in decreased maintenance and evaluation costs. The identification of genes and molecular markers underlying specific traits will help accelerate the breeding process, generating new prospects for crop improvement [44].

Conclusions

Vigor, a quantitative character, is particularly difficult to address. A large number of variables need to be studied in order to achieve a fine comprehension of the phenomena involved. In our study, we analyzed vigor from a broad physiological view and with a genetic mapping approach. The mathematical function that represents growth, called sigmoid, starts with an initial plateau where small effects occur. Later, as these small effects accumulate and cause successive effects, the function turns exponential. For quantitative characters, where positive feedbacks (typically exponential) can cause large effects, low but statistically significant explanatory levels, like the QTLs found, as well as the physiological results, may have considerable effects. It is interesting to observe that many variables that proved physiologically significant in explaining vigor could be mapped, and significant QTLs were found for them. The most important ones, SLA, LA, and LA/total biomass, were significant in the PCA as well as in the QTL mapping. Previous studies support our findings. When mapping the Picovine × Ugni blanc population, Houel et al. [38] also found a QTL for LA in chromosome 4 of the parental map of Ugni blanc and one QTL for LA in chromosome 9 of Picovine. In addition, QTLs related to budbreak explaining 11 and 12% of the variation were mapped in chromosomes 4 and 19 in the Riesling × Gewurztraminer population [39], and five QTLs for growth rate were found in linkage groups 4, 10, 15, 17, and 18 in the Syrah × Grenache population, altogether accounting for up to 30% of total variance [40]. Moreover, Díaz-Riquelme et al. [45] found that five MIKC genes (which encode transcription factors with growth and developmental functions in plants) of grapevine are localized in chromosome 1. In our mapping, the largest number of QTLs was found in chromosomes 1, 3, 5, 13, and 19, coincident with other studies. After the QTL mapping, the next step would be to undertake the search for candidate genes by saturating the portion of the chromosome that includes the QTL of interest and narrowing down the piece of DNA that includes the candidate genes. As an example, by saturating chromosome 19, we could try to find candidate genes for the expression of the relation between LA and biomass production. This would finally support a breeding strategy, where having a more efficiently growing plant could turn out to be important. Vigor in grapevine, as with many quantitative traits, appears to have a complex genetic background. This character, besides its biological significance, has a wide agronomic impact, not only related to plant behavior but also linked to the amount and quality of the harvest. In this paper, the analysis of a segregating progeny of Ramsey × V. riparia GM was able to identify several vigor-linked traits with good statistical support. Whereas the effect expected to be explained by each individual trait appears to be small, it will shed light on this complex character. The phenotyping of segregating progenies constitutes a valuable tool for clarifying the genetic basis of traits of a complex nature. An accurate choice of the parameters to be studied is crucial in order to optimize the experimental procedure and data analysis. In consequence, a prior understanding of the physiological basis of a trait of interest, or at least a very well-supported hypothesis, should guide a population genetics study. When these issues are considered, the obtained results will be able to achieve the expected goal.

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Alzheimer Disease Aβ Production in the Absence of S-Palmitoylation-dependent Targeting of BACE1 to Lipid Rafts*

Alzheimer disease β-amyloid (Aβ) peptides are generated via sequential proteolysis of amyloid precursor protein (APP) by BACE1 and γ-secretase. A subset of BACE1 localizes to cholesterol-rich membrane microdomains, termed lipid rafts. BACE1 processing in raft microdomains of cultured cells and neurons was characterized in previous studies by disrupting the integrity of lipid rafts by cholesterol depletion. These studies found either inhibition or elevation of Aβ production depending on the extent of cholesterol depletion, generating controversy. The intricate interplay between cholesterol levels, APP trafficking, and BACE1 processing is not clearly understood because cholesterol depletion has pleiotropic effects on Golgi morphology, vesicular trafficking, and membrane bulk fluidity. In this study, we used an alternate strategy to explore the function of BACE1 in membrane microdomains without altering the cellular cholesterol level. We demonstrate that BACE1 undergoes S-palmitoylation at four Cys residues at the junction of transmembrane and cytosolic domains, and Ala substitution at these four residues is sufficient to displace BACE1 from lipid rafts. Analysis of wild type and mutant BACE1 expressed in BACE1 null fibroblasts and neuroblastoma cells revealed that S-palmitoylation neither contributes to protein stability nor subcellular localization of BACE1. Surprisingly, non-raft localization of palmitoylation-deficient BACE1 did not have discernible influence on BACE1 processing of APP or secretion of Aβ. These results indicate that post-translational S-palmitoylation of BACE1 is not required for APP processing, and that BACE1 can efficiently cleave APP in both raft and non-raft microdomains.
Alzheimer disease-associated β-amyloid (Aβ) peptides are derived from the sequential proteolysis of β-amyloid precursor protein (APP) by β- and γ-secretases. The major β-secretase is an aspartyl protease, termed BACE1 (β-site APP-cleaving enzyme 1) (1-4). BACE1 cleaves APP within the extracellular domain of APP, generating the N terminus of Aβ. In addition, BACE1 also cleaves to a lesser extent within the Aβ domain between Tyr10 and Glu11 (β'-cleavage site). Processing of APP at these sites results in the shedding/secretion of the large ectodomain (sAPPβ) and generates membrane-tethered C-terminal fragments +1 and +11 (β-CTF) (5). The multimeric γ-secretase cleaves at multiple sites within the transmembrane domain of β-CTF, generating C-terminally heterogeneous Aβ peptides (ranging in length between 38 and 43 residues) that are secreted, as well as cytosolic APP intracellular domains (6). In addition to BACE1, APP can be cleaved by α-secretase within the Aβ domain between Lys16 and Leu17, releasing sAPPα and generating α-CTF. γ-Secretase cleavage of α-CTF generates N-terminally truncated Aβ, termed p3. BACE1 is a type I transmembrane protein with a long extracellular domain harboring a catalytic domain and a short cytoplasmic tail. BACE1 is synthesized as a proenzyme, which undergoes post-translational modifications that include removal of a pro-domain by a furin-like protease, N-glycosylation, phosphorylation, S-palmitoylation, and acetylation, during transit in the secretory pathway (16-20). In non-neuronal cells the majority of BACE1 localizes to the late Golgi/TGN and endosomes at steady state, and a fraction of BACE1 also cycles between the cell surface and endosomes (21). The steady-state localization of BACE1 is consistent with the acidic pH optimum of BACE1 in vitro, and BACE1 cleavage of APP is observed in the Golgi apparatus, TGN, and endosomes (22-25). BACE1 endocytosis and recycling are mediated by the GGA family of adaptors binding to a dileucine motif (496DISLL) in its cytoplasmic tail (21, 26-31). Phosphorylation at Ser498 within this motif modulates GGA-dependent retrograde transport of BACE1 from endosomes to the TGN (21, 26-31). Over the years, a functional relationship between cellular cholesterol level and Aβ production has been uncovered, raising the intriguing possibility that cholesterol levels may determine the balance between amyloidogenic and non-amyloidogenic processing of APP (32-34). Furthermore, several lines of evidence from in vitro and in vivo studies indicate that cholesterol- and sphingolipid-rich membrane microdomains, termed lipid rafts, might be the critical link between cholesterol levels and amyloidogenic processing of APP. Lipid rafts function in the trafficking of proteins in the secretory and endocytic pathways in epithelial cells and neurons, and participate in a number of important biological functions (35). BACE1 undergoes S-palmitoylation (19), a reversible post-translational modification responsible for targeting a variety of peripheral and integral membrane proteins to lipid rafts (36).
Indeed, a significant fraction of BACE1 is localized in lipid raft microdomains in a cholesterol-dependent manner, and addition of a glycosylphosphatidylinositol (GPI) anchor to target BACE1 exclusively to lipid rafts increases APP processing at the β-cleavage site (37,38). Antibody-mediated co-patching of cell surface APP and BACE1 has provided further evidence for BACE1 processing of APP in raft microdomains (33,39). Components of the γ-secretase complex also associate with detergent-resistant membrane (DRM) fractions enriched in raft markers such as caveolin, flotillin, PrP, and ganglioside GM1 (40). The above findings suggest a model whereby APP is sequentially processed by BACE1 and γ-secretase in lipid rafts. Despite the accumulating evidence, cleavage of APP by BACE1 in non-raft membrane regions cannot be unambiguously ruled out because of the paucity of full-length APP (APP FL) and BACE1 in DRM isolated from adult brain and cultured cells (41). Moreover, it was recently reported that moderate reduction of cholesterol (<25%) displaces BACE1 from raft domains, and increases BACE1 processing by promoting the membrane proximity of BACE1 and APP in non-raft domains (34). Nevertheless, this study also found that BACE1 processing of APP is inhibited with further loss of cholesterol (>35%), consistent with earlier studies (32,33). Moreover, given the pleiotropic effects of cholesterol depletion on membrane properties and vesicular trafficking of secretory and endocytic proteins (42-47), unequivocal conclusions regarding BACE1 processing of APP in lipid rafts cannot be reached based on cholesterol depletion studies. In this study, we explored the function of BACE1 in lipid raft microdomains without manipulating cellular cholesterol levels. In addition to the previously reported S-palmitoylation sites (Cys478/Cys482/Cys485) within the cytosolic tail of BACE1 (19), we have identified a fourth site (Cys474) within the transmembrane domain of BACE1 that undergoes S-palmitoylation. A BACE1 mutant with Ala substitution of all four Cys residues (BACE1-4C/A) fails to associate with DRM in cultured cells, but is not otherwise different from wtBACE1 in terms of protein stability, maturation, or subcellular localization. Surprisingly, APP processing and Aβ generation were unaffected in cells stably expressing the BACE1-4C/A mutant. Finally, we observed an increase in the levels of APP CTFs in detergent-soluble fractions of BACE1-4C/A as compared with wtBACE1 cells. Thus, our data collectively indicate a non-obligatory role of S-palmitoylation and lipid raft localization of BACE1 in amyloidogenic processing of APP.

EXPERIMENTAL PROCEDURES

cDNA Constructs-Plasmids encoding C-terminal FLAG-tagged wtBACE1 and 3C/A (C478A/C482A/C485A) (19) and hemagglutinin-tagged Asp-His-His-Cys (DHHC)-rich protein acyltransferases (PATs) have been described (48). A plasmid containing placental alkaline phosphatase (PLAP) cDNA was obtained from ATCC (clone MGC-5096). BACE1-3C/A cDNA was used as the template to generate BACE1-4C/A (C474A/C478A/C482A/C485A) by PCR mutagenesis, and the amplified segment was verified by sequencing. BACE1-GPI cDNA was constructed by overlap PCR by replacing the transmembrane and C-terminal sequences of BACE1 with the GPI anchor domain from PLAP. For retroviral expression, the cDNAs were subcloned into the retroviral vector pMXpuro (provided by Dr. Toshio Kitamura, University of Tokyo, Japan) or pLHCX (Clontech).
To construct a retroviral vector for low-level transgene expression (pMXpuroIRES), we cloned the internal ribosome entry site (IRES) from pIRES (Clontech) downstream of the puromycin resistance cassette in the pMX vector. BACE1 cDNAs were then subcloned downstream of the IRES. Retroviral Infections and Generation of Stable Cell Lines-BACE1−/− mouse embryonic fibroblasts (MEF) have been previously described (49). N2a cells stably expressing c-Myc epitope-tagged wtAPP (N2a 695.13) and APP Swe (N2a Swe.10) have been described previously (24). The Plat-E retroviral packaging cell line was kindly provided by Dr. Toshio Kitamura (University of Tokyo, Japan). Retroviral infections were performed as described previously (50). Briefly, retroviral supernatants collected 48 h after transfection of Plat-E cells with expression vectors were used to infect BACE1−/− MEF or N2a cells in the presence of 10 µg/ml Polybrene. Stably transduced pools of MEF, N2a 695.13, or Swe.10 cells were selected in the presence of 4 µg/ml puromycin or hygromycin (400 µg/ml). Lipid Raft Fractionation-Lipid rafts were isolated from 0.5% Lubrol WX (Lubrol 17A17; Serva) lysates of cultured cells by discontinuous flotation density gradients as described previously (40,41). For the analysis of cell surface rafts, subconfluent cultures were surface biotinylated with NHS S-S biotin (Pierce) as described previously (24) and then subjected to lipid raft fractionation. Cell surface-biotinylated proteins in gradient fractions were captured with streptavidin beads (Pierce) and analyzed by immunoblotting. For quantifications, optimal exposures of Western blots were analyzed by standard densitometry, and a transmission calibration step tablet (Stouffer Industries, Inc.) was used to convert raw optical densities to relative fold-differences in signal intensity using Metamorph software (Molecular Devices). Protein Analyses-Metabolic and pulse-chase labeling using [35S]Met/Cys were performed essentially as described (24,53). To assess the stability of BACE1, parallel dishes were pulse-labeled for 30 min with 250 µCi/ml [35S]Met/Cys (MP Biomedicals) and chased for various time points. BACE1 was immunoprecipitated from cell lysates using anti-BACE1 antibody. For analysis of APP, cells were pulse-labeled for 15 min or continuously labeled for 3 h. Full-length APP and APP CTFs were immunoprecipitated from cell lysates using CTM1 antibody. Aβ and p3 fragments were immunoprecipitated from the conditioned medium using mAb 4G8. β-CTFs (starting at the +1 residue of Aβ) were identified by probing the blots with mAb 26D6. Aβ, sAPPα, and sAPPβ Measurements-Conditioned media were collected 48 h after plating the cells, and the levels of secreted Aβ and sAPPα were quantified by ELISA as described previously (53). Aβ1-40, Aβx-40, and Aβ1-x were measured using specific sandwich ELISAs. Aβ peptides were captured using mAb B113 for the Aβ1-40 and Aβx-40 ELISAs, and mAb B436 for the Aβ1-x ELISA. Bound peptides were detected using alkaline phosphatase-conjugated mAb B436 for the Aβ1-40 ELISA or biotinylated mAb 4G8 in combination with the streptavidin-alkaline phosphatase complex for the Aβx-40 and Aβ1-x ELISAs. Alkaline phosphatase activity was measured using CSPD-Sapphire II Luminescence Substrate (Applied Biosystems), and relative luminescence unit values were measured using a standard 96-well luminometer.
Each sample was assayed in duplicate using an appropriate dilution of the conditioned media so that the relative luminescence units were in the linear range of the standards included on each plate. Synthetic Aβ40 peptide (Bachem) was diluted in culture medium to generate a standard curve (ranging from 1 to 1000 pg/well). sAPPα was quantified by sandwich ELISA using mAb 5228 for capture and mAb B436 for detection, and quantified using a standard curve prepared with affinity-purified sAPPα as described (53). sAPPβ was quantified using a commercial sAPPβ wild-type ELISA kit and a Meso Scale Sector Imager 6000 (Meso Scale Discovery, Gaithersburg, MD) for detection, following the manufacturer's recommended protocol. Captured sAPPβ was quantified by comparing the signals of the samples to a standard curve included on each plate prepared using recombinant sAPPβ in complete medium. Analysis of BACE1 Palmitoylation-COS7 cells were cotransfected with BACE1 and DHHC plasmids using Lipofectamine 2000 (Invitrogen) and labeled 48 h after transfection. Stable pools of BACE1−/− MEF and N2a cells were grown to subconfluence prior to labeling. Cells were preincubated for 1 h in Dulbecco's modified Eagle's medium supplemented with 1 mg/ml fatty acid-free bovine serum albumin (Sigma) and labeled for 4 h with 0.5 mCi/ml [3H]palmitic acid (American Radiolabeled Chemicals) diluted in the preincubation medium. Cells were scraped in lysis buffer (150 mM NaCl, 50 mM Tris-HCl, pH 7.4, 0.5% Nonidet P-40, 0.5% sodium deoxycholate, 5 mM EDTA, 0.25% SDS, 0.25 mM phenylmethylsulfonyl fluoride, Roche Protease Inhibitor Mixture 1X) and sonicated for 30 s on ice. Aliquots of lysates (adjusted to trichloroacetic acid-precipitable radioactivity) were incubated overnight with 2 µl of anti-FLAG M2 antibody to immunoprecipitate BACE1. Immunoprecipitates were fractionated by SDS-PAGE, transferred to polyvinylidene difluoride membrane (Bio-Rad), and [3H]palmitic acid-labeled BACE1 was detected by PhosphorImager analysis (GE Healthcare). The membranes were subsequently subjected to Western blotting with FLAG M2 antibody (1:20,000) to reveal total immunoprecipitated BACE1. To compare the relative efficiencies of BACE1 palmitoylation by each DHHC, the ratio between the [3H]palmitic acid-labeled BACE1 and immunoblot BACE1 signal intensities was quantified using ImageJ software. Immunofluorescence Microscopy-Cells cultured on poly-L-lysine-coated coverslips were fixed using 4% paraformaldehyde. Polyclonal BACE1 antiserum 7523 and mAb against γ-adaptin or transferrin receptor were diluted in phosphate-buffered saline containing 3% bovine serum albumin and 0.2% Tween 20, and incubated with fixed cells at room temperature for 2 h. Images were acquired on a Zeiss confocal microscope (Pascal) using a ×100 1.45 NA Plan-Apochromat oil objective. Images were processed using Metamorph software (Molecular Devices).

RESULTS

BACE1 Is S-Palmitoylated at 4 Cysteine Residues-Previously it was reported that BACE1 is S-palmitoylated at three Cys residues (Cys478, Cys482, and Cys485) within its cytosolic tail (Fig. 1A) (19). In stably transduced BACE1−/− MEF, overexpressed wtBACE1 can be readily labeled with [3H]palmitic acid (Fig. 1B). As described previously, labeling was significantly reduced, but still clearly detectable, in MEF stably expressing a BACE1 mutant (BACE1-3C/A) harboring Cys to Ala substitutions at the three Cys residues previously identified as the sites of S-palmitoylation (Fig. 1B).
This suggested that additional site(s) in BACE1 might be S-palmitoylated. To test whether Cys474, located within the putative transmembrane domain, is a target site for S-palmitoylation, we introduced a Cys to Ala substitution at this position. Combined substitution of all four Cys residues with Ala (BACE1-4C/A) completely abolished S-palmitoylation of BACE1 when stably expressed in BACE1-/- MEF (Fig. 1B). These results were confirmed by stably expressing wtBACE1 and the BACE1-4C/A mutant in neuronal (N2a) cells. These results indicate that BACE1 is palmitoylated at four Cys residues located at the cytoplasmic membrane boundary. S-Palmitoylation Is Not Required for Stability of BACE1-Post-translational S-palmitoylation is known to function as a regulatory mechanism, which in many cases confers stability to the target protein (reviewed in Ref. 54). Examples include the γ-secretase subunits nicastrin and APH-1, sortilin, the cation-independent mannose 6-phosphate receptor, chemokine receptor CCR5, the human A1 adenosine receptor, and the Rous sarcoma virus transmembrane protein (54-56). Therefore, we performed pulse-chase experiments using [35S]Met/Cys labeling to test the stability of nascent wtBACE1 and the palmitoylation-deficient BACE1 mutant. Parallel dishes of N2a cells stably expressing either wtBACE1 or BACE1-4C/A were pulse-labeled for 30 min and chased for varying lengths of time in the presence of cycloheximide. At the end of the chase period, cell lysates were prepared and analyzed by immunoprecipitation with anti-BACE1 antibodies. In agreement with previous findings (16), maturation of wtBACE1 into a complex glycosylated protein with markedly reduced migration on SDS gels was evident at 2 h of chase. Mature wtBACE1 was relatively stable up to 8 h of chase (Fig. 1C). Maturation and stability of BACE1-4C/A were indistinguishable from those of wtBACE1. These results indicate that S-palmitoylation is not required for the stability of BACE1. Characterization of BACE1 S-Palmitoylation by the DHHC Family of Protein Acyltransferases-A family of 23 integral membrane PATs, which share a characteristic DHHC-cysteine-rich domain, mediates protein palmitoylation in humans (48). Co-expression of individual PATs with the substrate protein, to screen for PATs that increase incorporation of radiolabeled palmitate, is the method of choice to identify a cognate enzyme-substrate pair. This pioneering approach was first established for the identification of PATs that specifically enhance palmitoylation of the neuronal scaffold protein PSD-95 (48). Using this approach, we sought to identify the PATs that increase palmitate incorporation into BACE1. We co-transfected COS7 cells individually with each of the 23 DHHC cDNAs together with wtBACE1, and examined S-palmitoylation of BACE1 by [3H]palmitic acid labeling and immunoprecipitation (Fig. 1D). In this experiment, we also used BACE1-4C/A as an internal negative control for the lack of [3H]palmitic acid labeling of BACE1. Transfection of BACE1 in COS7 cells allowed us to detect both immature and mature (complex glycosylated) BACE1 (Fig. 1D). Quantification revealed that co-expression of DHHC 3, 4, 7, 15, and 20 enhanced immature BACE1 palmitoylation by 1.4-1.7-fold (Fig. 1D). S-Palmitoylation of immature, core-glycosylated BACE1 in transiently transfected COS7 cells indicates that nascent BACE1 can undergo S-palmitoylation in the endoplasmic reticulum or during early secretory trafficking through the Golgi apparatus.
S-Palmitoylation Does Not Affect the Subcellular Localization of BACE1-BACE1 undergoes complex post-translational modifications that are particularly important in defining its subcellular organelle destination and steady-state distribution. For example, reversible acetylation in the luminal domain of BACE1 regulates its stability as well as exit from the ER (20,57). Moreover, phosphorylation regulates the fate of BACE1 endocytosed from the cell surface; whereas phosphorylated BACE1 is transported from the endosomes to the TGN, non-phosphorylated BACE1 is directly recycled to the cell surface (18,29). Therefore we sought to investigate the importance of palmitoylation on subcellular localization of BACE1. To this end, we performed immunofluorescence microscopy analysis of wtBACE1 and BACE1-4C/A. The polyclonal antibody 7523 did not show any background staining in BACE1 Ϫ/Ϫ MEF transduced with an empty retroviral vector. In agreement with previous reports (16,58), analysis of BACE1 Ϫ/Ϫ MEF stably expressing wtBACE1 revealed predominant co-localization of wtBACE1 with ␥-adaptin and transferrin receptor, which are markers of the TGN and endosomes, respectively (Fig. 2, A and B). Similar to wtBACE1, the BACE1-4C/A mutant also mainly localized to the TGN and endosomes. The studies described above did not reveal any difference in TGN and endosome localization of wtBACE1 and BACE1-4C/A mutant suggesting that lack of S-palmitoylation did not affect the steady-state distribution of BACE1 in intracellular organelles. Next, we performed cell surface biotinylation studies to determine whether S-palmitoylation regulates the steadystate levels of BACE1 at the cell surface. In control N2a cells, following surface biotinylation ϳ5% of endogenous BACE1 bound to streptavidin beads, indicating that only a very small fraction of BACE1 is present at the cell surface at steady-state. Similar to endogenous BACE1, ϳ6% (WT, 6.24 Ϯ 0.93, versus 4C/A, 6.82 Ϯ 1.17) of stably overexpressed wtBACE1 and BACE1-4C/A was found to reside at the cell surface (Fig. 2B). Reprobing the same blots showed that Ͼ70% of CD147, a cell surface-localized type I transmembrane protein is isolated by this method in these experiments (Fig. 2C). Together, these results indicate that S-palmitoylation of BACE1 does not regulate intracellular or cell surface distribution of BACE1. S-Palmitoylation Is Required for Association of BACE1 with Lipid Raft Membranes-S-Palmitoylation is an essential signal for lipid raft association of several soluble and integral membrane proteins (59). However, not all palmitoylated proteins are targeted to lipid rafts. Therefore, we were interested to determine the role of S-palmitoylation in lipid raft targeting of BACE1. Lipid rafts are biochemically defined as detergent-resistant membrane microdomains that resist extraction with detergents such as Triton X-100 and Lubrol at 4°C (60). Although DRM isolated by biochemical fractionation differ in some characteristics from pre-existing lipid raft domains in live cell membranes, DRM fractionation remains as the standard method to identify raft-targeting signals (36). In previous studies we characterized lipid raft association of ␥-secretase as well as APP CTFs in cultured cells and mouse brain by fractionation of membranes solubilized in Lubrol WX (40,41). We used the same strategy to assess DRM association of wtBACE1, BACE1-3C/A, and BACE1-4C/A. 
Membrane rafts from stably transduced BACE1 Ϫ/Ϫ MEF pools were prepared on the basis of detergent insolubility and low buoyant density on sucrose density gradients, essentially as described (40). Fractions enriched FEBRUARY 6, 2009 • VOLUME 284 • NUMBER 6 in DRMs were identified by the enrichment of lipid raft marker, flotillin-2. In agreement with previous studies (37,41), only a subset of wtBACE1 was recovered in gradient fractions enriched in flotillin-2 (Fig. 3A). We found a small decrease in the extent of DRM association of BACE1-3C/A mutant when compared with wtBACE1. On the other hand, the BACE1-4C/A palmitoylation-deficient mutant showed remarkable loss of DRM association (Fig. 3A). These results were confirmed by lipid raft fractionation of N2a neuroblastoma cells stably expressing wtBACE1 or BACE1-4C/A mutant (Fig. 3B). Quantification from three independent experiments showed that about 25% (25.18 Ϯ 1.17) of wtBACE1 is found in fractions enriched in lipid rafts. Consistent with the results obtained from the BACE1 Ϫ/Ϫ MEF described above, lack of S-palmitoylation resulted in a 10-fold reduction in raft association of BACE1-4C/A mutant (Fig. 3B). Raft Targeting Is Dispensable for BACE1 Processing of APP Next, we examined raft and non-raft distribution of BACE1 localized at the plasma membrane. For these studies, we surface biotinylated stable N2a cells expressing wtBACE1 or BACE1-4C/A prior to lipid raft fractionation. Total cell surface proteins were isolated using streptavidin from pooled raft and non-raft fractions. Quantification of relative distribution of wtBACE1 showed that Ͻ1% wtBACE1 expressed in N2a cells is localized in cell surface DRMs, whereas about 6% of total wtBACE1 was found in detergent-soluble domains at the cell surface (Fig. 3, C and D). Analysis of cells expressing BACE1-4C/A revealed that lack of S-palmitoylation impaired cell surface raft association of mutant BACE1. However, as expected from surface biotinylation studies (Fig. 2B), the levels of WT and mutant BACE1 were similar in cell surface non-raft domains (Fig. 3, C and D). Together, these results indicate that S-palmitoylation at four Cys residues mediates lipid raft localization of BACE1 in both neuronal and non-neuronal cells. S-Palmitoylation of BACE1 Is Not Required for APP Processing-The studies described above indicate that the lack of S-palmitoylation in BACE1 does not affect protein stability or subcellular localization but completely displaced BACE1 from lipid raft domains. Thus, the BACE1-4C/A mutant is ideal to ascertain the importance of lipid raft residence of BACE1 on APP processing without pharmacological manipulation of cellular cholesterol levels. To facilitate APP metabolism studies, we first retrovirally infected a stable N2a cell line overexpressing wtAPP (N2a 695.13) and generated a stable pool of cells expressing various levels of wtBACE1 or BACE1-4C/A. Quantifications from [ 35 S]Met/Cys labeling experiments indicate that the stable pools generated using pMX-IRES and pMX retroviral vectors overexpress BACE1 at 2-or 10-fold higher than endogenous BACE1 expression, respectively (Fig. 4A). Short pulse labeling with [ 35 S]Met/Cys showed that wtAPP expression is similar in the stable pools analyzed (data not shown). Continuous labeling for 3 h showed a marked reduction in the levels of mature APP in stable cells overexpressing wtBACE1, indicative of efficient processing by BACE1, as observed in previous studies (61). 
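Quantifying the percent raft association reported above requires correcting the raw band intensities for the fact that different proportions of the pooled fractions are loaded on the gel (per the figure legend further below, 10% of the pooled raft fractions and 4% of the non-raft fractions). A minimal sketch of that correction, with hypothetical densitometry values chosen to echo the roughly 25% raft association of wtBACE1 and the roughly 10-fold lower value for BACE1-4C/A:

```python
def percent_raft(raft_band, nonraft_band, raft_loaded=0.10, nonraft_loaded=0.04):
    """Estimate the raft-associated fraction of a protein from Western blot band
    intensities, scaling each band back to the signal of its whole pooled fraction."""
    raft_total = raft_band / raft_loaded
    nonraft_total = nonraft_band / nonraft_loaded
    return 100.0 * raft_total / (raft_total + nonraft_total)

# Hypothetical band intensities (arbitrary densitometry units)
print(percent_raft(3.3, 4.0))    # ~25%, comparable to wtBACE1
print(percent_raft(0.33, 5.0))   # ~2.6%, illustrating the ~10-fold loss seen for BACE1-4C/A
```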
To confirm this notion, we examined the proteolytic products derived from BACE1 cleavage of APP, i.e. ␤-APP CTFs and sAPP␤. To determine the levels of ␤-APP CTFs we immunoprecipitated detergent lysates prepared from cells labeled for 3 h with [ 35 S]Met/Cys using APP C-terminal antibodies. Low level overexpression of wtBACE1 resulted in a small increase in the levels of APP ␤-CTFs ϩ1 and ϩ11 CTFs, relative to vector control cells (Fig. 4B). The increase in ␤-CTFs ϩ1 and ϩ11 CTFs was readily apparent in cells expressing wtBACE1 at high levels. Contrary to expectation, non-raft localized BACE1-4C/A also efficiently proteolyzed APP, and generated ␤-APP CTFs at levels comparable with that of wtBACE1. When the cells were treated with a ␥-secretase inhibitor (Compound E), we observed a marked accumulation of APP CTFs as expected, but still found no quantitative differences in the levels of APP CTFs between cells expressing wtBACE1 or BACE1-4C/A (Fig. 4B). To confirm the above findings, we collected conditioned media and performed ELISA to quantify the levels of sAPP␣ and sAPP␤, derived from proc- Cells were solubilized in 0.5% Lubrol WX and subject to sucrose gradient fractionation. The gradients were harvested from the top, and the distribution of BACE1 was determined by Western blot analysis. Fractions containing lipid raft-associated proteins were identified by the presence of raft marker flotillin-2. Fractions 1-3 were excluded because there were no detectable signals for any of the proteins tested. In the case of N2a cells (panel B), raft (4 and 5) and non-raft (8 -12) fractions were pooled and analyzed by Western blotting. The gels were loaded with 10% of pooled raft fractions and 4% of non-raft fractions. Signal intensities from Western blots were quantified as described under "Experimental Procedures" and plotted. Values represent mean Ϯ S.E. of three experiments. C, raft association of BACE1 at the cell surface. Subconfluent dishes of N2a cells were surface biotinylated and then fractionated using sucrose gradient fractionation. Biotinylated proteins were isolated from pooled raft and non-raft fractions using streptavidin beads and analyzed by immunoblotting. Because of the differences in relative abundance, the gels were loaded with 100 (raft) or 40% (non-raft) of material bound to streptavidin-agarose, and 10 (raft) or 4% (non-raft) of input lysate. D, signal intensities of immunoblots were quantified and plotted. essing of APP by ␣and ␤-secretase, respectively. Results showed that the levels of sAPP␤ were markedly elevated with a concomitant decrease in the levels of sAPP␣ in cells overexpressing wtBACE1 as compared with vector cells. The levels of sAPP␤ and sAPP␣ in the media of BACE1-4C/A cells were indistinguishable from that of wtBACE1 cells, providing direct evidence that S-palmitoylation does not contribute to BACE1 activity in cultured cells (Fig. 4C). Collectively, these results demonstrate that BACE1 processing of wtAPP in N2a cells was unaffected by lack of S-palmitoylation and the resulting nonraft localization of BACE1. The FAD-linked "Swedish" APP variant (APP Swe ) differs considerably from wtAPP with respect to intracellular trafficking itinerary and subcellular site(s) of BACE1 processing (reviewed in Ref. 62). Therefore, we decided to analyze processing of APP Swe by wtBACE1 and the BACE1-4C/A mutant. For these studies, we generated stable N2a cells co-expressing APP Swe and wtBACE1 or BACE1-4C/A. 
Similar to what we observed in cells expressing wtAPP, stable overexpression of BACE1 resulted in marked reduction in the levels of mature APP Swe and a concomitant increase in the levels of ␤-APP CTFs relative to APP Swe /vector control cells (Fig. 5A). Overexpression of the BACE1-4C/A mutant also yielded similar levels of ␤-CTFs, further supporting our conclusion that BACE1-4C/A mutant is capable of efficiently processing APP. Finally, to for-mally rule out the contribution of endogenous wtBACE1, we generated BACE1 Ϫ/Ϫ MEF co-expressing APP Swe and wtBACE1 or BACE1-4C/A. As in N2a cells, we observed a marked reduction in mature APP and selective increase in ␤-CTFs upon low or high level expression of either wtBACE1 or BACE1-4C/A. ␤-APP CTFs originating at ϩ1 were distinguished from that of ϩ11 CTFs by selective detection of ϩ1 CTFs using mAb antibody 26D6, which reacts with the N-terminal region of A␤ (Fig. 5B). These results indicate that BACE1 can efficiently cleave APP irrespective of post-translational modification by S-palmitoylation and localization in raft or non-raft microdomains. Raft Association of APP CTFs-The results presented above indicate that the BACE1-4C/A mutant predominantly resides in non-raft membrane domains and is capable of processing APP when overexpressed in cultured cell lines. We reasoned that if APP processing by BACE1-4C/A mutant occurs in detergent-soluble membrane domains, we should be able to see quantitative differences in raft versus non-raft residence of APP CTFs in BACE1-4C/A cells as compared with wtBACE1 cells. To test this notion, we performed lipid raft fractionation of N2a 695.13 cells stably overexpressing wtBACE1 or BACE1-4C/A mutant. Cells were pretreated with 10 nM Compound E to block ␥-secretase processing and cause accumulation of APP CTFs, which facilitates their detection. As expected, we found significant differences in raft versus non-raft distribution of APP CTFs between wtBACE1 and BACE1-4C/A cells (Fig. 6). In wtBACE1 cells, the majority of APP CTFs were found in DRM fractions. In the case of the BACE1-4C/A mutant, there is a considerable shift in the distribution of APP CTFs in the gradient toward the fractions containing detergent-soluble proteins. Quantifications revealed that in the case of wtBACE1, only 15% of APP CTFs was present in detergent-soluble non-raft fractions, whereas 44% of APP CTFs was recovered in non-raft fractions of BACE1-4C/A cells. Reprobing of the blots with raft-resident protein flotillin-2 and PS1 revealed no significant differences in the DRM distribution of these proteins between the cells (Fig. 5A). Thus, we observed a clear shift in the steadystate localization of APP CTFs to detergent-soluble mem- Swe.10 (A) or BACE1 Ϫ/Ϫ APP Swe MEF (B) were lysed and equal amounts of total proteins were analyzed by Western blotting with antibodies against BACE1 and APP. APP FL and APP CTFs were detected by antibody CTM1 (raised against the C terminus of APP) and ␤-CTF were selectively detected using mAb 26D6 (epitope 1-12 of A␤). FEBRUARY 6, 2009 • VOLUME 284 • NUMBER 6 branes in cells expressing BACE1-4C/A, indicative of BACE1-4C/A processing of APP in non-raft domains. 
Raft Targeting Is Dispensable for BACE1 Processing of APP Analysis of A␤ Secretion in Cells Expressing S-Palmitoylation-deficient BACE1-In the experiments described above, we note that at steady-state the majority of APP CTFs in both wtBACE1 and BACE1-4C/A cells (85 and 56%, respectively) were found in lipid raft fractions, suggesting that APP CTFs may have intrinsic signals that facilitate raft recruitment, irrespective of the membrane microdomains where they are generated by BACE1 processing of APP FL. Because ␥-secretase is also found in lipid rafts, one would predict little or no difference in the levels of A␤ generated in BACE1-4C/A cells, relative to wtBACE1 cells. To ascertain A␤ production we performed metabolic labeling with [ 35 S]Met/Cys in N2a cells coexpressing wtAPP and BACE1. By immunoprecipitation analysis of conditioned media we observed increased A␤ secretion by BACE1 overexpression, but found no quantitative differences between cells expressing wtBACE1 or BACE1-4C/A (Fig. 7A). We then quantified the levels of A␤ by ELISA using antibodies capable of detecting A␤ with heterogeneous N and C termini. These studies revealed that overexpression of BACE1 significantly increased the levels of secreted A␤ and that cells overexpressing wtBACE1 and BACE1-4C/A secreted very similar levels of A␤ species (Fig. 7, B-D). These results are consistent with APP FL processing by BACE1 in raft or non-raft domains followed by efficient processing of APP CTFs by ␥-secretase. DISCUSSION Underlying eukaryotic cellular organization and function are the elaborate mechanisms that compartmentalize multiple biological activities not only in distinct organelles but also in specialized membrane microdomains. Evidence from multiple lines of investigations suggests that cholesterol-rich membrane microdomains are involved in the proteolytic processing of APP by transmembrane proteases BACE1 and ␥-secretase (reviewed in Refs. 63 and 64)). Whereas the majority of APP CTFs and ␥-secretase subunits associate with DRM (41), only a subset of BACE1 and APP FL in cultured cells and brain are found in DRM (34,41). A␤ production in lipid rafts despite the discordant DRM distribution of APP FL versus APP CTF and BACE1 versus ␥-secretase can be explained by two possible scenarios: first, only the subset of raft-associated APP FL are processed by BACE1, thus generating APP CTFs within lipid raft microdomains, which are subsequently processed by ␥-secretase; second, a small fraction of APP FL undergoes BACE1 cleavage regardless of raft or non-raft membrane microdomain localization, and the resulting APP CTFs undergo ␥-secretase cleavage within raft domains (34,64). To explore these possibilities, we first characterized BACE1 S-palmitoylation at four Cys residues and report that site-directed mutagenesis of these Cys residues is sufficient to prevent BACE1 targeting to lipid rafts without altering BACE1 stability, maturation, or subcellular localization. We then compared APP processing by wtBACE1 or S-palmitoylation-deficient BACE1-4C/A mutant expressed in BACE1 Ϫ/Ϫ MEF and N2a neuroblastoma cells, and document that the BACE1-4C/A mutant is capable of processing APP FL in non-raft domains. Moreover, we show that APP CTFs generated by BACE1-4C/A cleavage in non-raft domains subsequently become associated with lipid rafts and are processed to A␤. Together these results suggest that BACE1 is targeted to lipid rafts following S-palmitoylation. 
However, S-palmitoylation-dependent raft localization of BACE1 is not prerequisite for amyloidogenic processing of APP. Cholesterol is present in both leaflets of cellular membranes and plays an important role in stabilizing liquid-ordered lipid raft microdomains enriched in sphingolipids and cholesterol. Consequently, depletion of cholesterol disrupts lipid raft integrity (65). Previous efforts in characterizing BACE1 cleavage of APP in lipid rafts primarily relied on cholesterol depletion as the strategy to perturb raft association of APP and BACE1. The sensitivity of ␤-CTF and A␤ production to cellular cholesterol depletion (with a combination of the lipophilic statin, lovastatin, and the cholesterol extracting agent methyl ␤-cyclodextrin), led to the conclusion that BACE1 processing of APP occurs within lipid rafts in cultured cells and neurons (32,33). However, a recent report has challenged this view and suggested that membrane cholesterol levels can have a positive or negative effect on BACE1 processing of APP, depending on the extent of cholesterol depletion (34). Moderate loss of cholesterol in hippocampal neurons (Ͻ25% loss) facilitated colocalization of APP and BACE1 and increased A␤ production by promoting BACE1 cleavage of APP. In addition to these apparent discrepancies, proper interpretation of cholesterol depletion studies are confounded by the pleiotropic effects of cholesterol depletion on endocytosis, Golgi morphology, and vesicular trafficking, as well as perturbation in lateral diffusion of raft and non-raft proteins due to alterations in membrane fluidity and curvature (42,43,(45)(46)(47). For example, as cholesterol-rich lipid microdomains are required for the biogenesis of secretory vesicles from the TGN (44), it is highly likely that secretory trafficking of APP, BACE1, and ␥-secretase are all compromised in cells depleted of cholesterol. Moreover, the commonly used cholesterol-lowering agent lovastatin is known to have cholesterol-independent effects on APP trafficking and processing (66,67). Thus it is difficult to draw unambiguous conclusions regarding BACE1 processing of APP in lipid raft domains solely based on cholesterol depletion studies. In this study, we investigated the significance of BACE1 processing of APP in lipid rafts without altering cellular cholesterol levels. Protein acylation such as palmitoylation and myristoylation targets a variety of cytosolic and membrane proteins to lipid rafts due to the high affinity of acyl chains for the ordered lipid environment within raft domains (68). Because BACE1 was reported to be S-palmitoylated at three Cys residues within the cytoplasmic tail, we took advantage of this rafttargeting signal to study BACE1 processing of APP. In addition to the previously reported sites (Cys 478 /Cys 482 /Cys 485 ), we identified a fourth residue (Cys 474 ) within the transmembrane domain of BACE1 as a site that undergoes S-palmitoylation. Combined substitutions of the four Cys residues results in complete displacement of BACE1 from raft domains. It is interesting to note that the BACE1-3C/A mutant with only the single transmembrane S-palmitoylation site still could be labeled with [ 3 H]palmitic acid and continued to associate with lipid rafts (Figs. 1 and 2). Thus, the importance of tandem S-palmitoylation in BACE1 is currently unclear. In this regard, S-palmitoylation is a reversible modification mediated by a family of DHHC PATs and acylprotein thioesterases (reviewed in Ref. 54). 
In transfected COS cells at least five PATs (DHHC 3, 4, 7, 15, and 20) enhanced BACE1 S-palmitoylation (Fig. 1). Although our studies suggest that lack of S-palmitoylation did not affect subcellular localization of BACE1 in N2a neuroblastoma, COS, or MEF, the functional significance of this lipid modification in the context of neuronal trafficking is as yet unclear. It is important to note that the presence of S-palmitate on proteins such as ␣-amino-3-hydroxyl-5-methyl-4-isoxazole propionic acid receptor subunits, PSD-95, GAP-43, Huntingtin, SNAP-25, and synaptotagmin regulates their sorting and function in neuronal presynaptic and post-synaptic compartments (48,54,69,70). In future studies we plan to investigate whether reversible S-palmitoylation, in combination with the unique subcellular distribution of substrate-specific PATs serves as a potential mechanism to fine tune BACE1 trafficking in neurons. Flotation gradient analysis of DRM has provided important information over the years about the functional role of lipid rafts in numerous biological processes, and continues to be the most commonly used method to assess lipid raft association of proteins (reviewed in Ref. 36). However, there has been growing concern in the use of detergent resistance criteria to draw conclusions about association of proteins with nanoscale sized, short-lived, cholesterol-enriched membrane domains in intact cells (71). Whereas it is clear that raft marker proteins such as flotillins and caveolins readily partition into low buoyant density fractions, the main criticism of this biochemical method is the potential for detergents to create or cause mixing of membrane domains, thereby inducing the merger of proteins/lipids localized in spatially distinct membrane domains and organelles of intact cells. This appears not to be a problem when we use Lubrol WX for raft isolation. In earlier studies, we found no evidence of mixed raft patches containing both syntaxin 6 (raft-associated t-SNARE localized in TGN/TGN vesicles) and SNAP-23 (raft-associated t-SNARE localized at the plasma membrane) by magnetic immunoisolation, essentially ruling out a possible merger of detergent-resistant domains of different subcellular organelles during cell lysis or subsequent raft isolation procedure (40). Moreover, the marked difference in the phase separation of BACE1-3C/A and -4C/A mutants, which differ in just one palmitic acid modification, further supports the usefulness of the biochemical method to study raft association of BACE1 (Fig. 3). We also observed clear separation of wtBACE1 (ϳ25% raft association) and BACE1-GPI (ϳ90% raft association) using the DRM fractionation method (not shown). Still, unequivocal demonstration of clustered localization of BACE1 in cholesterolrich microdomains will necessitate the direct imaging of intact cells at nanoscale resolution. Antibody-induced copatching of proteins has been previously used to define raft association of APP and BACE1 at the plasma membrane (33,39). When we used this method to assess copatching of PLAP and BACE-GPI, we made two unexpected observations. First, although the BACE1 and PLAP antibodies could successfully label the respective protein at the cell surface, we did not observe complete overlap between the patches formed by BACE1 and PLAP antibodies. 
Second, when analyzing % colocalization in randomly chosen cells expressing both proteins using Metamorph software, we noted that the extent of copatching observed between BACE1-GPI and PLAP strongly correlated with the average fluorescence intensity of each protein (i.e. their level of expression), indicating that the antibody co-patching approach might not be sensitive enough to detect subtle differences in lipid raft association of proteins when they are overexpressed (supplementary Fig. S1). Nevertheless, using stably transfected N2a cells with moderate overexpression of BACE1 we were able to visualize copatching of wtBACE1 with the lipid raft marker PLAP, and a small decrease in the extent of copatching between BACE1-4C/A and PLAP (supplementary Fig. S2). The major weakness in applying this Raft Targeting Is Dispensable for BACE1 Processing of APP FEBRUARY 6, 2009 • VOLUME 284 • NUMBER 6 approach to study raft association of BACE1 is the paucity of BACE1 at the cell surface. We and others have estimated using cultured cells that only 5% of endogenous or overexpressed BACE1 is resident at the plasma membrane at steady-state (Fig. 2). Considering the complex mechanisms that regulate sorting of proteins and lipids at various stages of secretory and endocytic trafficking, sampling only a minor pool of BACE1 by imaging plasma membrane is unlikely to yield satisfactory estimation of the extent to which BACE1 associates with lipid rafts in cellular membranes. Unlike what has been observed in many S-palmitoylated proteins such as chemokine receptor CCR5, human A1 adenosine receptor and Rous sarcoma virus transmembrane protein (reviewed in Ref. 54) lack of S-palmitoylation in BACE1 neither influenced protein stability nor subcellular localization. Nonetheless, WT and palmitoylation-deficient BACE1 mutants markedly differ in the extent of DRM association, enabling us to investigate the significance of BACE1 raft localization in APP processing. We performed detailed metabolic labeling studies to analyze APP ␤-CTF levels, and quantified sAPP␣, sAPP␤, and A␤ levels by ELISA. These studies revealed that BACE1 processing of APP is indistinguishable in cells overexpressing wtBACE1 or BACE1-4C/A. To rule out potential protein overexpression artifacts, we confirmed these results using cells where BACE1 expression was only 2-fold higher than the endogenous levels. A measurable increase in the steady-state levels of APP CTFs in detergent-soluble fractions of BACE1-4C/A cells as compared with that of WT cells further strengthens our conclusion that palmitoylation-deficient BACE1 mutant efficiently cleaves APP in non-raft domains (Fig. 6). Although it is clear that S-palmitoylation and lipid raft localization of BACE1 is not obligatory for APP processing, they still might be important for the processing of other raft-localized BACE1 substrates such as neuregulin-1, lipoprotein receptorrelated protein, or P-selectin-1 (72). It is also interesting to note that APP CTFs generated by BACE1-4C/A mutant are also recovered in DRM fractions. This finding indicates that APP CTFs may contain intrinsic signals that target them to cholesterol-rich membrane domains. Alternatively, ectodomain release following cleavage of APP FL by BACE1 or ␣-secretase relieves certain steric hindrance that underlies the paucity of APP FL in lipid rafts. 
Finally, it remains to be examined whether any of the multiple adaptors including Mint1, Mint2, Mint3, Dab1, and Fe65 that bind near the NPTY-motif in the cytosolic tail of APP facilitates raft association.
Epidural use among women with spontaneous onset of labour – an observational study using data from a cluster-randomised controlled trial Objective: To investigate whether the proportion of pregnant women who use epidural analgesia during birth differed between women registered at a maternity clinic randomised to Mindfetalness or to routine care. Design: An observational study including women born in Sweden with singleton pregnancies, with spontaneous onset of labour from 32 weeks’ gestation. Data used from a cluster-randomised controlled trial applying the intention-to-treat principle in 67 maternity clinics where women were randomised to Mindfetalness or to routine care. ClinicalTrials.gov (NCT02865759). Interventions: Midwives were instructed to distribute a leaflet about Mindfetalness to pregnant women at 25 weeks’ gestation. Mindfetalness is a self-assessment method for the woman to use to become familiar with the unborn baby’s fetal movement pattern. When practising the method in third trimester, the women are instructed to daily lie down on their side, when the baby is awake, and focus on the movements’ intensity, character and frequency (but not to count each movement). Findings: Of the 18 501 women with spontaneous onset of labour, 47 percent used epidural during birth. Epidu-ral was used to a lower extent among women registered at a maternity clinic randomised to Mindfetalness than women in the routine-care group (46.2% versus 47.8%, RR 0.97, CI 0.94–1.00, p = 0.04). Epidural was more common among primiparous women, women younger than 35 years, those with educational levels below university, with BMI ≥ 25 and with a history of receiving psychiatric care or psychological treatment for mental illness. Conclusions and implications for practice: Pregnant women who were informed about a self-assessment method, with the aim of becoming familiar with the unborn baby’s fetal movement pattern, used epidural to a lower extent than women who were not informed about the method. Future studies are needed to investigate and understand the association between Mindfetalness and the reduced usage of epidural during birth. Introduction The birthing process is unique to every pregnant woman, as is the experience of pain ( Whitburn et al. 
2019 ).Unlike other acute pain, which is usually associated with injury or pathology, labour pain is part of a normal physiological process ( Lowe, 2002 ;Whitburn et al., 2019 ).Epidural anaesthesia (EDA) is effective in reducing pain during labour and is used in up to 60 percent of all births in high-income countries ( Anim-Somuah et al., 2018 ;Ruppen et al., 2006 ).However, the experience of pain is complex and multifactorial ( Lowe, 2002 ).Women's experiences of pain have previously been investigated in a randomized controlled trial ( Waldenström and Nilsson, 1994 ).No differences in intensity of pain were seen in women giving birth in birth centres (a home-like environment and team midwifery with restricted use of pharmacological pain relief) compared to women in standard obstetric care, despite women in standard care using significantly more pharmacologic pain relief (epidural, pethidine, nitrous oxide, pudendal block) ( Waldenström and Nilsson, 1994 ).The use of EDA is more common among first-time mothers and among women within unfavourable social situations (low-qualified job or single) ( Le Ray et al., 2008 ).The use of EDA is also more common among women with previous use of EDA and women who have a partner who prefers EDA ( Jennifer et al., 2010 ).The need for EDA during birth is also associated with giving birth to a child with high birthweight ( Ekéus et al., 2009 ).Further, the use of EDA during birth is related to psychological factors and maternal-fetal attachment ( Smorti et al., 2020 ).Women who gave birth without EDA had lower levels of anxiety and lower levels of fear of childbirth during pregnancy than women who gave birth using EDA.Women who gave birth without using EDA had higher levels of prenatal attachment to the unborn child.Additionally, women giving birth in a midwife-led continuity care model are less likely to use EDA during labour and birth ( Sandall et al., 2016 ). The women's choice of anesthesia during birth affects their pain perception, but also the labour progress and outcome ( Anim-Somuah et al., 2018 ).In a Cochrane review assessing the effectiveness and safety of EDA, the authors concluded that EDA is associated with prolonged firstand second-stage labour ( Anim-Somuah et al., 2018 ); however, the evidence was drawn from low-to moderate-quality evidence.Further, they found that EDA is associated with higher risk of assisted births, but does not have an immediate effect on risk of low Apgar score or transfer to neonatal care ( Anim-Somuah et al., 2018 ).Most of the studies included in the review compared EDA with opioids.However, more recent studies report an association between EDA and increased risk of low Apgar score and admission to neonatal care ( Høtoft and Maimburg, 2020 ;Ravelli et al., 2020 ).Additionally, it has been reported that EDA can have an adverse effect on breastfeeding.The babies' behaviour directly after birth differs where the mother had analgesia; the hand massagelike movements and sucking at the breast are reduced among babies to mothers who have used EDA during birth ( Ransjö-Arvidson et al., 2001 ;Riordan et al., 2000 ).However, this may be a dose-related effect ( Brimdyr et al., 2015;National Library of Medicine, U.S. 2006 ).A systematic review found differing results, but an association between EDA and non-successful breastfeeding was found in the majority of the studies ( French et al., 2016 ). 
In Sweden, 38.8 percent of women giving birth use EDA (58.7 percent among nullipara and 24.4 among multipara).Large regional differences are seen in the use of EDA among first-time mothers in Sweden (38.9 percent to 71.0 percent) ( Socialstyrelsen, 2018 ). In a large cluster-randomised controlled trial, including 39 865 women, we evaluated Mindfetalness, a method for the pregnant woman to use to become familiar with the unborn baby's fetal movement pattern ( Akselsson et al., 2020 ).In the last trimester, the women in the study group were instructed to lie down on their side and focus on the unborn baby's fetal movements, noting their intensity, character and frequency (but without counting each movement) ( Radestad 2012 ).This observation was to be made daily, for 15 minutes, when the baby was awake.The women in the control group received routine antenatal care.We found that women registered at a maternity clinic randomised to Mindfetalness started their labour spontaneously to a higher extent than the routine care group.Additionally, the number of cesarean sections and labour inductions were lower in the Mindfetalness group.Mindfetalness can be defined as a form of Mindfulness, in which the unborn baby is included in the process.The theory behind the Mindfetalnesseffect on spontaneous start of delivery is that the method reduces the level of stress among the women, which is advantageous for the hormones in the birth process ( Uvnäs-Moberg et al., 2005 ).Thus, during periods of stress, a woman's levels of catecholamines increase, which activates the sympathetic system and the body prepares for fight or flight ( Kozlowska et al., 2015 ;Uvnäs-Moberg et al., 2005 ).This inhibits the birth hormone oxytocin, which is important for uterine contractility ( Lederman et al., 1978 ;Sato et al., 1996 ).Mindfulness-based programmes are shown to reduce levels of stress, anxiety, and depression, and to increase a positive state of mind as well as childbirth self-efficacy scores ( Lönnberg et al., 2019 ;Pan et al., 2019 ).Midwives in the intervention thought the women embraced the information about Mindfetalness positively and expressed perceived reduced stress and anxiety among the women ( Rådestad et al. 2020 ).Based on the evidence that exists regarding the association between the choice of labour anaesthesia and attachment, safety, anxiety and fear, the hypothesis was posed that Mindfetalness can influence women's use of pain relief.The aim for this study was to investigate, in a sub-analysis from the cluster-randomised controlled trial, whether the proportion of pregnant women who use EDA during birth differed between women registered at a maternity clinic who were either randomized to Mindfetalness or to routine care. 
Methods The study base consists of women born in Sweden who gave birth from 32 weeks' gestation with spontaneous onset of labour, included in a cluster-randomised controlled trial to evaluate the Mindfetalness method.Of the 67 maternity clinics in Stockholm, 33 were randomized to the intervention with Mindfetalness and 34 to routine care.Before randomization, the size of the clinic and its socio-economic area were taken into account.Further information about the randomization process can be found in previous papers ( Akselsson et al., 2020 ;Rådestad et al., 2016 ).One of the maternity clinics randomized to Mindfetalness declined participation but is included in the analysis, due to the intention-to-treat design.In the maternity clinics randomized to Mindfetalness, 19 639 women were registered, of whom 13 029 were born in Sweden.The corresponding figures for the routine-care group were 20 226 women, with 13 456 born in Sweden.The number of women with spontaneous onset of labour was 9238 in the Mindfetalness group, and 9263 in the routine care group.Fig. 1 shows the flow chart, illustrating the number of women registered at the clinics, during the time of the study. The research coordinator (AA) started the intervention in August 2016 by holding a 30-minute lecture for the midwives at the maternity clinics randomised to Mindfetalness.The midwives working in these clinics were instructed to distribute a leaflet at a scheduled visit at 24 weeks' gestation.The leaflet included general information about fetal movements and instructions on how to practise the Mindfetalness method from 28 weeks' gestation (appendix).A website ( www.mindfetalness.com) with the same information was made available for anyone to access, and posters were visible in the waiting rooms.The routine care group did not receive any information about the study or the randomization.The midwives in these clinics continued with standard care according to new guidelines introduced by the Swedish National Board of Health and Welfare in October 2016 (at the time the intervention started), which state that all pregnant women should receive verbal information about fetal movements when attending a standard visit at 24 weeks' gestation ( Socialstyrelsen, 2016 ).Further, no written information was given to women included in the routine-care group. From August to October 2016, the midwives in all maternity clinics randomised to Mindfetalness started distributing leaflets during the runin period, which was considered to be complete in November.The first four weeks after the women received information about Mindfetalness was determined to be a training period.The leaflets were distributed until 31 January 2018.When analysing, we included all women registered at the maternity clinics, with spontaneous onset of labour from 32 weeks' gestation, who gave birth from 1 November 2016.All women who were registered until 31 January 2018 were followed until the birth of their baby. 
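The allocation of the 67 clinics was performed at the cluster level, taking clinic size and socio-economic area into account before randomisation; the exact procedure is not described here. The sketch below shows one plausible implementation, stratified randomisation of whole clinics within strata defined by those two variables, with all clinic data invented for illustration.

```python
import random
from collections import defaultdict

random.seed(2016)

# Hypothetical clinic records: (clinic_id, size_category, socio_economic_area)
clinics = [(f"clinic_{i:02d}",
            random.choice(["small", "large"]),
            random.choice(["low", "mid", "high"])) for i in range(67)]

# Group clinics into strata defined by size and socio-economic area
strata = defaultdict(list)
for clinic_id, size, area in clinics:
    strata[(size, area)].append(clinic_id)

# Within each stratum, shuffle and split as evenly as possible between the two arms
allocation = {}
for members in strata.values():
    random.shuffle(members)
    half = len(members) // 2
    for clinic_id in members[:half]:
        allocation[clinic_id] = "Mindfetalness"
    for clinic_id in members[half:]:
        allocation[clinic_id] = "routine care"

print(sum(arm == "Mindfetalness" for arm in allocation.values()), "clinics allocated to Mindfetalness")
```

Because randomisation is at the clinic level, every woman registered at a clinic inherits its allocation, which is why the analysis follows the intention-to-treat principle even for the one clinic that declined participation.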
The data were retrieved from The Swedish Pregnancy Register ( The Swedish Pregnancy Register 2021 ), a population-based register including information from early pregnancy to a few months after birth.We used the ICD-10 codes ( Internetmedicin, 2018 ) and combined the variable epidural-and spinal anaesthesia (usage of epidural/spinal anaesthesia during labour) into one variable when analysing data.We calculated descriptive statistics using percentages, and when comparing characteristics between groups we used Fischer's exact test.We calculated rate ratios and 95% confidence intervals and, by using logbinomial regression models, we adjusted the rate ratios for potential confounders, one single variable at a time, and, additionally, all the variables combined.To further investigate and take into account any potential confounding effects, we divided the women according to their characteristics to evaluate the effect within different strata. The cluster-randomized controlled trial which data were retrieved from were registered in ClinicalTrials.gov(NCT02865759) before start.Ethics approval was obtained from The Regional Ethics Committee in Stockholm, Sweden (Dnr 2015/2105-31/1).The women were informed by the midwives that it was voluntary for them to use the Mindfetalness method.Data were retrieved from a population-based quality register and informed consent regarding the use of data in research was obtained from the women when they were registered at the maternity clinics. Results Of the 26 485 Swedish women included in the randomized controlled trial, 18 501 (69.9%) started their labour spontaneously.Table 1 shows the characteristics for the women with spontaneous onset of labour who gave birth from 32 weeks' gestation in the Mindfetalness-group and in the Routine-care group.The two compared groups are similar in characteristics, except for the category of women older than 35 years of age, where the Mindfetalness-group had a lower proportion. Of the total 18 501 women who had a spontaneous onset of labour, 8696 (47.0%) used EDA.The usage of EDA differed between hospitals, with a range from 39.4 percent to 51.1 percent (not in table).As shown in Table 2 , it was more common to use EDA among women younger than 35 years, women with an educational level below university, primipara women, women with BMI ≥ 25 and women with a history of receiving psychiatric care or psychological treatment for mental illness. Further, it was more common to use EDA among women giving birth from 40 weeks' gestation ( n = 5293, 60.9% versus n = 5007, 51.1%, pvalue < 0.001).Women giving birth to a baby with higher weight used EDA more often, when compared to those who gave birth to a baby with a lower birthweight (mean birthweight 3575.1 grams versus 3529.9 grams, p -value < 0.001).Oxytocin infusion due to labour dystocia was used to a higher extent among women with EDA than women without ( n = 5562, 64.0% versus n = 1345, 13.7%, p -value < 0.001). Women registered at a maternity clinic randomised to Mindfetalness used EDA to a lower extent than women in the Routine-care group ( n = 4271, 46.2% versus n = 4425, 47.8%, RR 0.97, CI 0.94-1.00,p -value 0.04).When adjusting for birthweight, birth clinic and age, the point estimates almost did not change ( Table 3 ).Women in the Mindfetalness group breastfed with a correct technique two hours after birth to a Fig. 
2 shows the use of EDA among the women randomized to Mindfetalness versus those in the routine-care group in relation to the women's characteristics.In general, compared to routine care, the proportion of EDA is lower for all categories among women randomized to Mindfetalness, with two exceptions.In the educational level, "up to elementary school ", the proportion of women who used EDA was the same for both groups.Further, among women with low BMI (less than 18.5), 41.2 percent used EDA in the Mindfetalness group, and 39.8 percent used EDA in the routine care group. Discussion Pregnant women registered at a maternity clinic randomized to be informed about Mindfetalness used EDA during labour to a lower extent than pregnant women registered at a maternity clinic randomized to routine care.EDA use was more common among primiparous women and among women younger than 35 years of age, those with educational levels below university, with a body mass index of 25 or over and with psychiatric history or treatment for mental illness. Women's self-efficacy expectancy to cope with labour pain and a low level of anxiety is associated with reduced perception of pain and a decreased need of anaesthesia during labour ( Lang et al., 2006 ;Manning and Wright, 1983 ;Reading and Cox, 1985 ). Mindfulness-based interventions reduce anxiety, depression and stress in the perinatal period ( Lavender et al., 2016 ;Lever Taylor et al., 2016 ;Matvienko-Sikar et al., 2016 ).Mindfetalness can be perceived as a type of mindfulness method, which includes the unborn baby in the process.Both midwives and women describe the method as a tool for pregnant women to wind down, stay in the present and form an attachment with the unborn child ( Akselsson et al., 2017 ;Rådestad et al., 2020 ).The pregnant women in our study were instructed to practise Mindfetalness for 15 minutes daily until birth from 28 weeks' gestation.For a pregnancy that lasts until term, this means about 1260 minutes of practice (21 hours) if the woman follows the Mindfetalness method instructions.In a previous study, pregnant women were randomized to either an online mindfulness intervention, practising four times a week for three weeks, or to routine care.The women who practised mindfulness had significantly lower levels of prenatal stress and a reduction of the hormone cortisol on awakening and at evening time, compared to the women in the routine care group ( Matvienko-Sikar and Dockray, 2017 ).Thus, this intervention included a significantly shorter duration of engagement by the pregnant women than the Mindfetalness intervention applied here, but the method still provided clear effects in stress reduction. The fact that women with psychiatric history or treatment for mental illness use EDA to a higher extent might be linked to their having higher levels of general anxiety and fear.Anxiety, depression and fear of birth reduce a woman's ability to cope with pain and can affect the intensity of pain ( Sitras et al., 2017 ).The results of our study show that the largest reduction in use of EDA during labour, when comparing the Mindfetalness group with routine care group, occurred among women with psychiatric history, which indicates positive psychological effects in practising Mindfetalness. In a study by Smorti et al. 
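The unadjusted comparison reported above (46.2% versus 47.8% EDA use) can be reproduced directly from the group counts with a rate ratio and a Wald confidence interval on the log scale. The short sketch below does exactly that; it ignores the cluster design and covariate adjustment, which the paper handled with log-binomial regression.

```python
import math

def rate_ratio_ci(events_1, n_1, events_0, n_0, z=1.96):
    """Crude rate ratio (group 1 vs. group 0) with a Wald 95% CI on the log scale."""
    rr = (events_1 / n_1) / (events_0 / n_0)
    se_log_rr = math.sqrt(1 / events_1 - 1 / n_1 + 1 / events_0 - 1 / n_0)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# EDA use: Mindfetalness 4271/9238 versus routine care 4425/9263 (counts from the text)
rr, lower, upper = rate_ratio_ci(4271, 9238, 4425, 9263)
print(f"RR {rr:.2f}, 95% CI {lower:.2f}-{upper:.2f}")   # RR 0.97, 95% CI 0.94-1.00
```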
(2020) , women giving birth without EDA rated their fear of birth lower when compared to women who gave birth with EDA.Further, they had higher scores on the CES-scale (Centrality of events), i.e., to what extent the pregnancy is a central event in life, than women who used EDA ( Smorti et al., 2020 ).Mindfetalness could have had positive effects in women with anxiety, high stress levels and fear of birth, in lowering the need for EDA, as shown in Fig. 3 . The study also showed that women who did not use EDA during birth had higher scores in maternal-fetal attachment than women who gave birth with EDA ( Smorti et al., 2020 ).The Prenatal Attachment Inventory scale (PAI), which is used to evaluate maternal-fetal attachment, includes, on several levels, the mother's interactions with the unborn baby and knowledge about fetal movements.An association has been found between a high awareness of fetal movements and attachment ( Malm et al., 2016 ).A possible association between high maternalfetal attachment and gaining increased self-efficacy is also discussed by Smorti et al. (2020) .The pregnant woman becomes more prone to per- ceive the birth as a condition in which her body is working to birth her baby and thus has less fear of birth, which leads to not wanting EDA during birth.Additionally, an association between sense of coherence (SOC) and the preference for using EDA during birth has been found ( Jeschke et al., 2012 ).Women with high SOC more often preferred to give birth without EDA.Additionally, a woman's degree of SOC is a strong predictor for well-being ( Helga et al., 2004 ).Pregnant women with higher levels of SOC in life had better results relating to their well -being, anxiety and predisposition to depression ( Helga et al., 2004 ). Women in the Mindfetalness group breastfed, with a correct technique two hours after birth, to a higher extent.This may be an indirect effect due to the lower rate of EDA use during labour, as associations have been found between EDA and a negative effect on breastfeeding ( French et al., 2016 ).However, it is also possible that increased maternal-fetal attachment through Mindfetalness affects breastfeeding.When investigating pregnant women's intentions for infant feeding method in the third trimester, high maternal-fetal attachment was associated with intention to breastfeed ( Huang et al., 2004 ).Additionally, in a systematic review, an association was found between higher levels of attachment and initiation of breastfeeding as well as preference for breastfeeding over bottle-feeding ( Linde et al., 2020 ). If some women choose to give birth with EDA due to fear and anxiety, the midwife needs to be aware of possible ways to support them to make an informed choice.By reducing stress, facilitating a positive state of mind and creating possibilities for the pregnant women to attach to their unborn baby through Mindfetalness, more women may feel confident to give birth and cope with pain during labour. 
Methodological considerations There are several strengths in the study design. The data were retrieved from a high-quality population-based register, and the randomization process minimizes the risk of confounding factors. By only including women with singleton pregnancies with spontaneous onset of labour, the compared groups are similar. Additionally, by only including women born in Sweden, any dilution effects are reduced, as the leaflets were distributed in nine languages, i.e., many women did not receive the information in their own language. However, a dilution effect is probably inevitable anyway, as we know from the original study that only 79 percent of the leaflets were distributed. Contamination between the two groups is also possible, as the website was open for anyone to use, and women and midwives talk to each other. Taking all of these issues into consideration, the effect we can see is probably stronger in reality. However, when conducting a sub-group analysis it is important to consider that there is a higher risk of false-positive findings (Wang et al., 2007). The use of EDA during labour is affected by many factors, physical as well as psychological. Other possible confounding factors that were not included in this analysis may have affected the results. Additionally, the factors that have been taken into consideration could have been associated with each other; for example, body mass index may be associated with parity, age and birth weight, and parity with educational level. Determining the women's preference for EDA and level of anxiety before they started to practise Mindfetalness would have been valuable measurements when comparing that group with the routine-care group. Additionally, the professional support provided during birth may have affected the women's choice of EDA. It also would have been valuable to have measured the level of fear of birth in the two compared groups. Additionally, it is possible that the instruction on how to practise Mindfetalness is in fact the mechanism behind the lower rate of epidural use, i.e. lying down on the side for 15 minutes a day contributed to reduced stress and anxiety. Conclusion In this observational study it has been shown that the Mindfetalness method, which includes lying down for 15 minutes a day in the third trimester and focusing on the unborn baby, decreases the use of EDA during birth, especially among women with a psychiatric history. It is possible that practising Mindfetalness in the third trimester can be advantageous for women's self-efficacy in coping with labour pain, but future studies are needed to further investigate and understand the association between Mindfetalness and the reduced usage of EDA during birth.
Fig. 1. Flow chart: randomization of maternity clinics showing the number of women registered, number born in Sweden, and number with spontaneous onset of labour who gave birth from 32 weeks' gestation for each study arm.
Fig. 2. The use of EDA among women born in Sweden with spontaneous onset of labour who gave birth from 32 weeks' gestation, in the Mindfetalness group and the Routine-care group, respectively.
Fig. 3. Proposed theory of the effect of Mindfetalness on a woman's need for EDA.
Table 1 Characteristics of 18 501 women born in Sweden with spontaneous onset of labour, 9238 registered at a maternity clinic randomised to Mindfetalness, and 9263 registered at a maternity clinic randomised to routine care.
Table 3 EDA among women with spontaneous onset of labour in Mindfetalness-group compared to Routine-care group, adjusted for potential confounders.
SARS‐CoV‐2 booster effect and waning immunity in hemodialysis patients: A cohort study Patients with end‐stage kidney disease on dialysis suffer high morbidity and mortality from severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2). Despite successful vaccination campaigns by dialysis providers, the standard two‐dose vaccination series with Pfizer BioNTech (BNT162b2) messenger RNA (mRNA) SARS‐ CoV‐2 is insufficient to protect patients from infection due to Omicron variants. Current guidelines recommend boosters of SARS‐ CoV‐2 mRNA‐based vaccines. However, data regarding humoral response post‐booster is limited in dialysis patients. Additionally, few studies directly compare the long‐term response after two doses of a coronavirus disease 2019 (COVID‐19) vaccine to the response after three doses in the same cohort of patients. Studies suggest that the third dose of BNT162b2 increases antibody levels in dialysis patients. However, antibody response and booster effectiveness are diminished in dialysis patients compared with the general healthy population. We previously reported long‐term humoral responses to two doses of BNT162b2 in a cohort of hemodialysis patients. Six months after full vaccination, 40% of patients' anti‐spike protein IgG levels were either undetectable or borderline. Here, we report responses to the first booster of the BNT162b2 vaccine in these patients. | INTRODUCTION Patients with end-stage kidney disease on dialysis suffer high morbidity and mortality from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Despite successful vaccination campaigns by dialysis providers, the standard two-dose vaccination series with Pfizer BioNTech (BNT162b2) messenger RNA (mRNA) SARS-CoV-2 is insufficient to protect patients from infection due to Omicron variants. 1 Current guidelines recommend boosters of SARS-CoV-2 mRNA-based vaccines. 2 However, data regarding humoral response post-booster is limited in dialysis patients. Additionally, few studies directly compare the long-term response after two doses of a coronavirus disease 2019 (COVID-19) vaccine to the response after three doses in the same cohort of patients. Studies suggest that the third dose of BNT162b2 increases antibody levels in dialysis patients. 3 However, antibody response and booster effectiveness are diminished in dialysis patients compared with the general healthy population. 4 We previously reported long-term humoral responses to two doses of BNT162b2 in a cohort of hemodialysis patients. 5 Six months after full vaccination, 40% of patients' anti-spike protein IgG levels were either undetectable or borderline. Here, we report responses to the first booster of the BNT162b2 vaccine in these patients. | METHODS We performed a prospective cohort study measuring serial semiquantitative IgG antibodies to the SARS-CoV-2 spike protein S1 receptor binding domain. We evaluated the response at a mean of 2, 6, and 11 weeks post-booster. The Anti-SARS-CoV-2 QuantiVac ELISA (IgG) from Euroimmun (EUROIMMUN US, Inc.) was used in all assessments. Final results were reported in WHO-recommended binding antibody units (BAU/ml) per the manufacturer's instructions. 6 Final results were considered negative for <25.6 BAU/ml, borderline for 25.6 to <35.2 BAU/ml, and positive for ≥35.2 BAU/ml. Clinical data were obtained as previously described. 
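Final results in the assay above are categorised with fixed cut-offs in WHO binding antibody units. A minimal helper implementing those cut-offs, useful for tabulating the serial measurements at each time point, might look as follows (the example values are arbitrary):

```python
def classify_bau(bau_per_ml: float) -> str:
    """Classify anti-S1 RBD IgG results using the cut-offs given in the text (BAU/ml)."""
    if bau_per_ml < 25.6:
        return "negative"
    if bau_per_ml < 35.2:
        return "borderline"
    return "positive"

for value in (10.0, 30.0, 154.0):
    print(value, classify_bau(value))
```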
5 Of 35 hemodialysis patients in the original cohort, 27 (77.1%) received a third dose of BNT162b2, and 20/27 (74%) had complete data (4-time point measurements): pre-booster (mean of 6 weeks pre-booster) and 2, 6, and 11 weeks post-booster. Two weeks was used to attain peak/initial antibody levels. 7 Encouragingly, a third dose appears to restore antibodies to high levels, though these waned quickly in the ensuing weeks. Similar trends of antibody decline are seen in healthy individuals, although dialysis patients may differ from the general population with reduced peak levels and lower seroconversion rates. 8,9 Long-term durability remains unclear and protective levels against infection are unknown. Goldblatt et al. 10 reported that the mean protective threshold against wild-type (WT) SARS-CoV-2 virus was 154 BAU/ml (95% CI 42-559) but higher levels are presumed to be required against current variants. Interestingly, previously infected patients saw a blunted rise in antibody level after an initial booster shot, though these patients started from a higher baseline. Thus, overall, they attained similar peak levels. While our sample size precludes further analysis of this finding, the interaction of natural immunity with booster vaccination response in dialysis patients requires further study. During the recent Omicron wave, boosters were found to be protective from hospitalization and severe illness in the general population, however, this effect was time-dependent and declined significantly at 4 months post-booster. 11 A similar pattern is likely in patients on dialysis, but few studies have been conducted in this population. In one study, 93% of dialysis patients who received the third dose of BNT162b2 vaccine achieved antibody levels associated with protection, compared with only 35% pre-booster. 12 The Centers for Disease Control recommends a fourth dose of mRNA vaccines for select populations. 2 The utility of such a strategy in dialysis patients remains unclear but the humoral antibody waning seen in dialysis populations may support additional boosters. Our study has limitations: small sample size, brief follow-up time and focus on humoral immunity. In conclusion, our data illustrate that, although humoral immunity wanes, patients on hemodialysis demonstrate strong antibody responses to a third dose of the BNT162b2 vaccine. CONFLICT OF INTEREST The authors declare no conflict of interest. DATA AVAILABILITY STATEMENT The data set used for this analysis is not publicly available. The data utilized was obtained from the Electronic Health Record and from the dialysis-specific electronic medical record system, which is restricted to use by only authorized employees.
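The assay cutoffs quoted in the Methods above (negative <25.6 BAU/ml, borderline 25.6 to <35.2 BAU/ml, positive ≥35.2 BAU/ml) translate directly into a small classification routine. The sketch below is illustrative only: the function name and the serial measurements are hypothetical and are not taken from the study data.

```python
# Minimal sketch of the semiquantitative cutoffs described in the Methods:
# anti-spike IgG results in BAU/ml are negative below 25.6, borderline from
# 25.6 to just under 35.2, and positive at 35.2 or above. The sample values
# below are hypothetical and only illustrate how serial draws (pre-booster,
# then 2, 6 and 11 weeks post-booster) could be categorised.

def classify_bau(value_bau_per_ml: float) -> str:
    """Map an anti-spike IgG level (BAU/ml) to the assay's qualitative category."""
    if value_bau_per_ml < 25.6:
        return "negative"
    if value_bau_per_ml < 35.2:
        return "borderline"
    return "positive"

if __name__ == "__main__":
    # Hypothetical serial measurements for one patient (not study data).
    draws = {"pre-booster": 20.0, "week 2": 850.0, "week 6": 310.0, "week 11": 120.0}
    for timepoint, level in draws.items():
        print(f"{timepoint}: {level:.1f} BAU/ml -> {classify_bau(level)}")
```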
2023-01-12T16:16:50.870Z
2023-01-01T00:00:00.000
{ "year": 2023, "sha1": "b8f756dcedd75602e0cd95c9d02ca50d50867679", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1002/hsr2.1040", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f8f9e73438436ad143f835d113880159d5941636", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
232155863
pes2o/s2orc
v3-fos-license
Application of the bacterial strains Ruminobacter amylophilus, Fibrobacter succinogenes and Enterococcus faecium for growth promotion in maize and soybean plants Ruminobacter amylophilus, Fibrobacter succinogenes and Enterococcus faecium have characteristics that are similar to those of plant growth-promoting bacteria and can be used to promote plant development and reduce production costs. These bacteria were isolated from fistulated ruminants and are gram-negative, anaerobic or facultative anaerobic. These bacteria are frequently used to increase animal productivity through the production of many enzymes responsible for the carbon cycle and the release of other nutrients by organic matter decomposition. The bacteria R. amylophilus, F. succinogenes and E. faecium have growth promotion abilities, such as phosphorus solubilization, nitrogen promotion, and indole acetic acid and siderophore production. Tests were performed under greenhouse conditions with soybean and maize crops with five treatments and six replications. The first treatment was the control (without inoculant); the other treatments included each species of bacteria, and there was a treatment with a mixture (mix) of the three bacteria. F. succinogenes increased the root dry mass of maize by 21.4%, as well as the nitrogen and phosphorus contents, compared to the control. R. amylophilus and E. faecium decreased the phosphorus concentration in shoots of maize, and R. amylophilus increased the soil biomass carbon by 76.39% compared to the mix under maize cultivation, while E. faecium decreased the soil biomass carbon by 56.78% compared to the mix under soybean cultivation. The present study verified that Ruminobacter amylophilus, Fibrobacter succinogenes and Enterococcus faecium presented plant growth-related abilities and could be used to improve plant development, reducing the necessity of chemical fertilizers. Introduction The introduction of microorganisms as biological inoculants has proven to be one of the most efficient technologies for complementing and reducing the use of chemical fertilizers (Ramakrishna et al., 2019). Plant growth-promoting bacteria (PGPB) directly affect plant metabolism, providing nutrients that are generally scarce. These bacteria can fix nitrogen, solubilize phosphorus and produce plant hormones. In addition, they improve plant tolerance to stresses, such as drought, high salinity, and metal toxicity as well as phytotoxicity caused by insecticide application (Matthews et al., 2019). There are many mechanisms by which bacteria can promote plant growth, such as indole acetic acid (IAA) production, whose main effect is the promotion of root and shoot development (Ye et al., 2019). As a consequence of IAA production, the plant becomes more efficient in taking up nutrients and water, promoting its development. Some bacteria can fix nitrogen from the atmosphere and make it available to plants. This bacterial ability is important due to the amount of nitrogen required by plants to grow and increase their yield (Ke et al., 2019). Another important characteristic of some bacteria in promoting plant growth is their ability to solubilize phosphorus. Although soil has phosphorus reserves, most of it is insoluble and not assimilable by plants, which limits plant growth; some bacteria can make phosphorus available to plants (Schmidt and Gaudin, 2018). Iron is an important metal required by many bacteria as a cofactor for their growth and establishment in soil. 
Bacterial establishment is required for plant-bacteria interactions. However, iron is scarce in soils. Some bacteria can synthesize compounds with low molecular weights called siderophores. Siderophores allow these bacteria to take up iron from soil more efficiently, ensuring their establishment in soil and eliminating some phytopathogens through competition for iron (Melo et al., 2016). Some bacteria, such as Ruminobacter amylophilus, Fibrobacter succinogenes and Enterococcus faecium, are probiotic bacteria of the ruminal tract. Ruminants depend on the symbiosis between the host and the rumen microbiota for the uptake of nutrients from feed (Anderson, 1995;Pinloche et al., 2013). R. amylophilus is a gram-negative, anaerobic bacterium that was isolated and described by Hamlin and Hungate (1956). R. amylophilus ferments nonstructural carbohydrates (starch, pectin, sugars), uses ammonia as well as peptides and amino acids as a source of N, and can produce ammonia (Russell et al., 1992). The rumen bacterium F. succinogenes was also first isolated by Hungate in 1950 (Hungate, 1950). It is also gram-negative and anaerobic and plays a key role in the rumen (Bera-Maillet et al., 2004). It ferments structural carbohydrates such as cellulose (Oliveira et al., 2007). The bacterium E. faecium is a gram-positive bacterium that makes up part of the commensal microbiota of the intestines of humans and animals. It is facultatively anaerobic and ferments glucose and other carbohydrates, with lactic acid as the final product (Carvalho, 2010). E. faecium in particular shows pronounced tolerance to nutritional stress, pointing to species-specific adaptations that help explain its environmental persistence (Gao et al., 2018). Although these bacteria have been classified as ruminal probiotics, they also show important abilities related to plant growth promotion, such as IAA production, nitrogen fixation, phosphorus solubilization and siderophore production. The hypothesis of this study is that these bacteria could have a plant growth-promoting effect due to these abilities. Therefore, these bacteria were inoculated into maize and soybean crops. Bacterial isolates and the ability to promote plant growth The three isolates tested could produce IAA, solubilize phosphorus and fix nitrogen (Table 1), with the contents quantified and values calculated according to the standard curve in a previously described methodology. Bacteria can solubilize phosphorus under in vitro conditions. F. succinogenes solubilized the highest amount of phosphorus, followed by R. amylophilus and E. faecium. The bacterial isolates were also able to fix nitrogen. The isolate that fixed the most nitrogen under in vitro conditions was R. amylophilus, followed by F. succinogenes and E. faecium. All three isolates showed the presence of siderophores; the coloration of the culture medium turned from blue to red/orange through the CAS assay (Fig. 1). Growth promotion in maize under greenhouse conditions The height of maize plants varied from 55 to 65 cm, and the treatments presented similar heights compared to that of the control, with no significant differences according to the 5% Duncan test. Additionally, the treatments presented similar dry shoot mass values among plants, showing no significant differences (Fig. 2A). Treatments containing the F. succinogenes isolate showed a higher root dry matter (RDM) value in maize plants, significantly differing from that in the control (Fig. 2B).
Regarding the nitrogen concentration in shoots, R. amylophilus and E. faecium bacteria decreased the concentration compared to that in the control (Fig. 3A), and the F. succinogenes treatments showed higher nitrogen concentration in the roots compared with that in the control (Fig. 3B). For the phosphorus concentration, there was no difference for shoots, and for the roots, F. succinogenes showed a high nitrogen content compared with that in the control ( Fig. 3C and D). R. amylophilus produced the highest microbial biomass carbon; however, there was no significant difference compared to that in the control treatment ( Fig. 4A). Additionally, there was no significant difference in the total soil bacterial count among the treatments (Fig. 4B). Growth promotion in soybean under greenhouse conditions condition R. amylophilus bacteria produced a significant increase in the height of soybean plants compared to that of the control treatment (Fig. 5), and there were no significant differences in average shoot dry matter (SDM) (Fig. 6A). However, the treatment that received the mix showed the highest root dry matter (RDM) compared with that of the other treatments in soybean plants (Fig. 6B). Nitrogen concentrations in the shoots and roots of soybean plants that received bacterial inoculations did not differ significantly from those of the control treatment ( Fig. 7A and B). E. faecium and the bacterial mix significantly decreased the phosphorus concentration in shoots compared to that in the control treatment (Fig. 7C). The bacterial mix promoted a significant increase in soil microbial biomass carbon compared to that in the control treatments under soybean cultivation (Fig. 8A). However, there was no significant difference in the number of total soil bacteria among treatments that received bacterial inoculations compared to that in the control treatment under soybean cultivation (Fig. 8B). Discussion As seen in plant growth-promoting rhizobacteria (PGPR), the presence of certain functions, such as indole acetic acid production, biological nitrogen fixation, phosphorus solubilization and siderophore production, are important and can promote plant development (Melo et al., 2016). The bacterial isolates used in the present study are ruminal probiotic bacteria that, when present in adequate amounts, promote these benefits in the host. They play an important role in providing energy and nutrients to ruminants through the breakdown of macromolecules in animal feed (Lerner et al., 2019). Ruminobacter amylophilus, F. succinogenes and E. faecium bacteria demonstrated important abilities related to plant growth promotion, such as the synthesis of phytohormone indole acetic acid (IAA). R. amylophilus produced the most IAA of the studied isolates, 9.69 μg IAA mL -1 , followed by F. succinogenes, 9.20 μg IAA mL -1 , and E. faecium, 7.60 μg IAA mL -1 (Table 1). Phytohormones are organic substances that can promote, inhibit or modify the development of plants at low concentrations (Damam et al., 2016). Phytohormones promote the proliferation of root cells by overproduction of lateral cells and root hairs together with increased absorption of nutrients and water (Sureshbabu et al., 2016). Fibrobacter succinogenes promoted an increase in root dry matter in maize plants (Fig. 1B); however, there was no difference in dry matter from that of the control treatment in soybean plants (Fig. 5B). There was no difference in IAA production between F. succinogenes and R. 
amylophilus. Figure captions: Fig 4. Analysis of (A) biomass carbon and (B) number of colony-forming units (data transformed into log 10) compared among treatments for the maize crop. Fig 5. Plant height compared among treatments in the soybean crop. Fig 6. Average shoot dry mass (SDM) (A) and root dry mass (RDM) (B) compared among treatments in soybean crops. Fig 7. Average nitrogen concentrations in shoots (A) and roots (B) and average phosphorus concentrations in shoots (C) in the soybean crop. Fig 8. Analyses of (A) biomass carbon and (B) number of colony-forming units (data transformed into log 10) compared among treatments in the soybean crop. In these figures, means followed by equal letters do not differ by Duncan's test at 5% probability; when there was no significant difference between the treatments, no letter was added. The increase in RDM caused by F. succinogenes in maize plants but not in soybean plants may be due to the greater interaction of the bacteria with the plant species. The first step for the colonization of bacteria in the rhizosphere and the subsequent interaction between bacteria and plants is the attraction of the bacteria by plant exudates. It appears likely that attraction to the root constitutes the first step toward the attraction of various plant growth-promoting bacteria toward whole-root exudates (Tan et al., 2013). Root exudates are a complex blend of high and low molecular weight compounds, many of which can induce chemotactic responses in PGPR (Bais et al., 2008). Molecules, such as small sugars, amino acids, aromatic compounds and small organic acids, have been suggested to be important drivers of bacterial attraction in the rhizosphere (Oku et al., 2012). The exact composition of exudates in the rhizosphere varies significantly among plants, which allows for the specific recruitment of a cognate PGPR and their subsequent colonization of the root (Oku et al., 2012). This fact may be why F. succinogenes promoted an increase in root dry matter in maize plants but not in soybean plants. Only F. succinogenes could increase the phosphorus concentration in plants. Talboys et al. (2014) reported a decrease in phosphorus concentration when wheat plants were inoculated with B. amyloliquefaciens. This negative effect may be due to the production of auxins by bacteria under certain soil fertility conditions, especially in soil with low phosphorus availability. Santos et al. (2018) also observed a reduction in soil P levels in sugarcane when inoculated with Bacillus subtilis and B. pumilus and attributed this effect to the high IAA concentrations synthesized by Bacillus.
Interestingly, in these studies, phosphorus reduction occurred in roots, but in the present study, it occurred in shoots, and E. faecium did not produce the largest amount of IAA of the three bacteria in the study. Given these results, further studies are needed to verify the effect of E. faecium inoculation on soybean plants and to elucidate its modes of action. The three bacterial isolates were also able to fix nitrogen, and the isolate that presented the highest nitrogen fixation value in vitro conditions was R. amylophilus, 12.34 mg N mL -1 , followed by F. succinogenes, 11.23 mg N mL -1 , and E. faecium, 10.34 mg N mL -1 . Dynarski et al. (2019) showed that PGPR can increase N concentrations by 20 to 30% in grasses. In this study, it was not possible to verify which of the inoculated isolates had the best result for N accumulation in shoots and roots. An unexpected result was the reduction in the nitrogen concentration levels in the shoots of maize plants with R. amylophilus and E. faecium bacteria when compared to that in the control treatment. The bacterial mix increased the biomass carbon in soil under soybean cultivation. This result shows that microbial establishment occurred in soil. Microbial establishment in soil is the first step toward PGPR and has a positive effect on plant development. However, the total soil bacterial count did not increase. This result suggests that a population rearrangement may have occurred, where the populations of certain microbial species may have increased and others decreased without changing the total number of microorganisms. Bacterial strains and growth conditions The isolates of R. amylophilus, F. succinogenes and E. faecium came from the collection of the laboratory of soil microbiology of the Universidade Estadual Paulista, Jaboticabal campus. All strains were isolated from cow rumen according to Hungate (1950) from the UNESP Farm (Avila et al., 1986;Rigobelo et al., 2016). The strains were identified by automatic sequencing of the 16S ribosomal gene and stored in freeze-dried brain heart infusion (BHI) medium in a freezer at -20 °C. The strains were reconstituted by adding 25 mL of BHI medium to 2 g of each strain. The strains were kept in a microbiological incubator for 24 h at 38 °C for 48 h and 150 rotations per minute before use in subsequent assays. IAA production Indole acetic acid (IAA) production was measured by the methodology of Kuss et al. (2007), and Ali and Hasnain (2007) qualitatively determined by the reddish coloration of the solution contained in vitro. For the quantitative evaluation of the hormone, the reading was performed in a spectrophotometer at a wavelength of 530 nm. Detection of Siderophores CAS medium was prepared according to Schwyn and Neilands (1987), although only as a means to reveal changes rather than the presence of nutrients. All experiments were performed at least three times with three replicates for each experiment. Chemical determination of produced siderophores Chemical assays were performed to test the results obtained with the O-CAS method as follows: for hydroxamate detection, the FeCl3 assay (Neilands, 1981) was used. To detect catechols, the Arnow assay (Arnow, 1937) was used, and to detect carboxylates, the Shenker assay (Shenker et al., 1992) was used. Quantification was performed using a Lambda 35 spectrometer (Perkin-Elmer Instruments). All assays were performed at least three times with two replicates for each assay. 
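The quantitative IAA read-out described above (absorbance measured at 530 nm and converted through a standard curve) can be illustrated with a short calibration sketch. The calibration points, sample absorbances and per-isolate values below are hypothetical placeholders rather than the study's measurements; numpy.polyfit is used only as a generic linear least-squares fit.

```python
import numpy as np

# A minimal sketch of the 'standard curve' step for the IAA assay: absorbance at
# 530 nm is calibrated against known IAA standards by a linear least-squares fit,
# and sample readings are then converted to ug IAA mL^-1. All numbers below are
# hypothetical and serve only to illustrate the calculation.

standard_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])    # ug IAA mL^-1
standard_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # absorbance at 530 nm

slope, intercept = np.polyfit(standard_conc, standard_abs, deg=1)

def absorbance_to_iaa(a530: float) -> float:
    """Invert the linear calibration to estimate the IAA concentration (ug mL^-1)."""
    return (a530 - intercept) / slope

# Hypothetical sample absorbances, one per isolate.
for name, a530 in {"R. amylophilus": 0.39, "F. succinogenes": 0.37, "E. faecium": 0.31}.items():
    print(f"{name}: ~{absorbance_to_iaa(a530):.2f} ug IAA mL^-1")
```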
Phosphorus solubilization in vitro The in vitro activity of fluorapatite solubilization was determined by transferring 0.2 mL of the 1 x 10 7 CFU mL -1 suspension to an Erlenmeyer flask containing the medium described by Nahas et al. (1994) supplemented with 5 g L -1 fluorapatite (Araxá apatite). After inoculation, the bacterial solution was incubated with stirring at 28 °C for two days at 20 rpm on an orbital shaker. The control was an Erlenmeyer flask containing the same medium without bacterial inoculum. After the incubation period, the bacterial solution was removed and centrifuged at 9,000 rpm for 15 minutes, the supernatant was collected, and the amount of phosphate was determined according to Ames (1966). Nitrogen fixation in vitro Total nitrogen was determined according to the method proposed by Bremner and Mulvaney (1982). For each evaluation, a blank was made. The total nitrogen content was calculated based on a standard curve determined with ammonium sulfate solution. Plant growth promotion assay The experimental design was randomized blocks, with five treatments and six replicates, totaling thirty pots per crop. The experiment was performed in a greenhouse. The treatments were as follows: T1: control (without inoculant); T2: R. amylophilus; T3: F. succinogenes; T4: E. faecium; T5: mixture of the three bacteria (mix) with six repetitions. Three seeds were planted in pots (vases) filled with five liters of soil. After one week, thinning was performed, and only one plant per pot was kept. One week after sowing, each plot received 10 ml of bacterial inoculum at a concentration of 1 x 10 8 colony-forming units through aerial parts with the aid of a pipette. The plants received three inoculations weekly throughout the experiment for 60 days, when the harvesting was performed manually. The experiments were performed with maize genotype 2B587PW and soybean genotype 95R95IPRO. Height and dry mass Plant height was measured using a measuring tape from the base of the plant to the apex on the last day of experiments. Plants were collected, and the roots were washed in running water to remove excess soil. Shoots were separated from roots, and both were oven dried under forced air circulation at 65 °C for approximately 72 hours until reaching constant weight to determine the dry matter content using an analytical scale. Phosphorus and nitrogen contents After weighing, the plant material was ground in a Wiley mill (mesh 20) and submitted to analysis of phosphorus and total nitrogen in leaves, according to the methodology described by Sarruge and Haag (1974). The nitrogen content was determined by the Kjeldahl method. Sulfuric digestion was used to obtain the extract, and nitrogen determination was performed by distillation in a Kjeldahl semimicro apparatus and titration with 0.02 mol L -1 sulfuric acid. Soluble phosphorus levels were measured as previously described for quantification of solubilization in vitro. Counting of soil microorganisms The methodology of serial dilution from 10 -1 to 10 -3 according to Wollum (1982) was used. Then, 0.1 mL of each dilution was transferred to a Petri dish containing tryptic soy agar (TSA) culture medium and incubated in BOD at 25 °C to 30 °C, allowing colony growth. Counts were performed at 24, 48 and 72 h (Schortemeyer et al., 1996). Microbial biomass carbon To determine soil microbial biomass (SMB), the adapted (Islam and Weil, 1998) method was used. 
For extraction, 10 g of the soil sample was weighed into two 250 ml Erlenmeyer flasks, with one identified as irradiated and the other as not irradiated. Irradiated soils were subjected to 15 seconds of irradiation in a 900 W microwave oven for each flask. Forty milliliters of 0.5 M K2SO4 was added to the Erlenmeyer flasks (irradiated and not irradiated), and the flasks were placed on a horizontal shaker for 30 minutes. The flasks were allowed to stand for another 30 minutes after stirring. Filtration was performed with a funnel and qualitative filter paper, and the filtrate (extract) was collected in 50 ml Erlenmeyer flasks. For determination, 10 mL of each filtered extract was pipetted and transferred to a 125 mL Erlenmeyer flask containing 2 mL of 0.066 M K2Cr2O7 and 10 mL of concentrated sulfuric acid. After lowering the temperature, 50 mL of distilled water and 4 drops of the ferroin indicator were added. Titration was performed with 0.03 M ammoniacal ferrous sulfate. Statistical analysis of data Statistical analyses were performed using AgroEstat software (Barbosa and Maldonado, 2010). Normality and homogeneity of variances were tested by the Shapiro and Wilk (1965) test (α ≤ 0.05). Differences were considered significant at α ≤ 0.05. Conclusion Ruminal probiotic bacteria present some plant growth-related abilities and can be used to improve plant development and reduce the necessity of chemical fertilizers.
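As an illustration of the serial-dilution plate-counting protocol described above (dilutions of 10^-1 to 10^-3, 0.1 mL spread on TSA, counts reported as log10 of colony-forming units), a minimal back-calculation sketch is given below. The soil-to-diluent ratio and the colony count are assumptions introduced for the example, not values reported in the study.

```python
import math

# A minimal sketch of the colony-count bookkeeping implied by the serial-dilution
# protocol: CFU per gram of soil is back-calculated from the plate count, the
# plated volume and the dilution, and then log10-transformed as in the figures.
# The 10 g soil / 90 mL diluent ratio and the colony count are assumptions.

def cfu_per_gram(colonies: int, dilution: float, plated_ml: float = 0.1,
                 soil_g: float = 10.0, diluent_ml: float = 90.0) -> float:
    """Back-calculate CFU per gram of soil from a single countable plate.

    `dilution` is the plated dilution factor (e.g. 1e-3); the initial suspension
    is assumed to be `soil_g` grams of soil in `diluent_ml` mL of diluent.
    """
    cfu_per_ml_suspension = colonies / (plated_ml * dilution)
    return cfu_per_ml_suspension * diluent_ml / soil_g

count = 142                      # hypothetical colonies on the 10^-3 plate
cfu = cfu_per_gram(count, 1e-3)
print(f"{cfu:.2e} CFU g^-1  (log10 = {math.log10(cfu):.2f})")
```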
2021-03-09T03:22:23.644Z
2020-12-10T00:00:00.000
{ "year": 2020, "sha1": "6a8c2d09d2c265dda72daf662675c0f552260529", "oa_license": null, "oa_url": "https://doi.org/10.21475/ajcs.20.14.12.2937.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6a8c2d09d2c265dda72daf662675c0f552260529", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
252531268
pes2o/s2orc
v3-fos-license
Partially dissipative systems in the critical regularity setting, and strong relaxation limit Many physical phenomena may be modelled by first order hyperbolic equations with degenerate dissipative or diffusive terms. This is the case for example in gas dynamics, where the mass is conserved during the evolution, but the momentum balance includes a diffusion (viscosity) or damping (relaxation) term, or, in numerical simulations, of conservation laws by relaxation schemes. Such so-called partially dissipative systems have been first pointed out by S.K. Godunov in a short note in Russian in 1961. Much later, in 1984, S. Kawashima highlighted in his PhD thesis a simple criterion ensuring the existence of global strong solutions in the vicinity of a linearly stable constant state. This criterion has been revisited in a number of research works. In particular, K. Beauchard and E. Zuazua proposed in 2010 an explicit method for constructing a Lyapunov functional allowing to refine Kawashima's results and to establish global existence results in some situations that were not covered before. These notes originate essentially from the PhD thesis of T. Crin-Barat that was initially motivated by an earlier observation of the author in a Chapter of the handbook coedited by Y. Giga and A. Novotn{\'y}. Our main aim is to adapt the method of Beauchard and Zuazua to a class of symmetrizable quasilinear hyperbolic systems (containing the compressible Euler equations), in a critical regularity setting that allows to keep track of the dependence with respect to e.g. the relaxation parameter. Compared to Beauchard and Zuazua's work, we exhibit a 'damped mode' that will have a key role in the construction of global solutions with critical regularity, in the proof of optimal time-decay estimates and, last but not least, in the study of the strong relaxation limit. For simplicity, we here focus on a simple class of partially dissipative systems, but the overall strategy is rather flexible, and adaptable to much more involved situations. Introduction An important recent mathematical literature has been devoted to the study of first order systems of conservation laws. These systems that come into play in the description of a number phenomena in mechanics, physics or engineering typically read where the vector-fields f k , k = 0, · · · , d are defined on some open subset O of R n , and the unknown V depends on the time variable t ∈ R + [0, ∞) and on the space variable x ∈ R d . Under rather general conditions, for example whenever (1) is Friedrichs-symmetrizable, it is well known that for anyV in O and initial data V 0 : R d → O such that V 0 −V belongs to some Sobolev space H s (R d ) with s > 1 + d/2, then (1) supplemented with initial data V 0 admits a unique classical solution V on some time interval [−T, T ], satisfying (V −V ) ∈ C b ([−T, T ]; H s (R d )) (the reader may find the detailed statement and the proof in e.g. [4,Chap. 10]). At the same time, for most systems of the above type, smooth solutions (even small ones) blow-up after finite time. In many physical systems however, friction or diffusion phenomena (through e.g. thermal conduction or viscosity) cannot be neglected. Typically, they act on some components of the unknown, while other components remain unaffected. An informative example is gas dynamics where the mass is conserved (as well as the entropy in the isentropic case). 
In order to have an accurate description corresponding to these situations, it is thus suitable to add in (1) zero (friction) or second (diffusion) order terms that act on a part of the unknown but, possibly, not on all components. The resulting class of systems is named, depending on the authors and on the context, hyperbolic-parabolic, partially diffusive or partially dissipative. It has been extensively studied since the pioneering work by S. Kawashima in his PhD thesis [24]. One of the main issues is to find as weak as possible conditions ensuring the existence of global solutions close to constant states, to describe their long time asymptotics and, where applicable, to study the convergence to some limit system. Rather than writing out now the class of systems that enter in our study, let us give a simple example from multi-dimensional gas dynamics. In he barotropic and isothermal case, the governing equations then read: Above, ̺ = ̺(t, x) ∈ R + stands for the density of the gas, and v = v(t, x) ∈ R d , for the velocity. The pressure P = P (̺) is a given function of the density. A typical example is the isentropic pressure law P (̺) = a̺ γ with a > 0 and γ > 1. The first equation corresponds to the mass conservation and the second one, to the momentum balance. We assume that the fluid domain is the whole space which, somehow, means that boundary effects are neglected. This is a fundamental assumption for our analysis, that strongly relies on Fourier methods. It is by now well understood that in the first situation (neither viscosity nor damping), smooth initial data generate a local-in-time solution that is likely to blow up after finite time (see e.g. [1,34]) whereas in the second and third situations, small and sufficiently smooth perturbations of a constant density state (3) (̺, 0) with̺ > 0 and P ′ (̺) > 0 produce global strong solutions that are defined for all positive times. The good diffusive properties of the barotropic compressible Navier-Stokes equations in the whole space R 3 (and, more generally, of the full non-isothermal polytropic system) have been first observed by A. Matsumura and T. Nishida at the end of the 70ies. In [28], they established the global existence of strong solutions for H 3 (R 3 ) perturbations of any constant state of type (3) (see [13] for a version of this result in the broader setting of 'critical Besov spaces'). An important achievement in the study of general first order partially dissipative symmetric hyperbolic systems having both terms of order 0 and 2 has been made by S. Kawashima in 1984, in his PhD thesis [24]. There, he exhibited a rather simple sufficient condition that is nowadays called the (SK) (meaning Shizuta-Kawashima) condition for global existence of strong solutions in the neighborhood of linearly stable constant solutions. In the case where there is only a 0-order partially dissipative term, Condition (SK) exactly says that for the linearized system, the intersection between the kernel of the 0-order term and the set of all eigenvectors of the symmetric first order term is reduced to {0}. A bit later, S. Shizuta and S. Kawashima in [33] observed that Condition (SK) is equivalent to the fact that, in the Fourier space, the real parts of all eigenvalues of the matrix of the linearized system about the reference solution are strictly negative and also to the existence of a compensating function. 
That compensating function comes into play for working out a functional that is equivalent to a Sobolev norm of high order and allows to recover the optimal dissipative properties of the system. In the same paper, the authors pointed out that, if in addition of being in a Sobolev space H s (R d ) with large enough s, the discrepancy of the initial data to the reference constant solutionV belongs to some Lebesgue space L p (R d ) with p ∈ [1,2], then the global solution V converges toV in L 2 (R d ) with the same decay rate as for the heat equation, namely (1 + t) , when t goes to infinity. Since then, more decay estimates have been proved under various assumptions in e.g. [5,37,40]. A number of more accurate results have been obtained since then for specific systems. For instance, T. Sideris et al [35] considered the three-dimensional compressible Euler equations with damping and Y. Zeng [43] studied a particular class of 4×4 nonlinear hyperbolic system with relaxation. General partially (0-order) dissipative systems have been investigated by S. Kawashima and W.-A. Yong in [25,26] and by W.-A. Yong in [42], and adapted to second order partially diffusive operators by V. Giovangigli et al in [18,19]. Recent works on general partially dissipative systems in the so-called critical functional framework (that will be recalled later in this text) have been performed by J. Xu and S. Kawashima [38,39,40]. It has also been observed by several authors that Condition (SK) is not necessary for the existence of global strong solutions. For instance, in [31], P. Qu and Y. Wang established a global existence result in the case where exactly one eigenvector violates Condition (SK). In this respect, one can also mention the paper by R. Bianchini and R. Natalini [6] that uses nonresonant bilinear forms, and the recent work [8] dedicated to the mathematical study of a model of mixture of compressible fluids. The strength of Shizuta and Kawashima's approach is that it does not require to compute explicitly the Green function of the linearized system under consideration. Although doing this calculation for the damped barotropic Euler equations presented above is not an issue, computing the Green kernel associated to the corresponding linearized system in the nonisothermal case is already more involved, and it soon becomes impossible for more cumbersome systems (like e.g. systems related to the description of plasma or radiative phenomena, see e.g [16]). As said before, having a 'compensating function' at hand allows to construct an energy functional that encodes the dissipative properties of the system. In Shizuta and Kawashima's work however, this functional is not so explicit, that makes difficult, if not impossible, to track the dependency of the solution with respect to the parameters of the system, when applicable. Another limitation is that it only provides estimates on the whole solution, without supplying more accurate informations on the part of the solution which is expected to experience a better dissipation. In [3], K. Beauchard and E. Zuazua took advantage of techniques that originate from Kalman control theory for linear ODEs so as to construct explicit Lyapunov functionals for general partially dissipative systems of order 1. They also pointed out the connection between Condition (SK) and the Kalman criterion for observability in the theory of linear ODEs (this was also noticed by D. Serre in his unpublished lecture notes [32]). 
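A small numerical experiment makes the two criteria discussed above concrete for the linearized damped Euler example that appears later in the text, written for Z = (a, u) with unit sound speed and unit damping coefficient. The sketch below checks, for a few random frequency directions, both the Kalman-type rank condition on (A, B) and the strict negativity of the real parts of the eigenvalues of the linearized symbol; it is only an illustration under these normalizations, not the authors' code.

```python
import numpy as np

# Illustration of the two equivalent criteria on the linearized damped isothermal
# Euler system for Z = (a, u): d/dt a + div u = 0, d/dt u + grad a + u = 0.
# In Fourier variables the system reads dZ^/dt = -(i*A(xi) + B) Z^, with
# A(xi) = [[0, xi^T], [xi, 0]] and B = diag(0, I_d).  We check
# (i) the Kalman rank condition rank[B; BA; ...; BA^{n-1}] = n on the unit sphere, and
# (ii) that all eigenvalues of -(i*A(xi) + B) have negative real parts for xi != 0.

d = 3                      # space dimension
n = d + 1                  # system size: a plus d velocity components
rng = np.random.default_rng(0)

def A_of(xi):
    A = np.zeros((n, n))
    A[0, 1:] = xi
    A[1:, 0] = xi
    return A

B = np.diag([0.0] + [1.0] * d)

for _ in range(5):
    xi = rng.normal(size=d)
    omega = xi / np.linalg.norm(xi)

    # (i) Kalman rank condition in the direction omega.
    K = np.vstack([B @ np.linalg.matrix_power(A_of(omega), k) for k in range(n)])
    assert np.linalg.matrix_rank(K) == n

    # (ii) spectral criterion (strict dissipativity) at the frequency xi.
    eigs = np.linalg.eigvals(-(1j * A_of(xi) + B))
    assert np.max(eigs.real) < 0

print("Kalman rank condition and strict dissipativity both hold for the sampled frequencies.")
```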
To some extent, Beauchard and Zuazua's approach may be interpreted in the broader framework of hypo-ellipticity as presented by L. Hörmander in [22] or, much more recently, by C. Villani in [36]. To keep these notes as elementary and short as possible, we refrain from looking deeper into this direction, though. Although it is not mentioned in the construction of a Lyapunov functional, Beauchard and Zuazua's approach provides for free compensating functions. Furthermore, the construction is elementary (it suffices to compute at most n powers of matrices) and easily localizable in the Fourier space. Hence, at the linear level, keeping track of the different behavior of the low and of the high frequencies of the solution is obvious. Their method further allows to handle some systems that do not satisfy Condition (SK) (but we shall not investigate this interesting point is these notes). The present lecture notes aim at familiarizing the reader with the Beauchard-Zuazua approach and recent updates that originate from the thesis of T. Crin-Barat and were published in [10,11,12]. As our aim is not to provide the reader with an exhaustive theory of partially dissipative hyperbolic systems but rather to present a clear road map allowing him to tackle efficiently the study of systems of this type, we shall focus on the following 'academic' class of partially dissipative hyperbolic systems: Above, the (smooth) functions A k (k = 1, · · · , d) and H are defined on some open subset O of R n , and have range in the set of n × n real symmetric matrices, and in R n , respectively. The unknown V = V (t, x) depends on the time variable t ∈ R + and on the space variable x ∈ R d (d ≥ 1). We fix a constant solutionV ∈ O of (4) (hence H(V ) = 0). The system is supplemented with initial data V 0 ∈ O at time t = 0, that are sufficiently close toV . Finally, the relaxation parameter ε is a given positive parameter that, except in Section 4, is taken equal to 1. A basic example of a physical system in the above class is the compressible Euler equations with isentropic pressure law P (̺) = a̺ γ , if rewritten in terms of the (renormalized) sound speed Indeed, the pair (c, v) then satisfies: Under the so-called Condition (SK) (presented in the next section) that is satisfied in particular by (5), we shall prove the existence of global strong solutions with 'critical regularity' for (4) in the neighborhood of any constant solutionV (see Theorems 2.1 and 2.2). Then, we shall obtain the strong convergence toV in the long time asymptotics with explicit decay rates (Theorem 3.1). In Section 4, we shall investigate the strong relaxation limit, that is the convergence of the solutions of (4) to some limit system. Let us shortly explain what we mean in the simple case of the compressible Euler equations. Making the following 'diffusive' rescaling: we see that the pair ( ̺, v) satisfies: Hence, formally, if ̺ and v tend to some functions N and w, then the second equation above yields ∇(P (N )) + N w = 0 which, plugged in the mass conservation equation leads to the so-called porous media equation: The rigorous justification of the convergence of the density to a solution of (6) has been first carried out by S. Junca and M. Rascle [23] in the one-dimensional case where specific techniques may be used. In the multi-dimensional case, the weak convergence and the strong convergence on bounded subsets of R d have been proved by J.-F. Coulombel and C. Lin in [27], and by Z. Wang and J. Xu in [41]. 
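A hedged reconstruction of the diffusive rescaling and of the formal porous media limit referred to above, consistent with the relation ∇(P(N)) + Nw = 0 quoted in the text and with the usual treatment of the damped Euler equations, is the following (the exact scaling convention used by the author may differ):

```latex
% Hedged sketch of the formal relaxation-limit computation; the precise
% normalization in the original text may differ.
\[
  \tilde\varrho(\tau,x) \;=\; \varrho\bigl(\tau/\varepsilon,\,x\bigr),
  \qquad
  \tilde v(\tau,x) \;=\; \varepsilon^{-1}\, v\bigl(\tau/\varepsilon,\,x\bigr),
\]
so that the damped Euler system becomes
\[
  \partial_\tau \tilde\varrho + \operatorname{div}(\tilde\varrho\,\tilde v) = 0,
  \qquad
  \varepsilon^{2}\bigl(\partial_\tau \tilde v + \tilde v\cdot\nabla \tilde v\bigr)
   + \frac{\nabla\bigl(P(\tilde\varrho)\bigr)}{\tilde\varrho} + \tilde v = 0.
\]
Letting $\varepsilon\to0$ with $(\tilde\varrho,\tilde v)\to(N,w)$ gives
$\nabla\bigl(P(N)\bigr) + N\,w = 0$; inserting $w=-\nabla\bigl(P(N)\bigr)/N$
into the mass equation yields the porous media equation
\[
  \partial_\tau N \;-\; \Delta\bigl(P(N)\bigr) \;=\; 0.
\]
```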
Results in the same spirit for a class of partially dissipative hyperbolic systems have been obtained by Y.-J. Peng and V. Wasiolek in [30]. The approach that is proposed in the present lecture notes allows to get the strong convergence in the whole space with explicit convergence rates for suitable norms when the relaxation parameter tends to zero not only for the Euler equations, but also for a class of partially hyperbolic systems (see Theorem 4.1). It should be noted that, at the linear level, the method that has been originally proposed by K. Beauchard and E. Zuazua in [3] works exactly the same for partial differential operators of any order (and, more generally, for homogeneous Fourier multipliers) provided one of them is skew-symmetric and the other one, nonnegative. We will enrich this method by exhibiting a 'damped mode' for low frequencies, first introduced in [10] and [11] to the best of our knowledge. This the key to an optimal treatment of the low frequencies of the solution in a critical framework. With almost no additional effort, assuming a bit more integrability on the initial data (expressed in terms of negative Besov spaces like in the work [40] by J. Xu and S. Kawashima), and arguing essentially as in the paper by Y. Guo and Y. Wang [21], we will derive optimal time decay estimates, pointing out better decay for the high frequencies of the solution and for the damped mode. It turns out that adopting a critical approach with different levels of regularity for low and high frequencies also allows to keep track of the relaxation parameter ε just by suitable space/time rescaling. This substantially simplifies the study of the strong relaxation limit. Here again, having a damped mode at hand plays an essential role. Except for our linear analysis, we here concentrate on first order hyperbolic symmetric systems with a partial dissipation term of order 0. The class that is considered contains the isentropic Euler equations with relaxation. We expect the whole strategy modified accordingly to be adaptable to hyperbolic-parabolic systems, to operators of any order and to more complex situations where the partially dissipative terms have mixed orders (see recent examples in [16] and [8]). It would also be of interest to study to what extent it may be adapted to situations where pseudo-differential operators depending on the space variable come into play. Since we used mostly Fourier analysis in our investigations, most of our results can be adapted to periodic boundary conditions in one or several directions, leading to the same statements in the first three sections (the strong relaxation limit studied in Section 4 may be different since the rescaling we used there changes the size of the periodic box). Handling 'physical' boundaries requires completely different tools, and we have no opinion on whether similar results are true or not. The rest of these notes unfolds as follows. In the next section, we present Beauchard and Zuazua's approach for linear partially dissipative hyperbolic systems with operators of any orders. This enables us to deduce quite easily global-in-time a priori estimates in 'hybrid' Besov spaces with different regularity exponents for low and high frequencies. We also exhibit a damped mode, the low frequencies of which satisfy better decay estimates and point out that, under additional structure conditions on the system, it is possible to use without much effort an L p functional framework for the low frequencies. 
The following sections focus on the nonlinear system (4). In Section 2, we prove global-in-time results while time decay estimates are established in Section 3. In Section 4, we prove strong convergence results when the relaxation parameter ε tends to 0 for partially dissipative systems having the same structure as the isentropic compressible Euler equations with damping. A few technical results are recalled or proved in Appendix. Acknowledgements. The author is indebted to the anonymous referees for their relevant remarks and suggestions that contributed to improve substantially the organization of these notes. The author has been partially supported by the ANR project INFAMIE (ANR-15-CE40-0011). The linear analysis To better understand the difference between the three model situations corresponding to System (2), having first a look at the linearized equations about (̺, 0) is very informative. After suitable renormalization, the system to be considered reads: The above cases correspond to (κ, β) = (0, 0), (κ, β) = (f, 0) with f > 0 or (κ, β) = (µ(̺), 2) (in the special situation λ(̺) + µ(̺) = 0 the general case being similar), respectively. If κ = 0 then System (7) is purely first order hyperbolic and no diffusion or dissipative phenomenon is expected whatsoever since all Sobolev norms are constant in time. In the multi-dimensional case, dispersive phenomena of wave equation type do exist, but they concern only the density and the potential part of the velocity (they will not be discussed here). Let us revert to our model system (4) with ε = 1 for simplicity, namely Let us fix a constant solutionV of (9) (that is,V ∈ O satisfies H(V ) = 0) and make the following structure assumptions on the system: (H1) For all V ∈ O, the matrices A k (V ) are real symmetric; (H2) The spectrum of DH(V ) is included in the set {z ∈ C : Re z ≤ 0}. In the case H ≡ 0 (no dissipation at all) smooth solutions, even small ones, may blow up after finite time. At the exact opposite, if the spectrum of DH(V ) is included in the set {z ∈ C : Re z < 0} then it is not difficult to show that small perturbations ofV in the Sobolev space H s with s > 1+d/2 generate global strong solutions that tend exponentially fast toV when time goes to infinity. We here address the intermediate situation where some eigenvalues of DH(V ) vanish. For expository purpose, we assume that H is linear and has the block structure: where V 1 ∈ R n 1 , V 2 ∈ R n 2 (with n 1 + n 2 = n) and L 2 : R n 2 → R n 2 is linear invertible and such that L 2 + t L 2 is definite positive. Additional structure assumptions on L 2 and on the matrices A k will be specified later on. 1.1. Reduction of the problem. Denoting Z V −V and LZ −H(V + Z) (with H as in (10)), the system for Z reads and the corresponding linearized system is thus In the Fourier space, the above system recasts in The symmetry of the matricesĀ j ensures that for all ξ ∈ R d , the matrix is skew Hermitian, while the symmetric part of L is nonnegative. Denoting by A(D) (resp. B(D)) the Fourier multiplier of symbol 1 A (resp. L), System (12) rewrites The analysis we present below is valid in the more general situation where: is a homogeneous (matrix-valued) Fourier multiplier of degree α that satisfies where · designates the Hermitian scalar product in C n , • B(D) is an homogeneous (matrix-valued) Fourier multiplier of degree β, such that, for some positive real number κ, Re B(ω)η · η ≥ κ|B(ω)η| 2 for all ω ∈ S d−1 and η ∈ C n . 
As a first example, if one considers the linearized damped compressible Euler equations about (̺, v) = (1, 0) in the case P ′ (1) = 1, namely then we have n 1 = 1, n 2 = d, and the Fourier multipliers A and B read: They are of order 1 and 0, respectively. Clearly, (15) holds true, as well as (16) with κ = f −1 . System (14) may be solved by means of Duhamel's formula: where (T (t)) t≥0 stands for the semi-group associated to operator −(A + B)(D). The value of T (t) may be computed by going into the Fourier space. Indeed, denote by Z the Fourier transform of Z with respect to x, and by ξ the corresponding Fourier variable. Then, in the case F = 0, System (14) rewrites: Hence Z(t, ξ) = exp(−E(ξ)t) Z 0 (ξ). In other words, we have T (t) = exp(−E(D)t). With this notation, we have Making the change of variable τ (tρ β )/κ and r κρ α−β , we discover that z(τ ) Z(t, ξ) is the solution to Hence, the case α = 1, β = 0 and κ = 1, is generic at the linear level. 1.2. Derivation of a Lyapunov functional. The long time behavior of z is closely connected to the signs of the real part of the eigenvalues of the matrix E r,ω defined in (22). The method proposed by K. Beauchard and E. Zuazua in [3] (see also [14,15]), that is inspired by Kalman's control theory for linear ODEs supplies a simple way for constructing an explicit Lyapunov functional and a dissipation term altogether without computing the eigenvalues. To explain the construction, fix some r > 0 and ω ∈ S d−1 , and consider the ODE (22) satisfied by z. Combining the assumptions (15) and (16) with the renormalization (20) ensures that (23) Re ((A ω η) · η) = 0 and Re (( Hence, taking the Hermitian product in C n of (22) with z and keeping the real part yields If B ω has rank strictly smaller than n, then the above inequality does not ensure decay of all the components of z (even though this decay exists whenever r > 0 and ω ∈ S d−1 are such that the real parts of all the eigenvalues of the matrix E r,ω are positive). To recover the decay (if any) for the 'missing components' of the solution, one can start with the identity (B ω z) ′ + (rB ω A ω + B 2 ω )z = 0. Hence, taking the Hermitian product with BAz (we drop the index ω for better readability), we obtain Remembering (24) and using several times the obvious inequality with suitable values of K, we discover that one can find some ε 1 (that can be taken arbitrarily small) such that In the case BA 2 = 0, we need (at least) one more relation to handle the term in the right-hand side. For that, one can start from the equation (BAz) ′ + (rBA 2 + BAB)z = 0 and take the Hermitian scalar product with BA 2 z, adding up the resulting identity multiplied by a small enough ε 2 to (25), then iterate the procedure. The fundamental observation of Beauchard and Zuazua in [3] is that Cayley-Hamilton theorem ensures the existence of complex numbers c 0 , · · · , c n−1 so that Consequently, one can end the process after at most n steps. In the end, we get positive parameters ε 0 = 1 and ε 1 , · · · , ε n−1 (that are defined inductively and can be taken arbitrarily small) such that for all ω ∈ S d−1 and r > 0, we have and, additionally, (26) and In the particular case where (the only situation that will be considered in these notes) then N ω is actually bounded away from zero owing to the compactness of the sphere. 
Hence, (28) implies that there exists a positive constant c such that for all r > 0 and ω ∈ S d−1 , we have Then, using once more (27) and reverting to the original unknown Z, we conclude that In other words, if (29) holds then: • either α > β, and we are in a partially dissipative regime similar to that of linearized compressible Euler equations, • or α < β, and we are in a partially diffusive regime analogous to that of the linearized compressible Navier-Stokes equations. It has been pointed out in [3] that (29) is equivalent to the Shizuta-Kawashima condition. The following lemma stresses the link between those two conditions, the strict dissipativity of System (11) and Kalman's condition for observability. Lemma 1.1. Let A and B be two n × n complex valued matrices. Assume that A is skew-symmetric in the meaning of (15) and that B is nonnegative in the sense of (16). The following properties are equivalent: (1) For all positive ε 0 , · · · , ε n−1 , we have n−1 ℓ=0 ε ℓ |BA ℓ η| 2 > 0 for all η ∈ S n−1 . (2) We have the Kalman rank property, namely the n 2 × n matrix Proof. The equivalence between the first three items is basic linear algebra (see details in e.g. [3]), while Inequality (28) (with A ω = A, B ω = B and r = 1) ensures equivalence with the last item. As an example, let us again consider the linearized compressible Euler equations (17). As said before, (15) and (16) Hence Bω BωAω has rank d + 1 and Kalman rank condition is thus satisfied, which gives eventually Since we do not need higher powers of A ω to ensure the Kalman rank condition, one can suspect that one can restrict the sum in the definition of the Lyapunov function L r,ω to only one term (ℓ = 1). Now, the reader may observe by direct computation that C|B ω z| 2 and taking ε 1 sufficiently small in (25) allows to just have One can be more explicit : since z = ( a, u) and ξ = rω, we have Hence, we conclude that the Lyapunov functional is of the form Combining with Fourier-Plancherel theorem, one can conclude that in order to recover the full dissipative properties of the linearized compressible Euler equations, it suffices to consider the functional with suitably small ε 1 or, rather, spectrally localized versions of it. Similar computations are valid for the linearized compressible Navier-Stokes equations (18). The reader may find more details in [13]. 1.3. Derivation of a priori estimates. Let us assume from now on that κ = 1, α = 1 and β = 1 in (14) (since the general case α = β reduces to that one). Recall Duhamel's formula (19). Combining with (30), we get Clearly, if one wants to get optimal estimates then low and high frequencies have to be treated differently. To proceed, we shall actually use a more accurate decomposition of the Fourier space, namely a dyadic homogeneous Littlewood-Paley decomposition (∆ j ) j∈Z defined by∆ j ϕ(2 −j D). Here, ϕ is a smooth nonnegative function on R d , supported in (say) the annulus {ξ ∈ R d , 3/4 ≤ |ξ| ≤ 8/3} and satisfying By construction,∆ j is a localization operator in the vicinity of frequencies of magnitude 2 j . Since∆ j commutes with any Fourier multiplier, each Z j ∆ j Z satisfies (14) with Consequently, after taking the L 2 (R d ) norm of both sides, then using Minkowski inequality and Fourier-Plancherel theorem, we end up with: At this stage, two important observations are in order. First, note that Hence, in order to get a Sobolev estimate of Z, it suffices to multiply (31) by 2 js then to perform an ℓ 2 -summation on j ∈ Z. 
However, the second term of (31) will not exactly give an estimate in some space L 1 (0, t;Ḣ σ ) since the time integration has been performed before the summation with respect to j : one ends up in one of the Chemin-Lerner (or 'tilde') spaces that have been introduced in [9]. They turn out to be delicate to manipulate and not adapted to the critical regularity setting we have in mind. The second observation is that, owing to the factor min(1, 2 2j ), in order to track as much information as possible, it is suitable to work with different regularity exponents for low and high frequencies. Putting the two observations together, this motivates us to multiply (31) by 2 js with a different value of the 'regularity exponent' s for negative and positive j 's, then to perform an ℓ 1 summation with respect to j. The advantage of ℓ 1 summation -that corresponds to Besov norms with last index 1 -is that one can freely exchange time integration and summation on j. Taking into account the possible difference of regularity between the low and high frequencies leads us to introduce for all pair (s, s ′ ) ∈ R 2 the hybrid Besov space B s,s ′ 2,1 , that is the set of all tempered distributions z such that Above, χ stands for a compactly supported smooth function on R d such that χ(0) = 1, and the condition on χ(2 −j D)z implies that z has to tend to 0 at ∞ in the sense of tempered distributions 2 . Classical homogenous Besov spaces correspond to s = s ′ and will be denoted byḂ s 2,1 . In what follows, it will be sometimes convenient to use the following notation for all σ ∈ R: Even though most of the functions we shall consider here will have range in the set of vectors or even matrices, we shall keep the same notation for Besov spaces pertaining to this case. Now, multiplying (31) by 2 js (resp. 2 js ′ ) for j ≤ 1 (resp. j ≥ 0) and summing up on j ≤ 1 (resp. j ≥ 0) leads to Hence, putting together those two inequalities yields Since a part of the solution experiences direct dissipation, one can suspect the low frequency integrability we get in this way to be not optimal. Recovering better integrability for a part of the solution is the goal of the next subsection. 1.4. The damped mode. Assume that the system has an orthogonal block structure, that is independent of the frequency, namely Denote by P the orthogonal projector onto M ⊥ and set Since P and B commute, we get the following equation for W : this may be rewritten: As A(D) and B(D) are of order 1 and 0, respectively, multipliers of orders 1 and 2, act on W and Z in the right-hand side. Hence the low frequencies of the corresponding terms are expected to be negligible compared to the left-hand side of (37). To make this heuristics rigorous, let us look at the equation for W j ∆ j W, namely Taking the Hermitian scalar product in C n with W j , using (16), the fact that B(D) is 0-order and that A(D) is 1-st order yields Hence, integrating on R d and taking advantage of the Fourier-Plancherel theorem yields: from which we eventually get for all t ≥ 0 and j ∈ Z, owing to Lemma A.1, Therefore, if we multiply by 2 js and sum up on j ≤ j 0 with j 0 chosen so that C2 j 0 ≤ 1/2, then we end up with The last term may be controlled by the data according to (33). Furthermore, W j L 2 Z j L 2 for all j < 0, and 2 j(s+2 ) ≃ 2 js for j 0 ≤ j < 0. Hence the above inequality still holds if one sums up to j = 0. In the end, this allows us to get the following additional bound: dτ. Let us finally look at the part of Z that undergoes direct dissipation, namely Z 2 PZ. 
We claim that, as expected, the low frequencies of Z 2 have better time integrability than the overall solution Z. Indeed, observing that B(D)Z 2 = W − PA(D)Z and that PB(D) (restricted to functions defined on M ) is invertible, we may write Hence, since (PB)(D) (resp. A(D)) is a 0-order (resp. 1-st order) Fourier multiplier, we may write Then, remembering (33) and using Hölder inequality and interpolation in Besov spaces when needed yields dτ. This has to be compared by the following (optimal) inequality for Z : dτ. 1.5. An L p approach. In this part, we are going to show that under slightly stronger structure assumptions 3 on the linear system (12) than those that have been made so far, it is possible to bound the low frequencies of the solution on functional spaces built on L p for any p ∈ [1, ∞]. This unusual setting is in sharp contrast with the non dissipative case. In fact, as pointed out by P. Brenner in [7], apart from the notable exception of the transport equation, 'most' first order 'purely' hyperbolic systems are ill-posed in L p if p = 2. It turns out that for nonlinear partially dissipative systems satisfying the structure assumptions of this part, it is also possible to use, at least partially, an L p type framework (see details in [10,12]). This offers one more degree of freedom in the choice of solutions spaces allowing not only to prescribe weaker smallness conditions for global well-posedness, but also to get more accurate informations on the qualitative properties of the constructed solutions. In order to proceed, let us assume without loss of generality that M = R n 1 × {0} and . For expository purpose, further assume that there is no source term (F = 0). Then, System (11) may be rewritten by blocks as follows: where the 0-order Fourier multiplier B 22 (D) has symbol in M n 2 (R), and so on. In the spirit of the computations of the previous paragraph, let us introduce This definition of a damped mode is consistent with the one we had before: we just applied to (36) the 0-order operator (B 22 (D)) −1 that corresponds to the inverse of PB(D) restricted to M. Now, we note that ∂ t Z 2 + B 22 (D)W = 0 and that from the definition of Z , we have Hence, using System (39) for computing ∂ t Z, we get the following equation: In order to pursue our analysis, we make the following assumption: is a positive operator. By positive, we mean that the symbol A 12 B −1 22 A 21 has range in the set of positive Hermitian matrices of size n 2 . For this particular structure, the above hypothesis turns out to be equivalent to Condition (SK) (see Lemma A.3). Then, after applying∆ j to (41) and for (42), we obtain: Using Duhamel formula for computing Z 1,j from the first equation of (44), we get Since A(D) is second order positive and satisfies the assumptions of Lemma A.2, there exist two constants c and C such that the following bound holds: Then, we get from Bernstein inequality (151), remembering that all the blocks of A(D) are homogeneous multipliers of degree 1 and that B −1 22 (D) is homogeneous of degree 0, whence taking the supremum or the integral on [0, t], Similarly, Lemma A.3 guarantees that we have which allows to get eventually Owing to the factor 2 j , there exists an integer j 0 ∈ Z so that the last term may be absorbed by the left-hand side for all j ≤ j 0 . Hence, multiplying by 2 js then summing up on j ≤ j 0 yields, with the notation z ℓ,j 0 dτ while the inequality for Z 1 gives us dτ. 
The definition of W in (40) ensures that for all j ≤ j 0 (with negative enough j 0 ), there holds that Hence, adding up ε·(47) to (48) with ε small enough and negative enough j 0 , we conclude that Of course, combining with (49) yields also By the same token, if we consider a source term F in (39), one gets the following bound: which is actually the same as the one we proved before for p = 2. At the linear level, there is no restriction on the value of p: it can be any element of [1, ∞]. Reverting to the initial nonlinear system (11), it is possible to work out a functional framework of L p type for the low frequencies of the solution. However, owing to the interactions between the low and high frequencies through the nonlinear terms, there are some restrictions on p. The most obvious one is that, if combining Bernstein and Hölder inequalities for estimating the medium frequencies in a L 2 type space of a product of low frequencies that belong to a L p type space, one needs to have p ∈ [2,4]. In high dimension, there are stronger restrictions on p. The reader is referred to [10,12] for more details and complete statements. Global existence in the critical regularity setting The principal aim of this section is to prove the global existence of strong solutions for (9) supplemented with initial data that are a perturbation of a constant stateV satisfying Condition (SK). For notational simplicity, we assume thatV = 0 so that the system under consideration reads 4 It is assumed that the (smooth) given functions A 1 , · · · , A d range in the set of n × n real symmetric matrices, and that B = 0 0 0 L 2 with L 2 ∈ GL n 2 (R) satisfying for some c > 0, According to the linear analysis that was performed in the previous paragraph in the context of System (50), Condition (SK) is equivalent to: 4 The reader is referred to [11] for the proof of similar results for more general symmetrizable quasilinear partially dissipative hyperbolic systems satisfying (SK). 2.1. The main results. In order to find out a suitable functional framework for solving (50), let us temporarily consider a smooth solution Z. Taking advantage of the symmetry of the matrices A k and integrating by parts, one gets the following 'energy identity': Therefore, combining with (51) and Gronwall inequality, we discover that Hence, even for controlling the L 2 norm of the solution, a bound of ∇Z in L 1 loc (R + ; L ∞ ) is needed. Since no gain of regularity can be expected on the whole solution (see (34)), we must assume that Z 0 belongs to a functional space X that is embedded in the set of globally Lipschitz functions. If X = H s then this embedding holds if and only if s > d/2 + 1. In the framework of Besov spaces with last index 1, one can reach the critical index s = d/2 + 1, owing to the (critical) embedding Hence, X must be a subspace ofḂ 2,1 . Consequently, we shall take s ′ = 1 + d/2 in (34). As regards the value of the regularity exponent s in (33) for the low frequencies, a natural candidate is s = −1 + d/2 since (33) and (34) together give us a control of Z in 2,1 ) (provided we succeed in bounding in L 1 (R + ;Ḃ d 2 −1 2,1 ) the nonlinear term F ), and thus of ∇Z in L 1 (R + ; L ∞ ). Having at our disposal global L 1 -in-time estimates for the solution will be particularly comfortable for further analysis in contrast with the classical 'Sobolev' approaches for partially dissipative systems where only L 2 -in time estimates are available. 
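As a concrete illustration of the symmetry assumption on the matrices A_k, one can check that the one-dimensional isentropic Euler equations with relaxation fall into this framework once rewritten in terms of a multiple of the sound speed. The SymPy sketch below performs the verification; the pressure law P(rho) = A rho^gamma and the normalisation w = 2c/(gamma - 1) are choices made here for illustration and need not coincide with the exact reformulation used elsewhere in the text.

# Check with SymPy that the 1-D isentropic Euler equations with relaxation,
#   d_t rho + d_x(rho v) = 0,   d_t v + v d_x v + (1/rho) d_x P(rho) = -v/eps,
# with P(rho) = A rho^gamma, become a symmetric quasilinear system in the
# variables U = (w, v), where c = sqrt(P'(rho)) and w = 2c/(gamma - 1).
import sympy as sp

rho, v, A, gamma = sp.symbols('rho v A gamma', positive=True)
c = sp.sqrt(A * gamma * rho**(gamma - 1))          # sound speed
w = 2 * c / (gamma - 1)

# Coefficient matrix acting on (d_x rho, d_x v) in the (rho, v) formulation
M_rho = sp.Matrix([[v, rho],
                   [c**2 / rho, v]])

# Change of variables U = (w(rho), v): the new convection matrix is J * M_rho * J^{-1}
J = sp.Matrix([[sp.diff(w, rho), 0],
               [0, 1]])
A1 = sp.simplify(J * M_rho * J.inv())

print(sp.simplify(A1 - A1.T))   # zero matrix: A1 is symmetric
print(A1)                       # should reduce to [[v, c], [c, v]]

In these variables the convection matrix is symmetric and the damping term acts on the velocity component only, in accordance with the block structure imposed on B.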
To make a long story short, a good candidate for a solution space is the set of functions 2,1 ). According to the linear analysis presented before, one can expect to get additional informations for low frequencies, through the damped mode W defined in (40), that is essentially equivalent to ∂ t Z 2 in our context. We will eventually obtain the following result that will be proved in the next subsection. System (50) supplemented with initial data Z 0 admits a unique global-in-time solution Z in the set 2,1 ) and ∂ t Z 2 ∈ L 1 (R + ;Ḃ and a constant C depending only on the matrices A k and on L 2 , and such that . Choosing regularity d/2 − 1 for low frequencies has some disadvantages, though: • it does not allow to treat the mono-dimensional case since the low frequencies of the nonlinear terms of type DZ × Z cannot be estimated in L 1 (R + ;Ḃ d 2 −1 2,1 ) (this is the needed regularity for the right-hand side of (33)). Indeed, the numerical product does not mapḂ • it does not provide us with uniform bounds in the high relaxation asymptotics (see the beginning of Section 4 for more explanations). Another possible choice is s = d/2. Then, the solution space becomes the set of Z in 2,1 ) plus crucial informations from the damped mode that, in particular, will ensure that ∇Z 2 ∈ L 1 (R + ;Ḃ d 2 2,1 ) and Z 2 ∈ L 2 (R + ;Ḃ d 2 2,1 ). This alternative framework allows to consider initial data that are less decaying at infinity (regularityḂ d 2 2,1 for low frequencies is less stringent thanḂ d 2 −1 2,1 ), to handle the one-dimensional situation, and to provide crucial uniform a priori bounds in the strong relaxation limit. The only drawback is that this alternative framework requires seemingly stronger structure assumptions on the system (that are nevertheless fulfilled by the compressible Euler equations). In order to specify them, let us rewrite System (50) by blocks as follows: Then, we need the following additional assumption: (H3) For all k ∈ {1, · · · , d},Ā k 11 = 0 and Z → A k 11 (Z) is linear with respect to Z 2 . Note that in the context of gas dynamics, the above assumption just means that there are no terms like ∇̺ or ̺∇̺ in the density equation, which is indeed the case ! Theorem 2.2. In general dimension d ≥ 1, let the assumptions of Theorem 2.1 concerning system (9) be in force and assume in addition that (H3) holds true. Then, there exists a positive constant α such that for all 2,1 ) and ∂ t Z 2 ∈ L 1 (R + ;Ḃ . Theorem 2.2 directly applies to the isentropic compressible Euler equations with relaxation, written in terms of the sound speed c and of v (that is, System (5)). The result we get reads as follows: , and satisfy A similar statement holds true for the barotropic compressible Euler equations with general smooth pressure law P satisfying P ′ > 0 in the neighborhood of the reference density̺, although one cannot 'symmetrize' the system any longer by using the sound speed. For more details, the reader may refer to [11] where a class of partially dissipative systems, more general than (9), is considered. In the rest of this section, we focus on the proof of Theorem 2.1. The reader is referred to [11] for more general systems and for the proof of Theorem 2.2. A similar statement in the L p framework has been established in [12]. 2.2. A priori estimates. The overall strategy is to apply the Littlewood-Paley truncation operator∆ j to (50), then to follow the method that has been described in the previous section so as to get optimal estimates in L 2 for each dyadic block. 
Performing eventually a suitable weighted summation on j will lead to the control of Besov norms of the solution, as stated in Theorem 2.1. Throughout, we assume that we are given a smooth and sufficiently decaying solution of (50) on [0, T ] × R d such that is sufficiently small. We shall use repeatedly that, owing to the embedding (53), the solution Z is also small in L ∞ ([0, T ] × R d ). • Low frequencies. Let us denote Z j ∆ j Z and F j ∆ j F, with We see that for all j ∈ Z, Hence, taking the L 2 (R d ; R n ) scalar product with Z j and using that the first order terms are skew-symmetric yields The term with B may be bounded from below according to (51). Hence, using Cauchy-Schwarz inequality for bounding the right-hand side delivers for some c > 0, In order to recover the full dissipation, we proceed as in the previous section, introducing the functional L r,ω defined in (26). Adapting the computations therein to the case where the source term in (12) is nonzero, we get for all r > 0 and ω ∈ S d−1 (with the notations of (20)): In light of Cauchy-Schwarz inequality, the sum in the right-hand side may be bounded by L r,ω ( Z j )L r,ω ( F j ). Hence, using that Condition (SK) and (27) ensure the existence of a positive constant c 0 > 0 such that for all r > 0 and ω ∈ S d−1 , we conclude that Let us denote for all j ∈ Z, Integrating (66) on R d and observing that, by virtue of (27), we have Therefore, taking X = L j and A = C F j L 2 in Lemma A.1, then multiplying by 2 j( d delivers for all j < 0, In order to bound the right-hand side, it suffices to combine the following facts that are proved in e.g. [2, Chap. 2]: (69) For d ≥ 2, the numerical product mapsḂ and, for all smooth function Φ : R d → R p vanishing at 0, Hence, remembering also (62), we conclude that So, finally, there exist two positive constants c 0 and C such that for all j < 0, we have dτ with j∈Z c j = 1. • High frequencies. In order to bound the high frequency part of the solution, we shall keep the functional L j , but one cannot look at F defined in (63) as a source term since this would entail a loss of one derivative. To overcome the difficulty, we mimic the proof of the L 2 estimate recalled at the beginning of this section, writing the system for Z j ∆ j Z as follows: Then, taking the L 2 (R d ; R n ) scalar product with Z j and integrating by parts yields: Thanks to the embedding (53) and to the definition of · Ḃs Hence, owing to (51), we have for all s ∈ (−d/2, d/2 To recover the full dissipation, one has to compute for all r ≥ 1 and ω ∈ S d−1 , the time derivative of as it will generate the term n−1 k=1 ε k 2 |BA k ω Z j | 2 , that is, the missing dissipation. To proceed, one can keep F defined in (63) as a source term and start from (64). For j ≥ 0, the term r −1 yields the factor 2 −j that exactly compensates the loss of one derivative when estimating F inḂ . Now, adding up the relation we get for r −1 L r,ω ( Z j ) (after space integration) to (73) yields for all j ≥ 0: and (65) guarantees that Hence, using Lemma A.1, multiplying by 2 j( d 2 +1) and taking advantage of (63) and (75), we end up with dτ. • Conclusion. 
Let us put Since L j ≃ Z j 2 L 2 , we have the following equivalence: Note that this implies that Hence, we deduce from (71) and (78) that We claim that there exists α > 0 such that if L(0) < α then, for all t ∈ [0, T ], we have Indeed, let us choose α ∈ (0, c/(2C)) so that L ≤ α implies that (62) is satisfied, and set The above set is nonempty (as 0 is in it) and contains its supremum since L is continuous (remember that we assumed that Z is smooth). Hence we have Using the smallness hypothesis on L(0), one may conclude that L < α on [0, T 0 ]. As L is continuous, we must have T 0 = T and (83) thus holds on [0, T ]. Clearly, time t = 0 does not play any particular role, and one can apply the same argument on any sub-interval of [0, T ], which leads to: Hence, provided that Z 0 , is small enough, L is a Lyapunov functional that is, in light of (80), equivalent to Z 2.3. The damped mode. Define W by the relation: Since L 2 is invertible, the second line of (57) yields which allows to get the following equation for W : Applying∆ j to the above relation and denoting W j ∆ j W leads to Using (51), an energy method and Lemma A.1, we get two positive constants c and C such that for all j ∈ Z and t ∈ [0, T ], Bernstein inequality (150) guarantees that Hence, there exists j 0 ∈ Z such that for all j ≤ j 0 , the last term may be absorbed by the time integral of the left-hand side. Next, using (57) to compute the time derivatives, we see that the terms with ) ∂ k Z 2 are linear combinations of coefficients of type K(Z) Z ⊗∇ 2 Z, K(Z) Z ⊗∇Z and K(Z) ∇Z ⊗ ∇Z for suitable smooth functions K. Hence, using (69), (70) and remembering (62) yields we split W into low and high frequencies. For the low frequency part, we just write that by composition (70) and product law (69), For the high frequency part, we further decompose W as follows (in light of (85)): which allows to get whence, using (74), Plugging this information in (87), multiplying by 2 j( d 2 −1) , summing up on j ≤ j 0 and remembering that Z , taking advantage of (83) eventually yields : Owing to (89) and (83), the high frequencies of W also satisfy which completes the proof of (55). Proving Theorem 2.1. Having the a priori estimates (83), (90) and (91) at hand, constructing a global solution obeying Inequality (55) for any data Z 0 satisfying (54) follows from rather standard arguments. First, in order to benefit from the classical theory on first order hyperbolic systems, we remove the low frequency part of Z 0 so as to have an initial data in the nonhomogeneous Besov space B 2,1 . More precisely, we set for all n ∈ N, (92) Z n 0 (Id −Ṡ n )Z 0 withṠ n χ(2 −n D). 6 Handling the intermediate frequencies corresponding to j0 ≤ j < 0 may be done from (71) since, then, In light of e.g. [2,Chap. 4], we get a unique maximal solution Z n in 2,1 , it is easy to prove from (57) and the composition and product laws (69) and (70) that ∂ t Z 1 and (∂ t Z 2 − L 2 Z 2 ) are in L ∞ loc (0, T n ;Ḃ d 2 −1 2,1 ), and as Z n 0 belongs toḂ , hence obeys (83) for all t ∈ [0, T n ). In particular, the embeddingḂ and thus the standard continuation criterion for first order hyperbolic symmetric systems (again, refer to e.g. [2,Chap. 4]) ensures that T n = ∞. In other words, for all n ∈ N, the function Z n is a global solution of (57) that satisfies (83), (90) and (91) for all t ∈ R + . Note that, owing to the definition (92), we have Hence (Z n ) n∈N is a sequence of global smooth solutions that is bounded in the space E of Theorem 2.1. 
Proving the convergence of (Z n ) n∈N relies on the following proposition that can be easily proved by writing out the system satisfied by the difference of two solutions Z and Z ′ of (57), namely, applying the Littlewood-Paley truncation operator∆ j to this system then arguing as for getting (73) and using product laws (see the details in [11,Prop. 2]): Proposition 2.1. Consider two solutions Z and Z ′ of (57) in the space E corresponding to small enough initial data Z 0 and Z ′ 0 in B . Then we have for all t ≥ 0, From this proposition (applied to Z n and Z m for any (n, m) ∈ N 2 ), Gronwall lemma and the definition of the initial data in (92), we gather that (Z n ) n∈N is a Cauchy sequence in the space C b (R + ;Ḃ d 2 2,1 ), hence converges to some function Z in C b (R + ;Ḃ d 2 2,1 ). As the regularity is high, passing to the limit in the system is not an issue, and one can easily conclude that Z satisfies (57) supplemented with data Z 0 . That Z belongs to the smaller space E stems from standard functional analysis. Typically, one uses that all the Besov spaces under consideration satisfy the Fatou property, that is, for instance Z The only property that is missing is the time continuity with range inḂ d 2 +1 2,1 . However, this is known to be true for general quasilinear symmetric systems (see e.g. [2,Chap. 4]). Finally, the uniqueness follows from Proposition 2.1. Decay estimates and asymptotic behavior The global-in-time properties of integrability for the solution Z that have been proved so far ensure that Z(t) tends to 0 in the tempered distributional meaning when t goes to ∞. In the present section, we aim at specifying the decay rate for some Besov norms of Z, whenever the initial data satisfy a (mild) additional condition. In the pioneering works by the Japanese school in the 70ies and early 80ies (see e.g. [28,33]), it was expressed in terms of Lebesgue spaces L p for some p ∈ [1, 2). However, it is well understood now that it suffices to prescribe this condition in some homogeneous Besov spaces with a negative regularity index. In order to understand how those spaces come into play, looking first at the linearized at the linearized system (12) with no source term is very informative. Let Z be the corresponding solution. Using (30) and Fourier-Plancherel theorem yields for all t ≥ 0: This means that the high frequencies of Z decay to 0 exponentially fast, and that the low frequencies behave as those of the heat flow. More precisely, for all α ≥ 0, we have Hence, since the function x α/2 e −x is bounded on R + , we eventually get for all s ∈ R, We note that, as for the free heat equation, in order to obtain some decay for the low frequencies, a shift a regularity is needed. This is the reason why it is wise to make an additional assumption (e.g. some negative regularity) on the initial data to eventually get some decay rate for the norms we considered before for the global solutions to (50). In fact, to compare our results with the classical ones in the literature, one can introduce another family of homogeneous Besov spaces, namely the setsḂ s 2,∞ of tempered distributions z on R d satisfying Owing to the critical embedding making assumptions in spacesḂ s 2,∞ with a negative s is weaker than in the pioneering works on decay estimates [28] where the initial data were assumed to be in L 1 (this corresponds to the endpoint value σ 1 = d/2) or (see [33]) in L p for some p ∈ [1, 2) (take σ 1 = d/p − d/2). 
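The dichotomy just described is easy to visualise on the simplest two-component example. The sketch below takes, purely for illustration, the Fourier symbol of the one-dimensional linearised Euler equations with unit relaxation time and compares the exact decay of a Fourier mode (computed with a matrix exponential) with the heat-like prediction e^{-xi^2 t} at low frequencies and with the uniform exponential rate at high frequencies; the time and frequency values are arbitrary.

# Low/high frequency dichotomy for the linearised system: in Fourier variables,
#   d/dt (a^, u^) = M(xi) (a^, u^),   M(xi) = [[0, -i xi], [-i xi, -1]],
# the slow eigenvalue behaves like -xi^2 for small xi (heat-like decay) while the
# real parts equal -1/2 for |xi| > 1/2 (uniform exponential decay).
import numpy as np
from scipy.linalg import expm

def M(xi):
    return np.array([[0.0, -1j * xi], [-1j * xi, -1.0]])

t = 20.0
z0 = np.array([1.0, 0.0])                      # an arbitrary initial Fourier mode
for xi in [0.05, 0.1, 0.2, 1.0, 5.0]:
    amp = np.linalg.norm(expm(t * M(xi)) @ z0)
    slow = np.exp(-xi**2 * t)                  # heat-like prediction (low frequencies)
    fast = np.exp(-t / 2.0)                    # exponential prediction (high frequencies)
    print(f"xi = {xi:4.2f}:  |Z(t)| = {amp:.3e},  e^(-xi^2 t) = {slow:.3e},  e^(-t/2) = {fast:.3e}")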
This motivates the following statement that we shall prove in the rest of the section: Then, the global solution Z constructed in Theorem 2.1 also belongs to L ∞ (R + ;Ḃ −σ 1 2,∞ ), and there exists a constant C 0 that may be computed in terms of c 0 such that Remark 3.1. Under the (stronger) structure assumptions of Theorem 2.2, one can prove a similar result assuming only that σ 1 is in wider range (−d/2, d/2]. The inequality we eventually get is Remark 3.2. Even though the negative Besov space assumption is weaker than in e.g. [33], the obtained decay rates are the same ones. Note also that Z 0 Ḃ −σ 1 2,∞ can be arbitrarily has to be small. The linear decay rate for low frequencies turns out to be the correct one for the solution of the nonlinear system (50), and better (algebraic) decay rates hold true for the high frequencies and for the damped mode. At the same time, although the high frequencies of the solution of the linearized system (12) have exponential decay, it is not the case for the nonlinear system (50) owing to the coupling between the low and high frequencies through the nonlinear terms. We do not claim optimality of the above decay rates for the high frequencies but, for sure, it is very unlikely that they are exponential even for very particular initial data. Let us briefly explain the general strategy of the proof. The starting point is to show that the additional negative regularity is propagated for all time (with a time-independent control). Then, we shall combine it with Inequality (84) and an interpolation argument so as to exhibit a decay inequality for Z(t) . The rate that we shall get in this way turns out to be precisely the one that was expected from our linear analysis in (94). Then, interpolating with the estimate in the negative space will enable us to capture optimal decay rates for intermediate norms Z(t) Ḃs 2,1 . To the best of our knowledge the idea of combining a Lyapunov inequality with dissipation and interpolation to get (optimal) decay rates originates from the work by J. Nash on parabolic equations in [29] 7 . Implementing it on other equations in a functional framework close to ours is rather recent. The overall strategy is well explained in a work by Y. Guo and Y. Wang [19] devoted to the Boltzmann equation and the compressible Navier-Stokes equations in the Sobolev spaces setting, and Z. Xin and J. Xu in [37] used the same method to prove decay estimates for the compressible Navier-Stokes equations in the critical regularity framework. In the context of partially dissipative systems, the idea of prescribing additional integrability in terms of negative Besov norms instead of Lebesgue ones seems to originate from a paper by J. Xu and S. Kawashima [40]. Finally, let us emphasize that it is possible to do without a Lyapunov functional (like we did in e.g. [17]) but, somehow, the proof is more technical and less 'elegant'. 3.1. Propagation of negative regularity. In order to prove that the regularity inḂ −σ 1 2,∞ is propagated for all time, let us start from the equation of Z j written in the following way: Taking the L 2 scalar product with Z j and using (51) yields One can show (combine the commutator inequalities of [2, Chap. 2] with (70)) that Hence, dropping the nonnegative term in the left-hand side of (96), using Lemma A.1 and taking the supremum on j yields which, after applying Gronwall lemma, leads to Whenever Z 0 satisfies (54), the global solution of Theorem 2.1 has (small) gradient in 2,1 ). 
Hence the above inequality guarantees that Z is uniformly bounded inḂ −σ 1 2,∞ : there exists a constant C depending only on σ 1 and such that 3.2. Decay estimates for the whole solution. The starting point is Inequality (84) that is valid for all 0 ≤ t 0 ≤ t, and the fact that Being monotonous, the function L is almost everywhere differentiable on R + and Inequality (84) thus implies that Now, if −σ 1 < d/2 − 1, then one may use the following interpolation inequality: which implies, taking advantage of (97), that R. DANCHIN To handle the high frequencies of Z, we just write that, owing to (55), we have Putting (99) and (100) together and remembering that , one may thus write that for a small enough c > 0, we have Reverting to (98), one eventually obtains the following differential inequality: which readily leads to Now, replacing θ 0 with its value, and using (101), one can conclude that As regards the low frequencies of the solution, this decay is consistent with (94) in the case s = −σ 1 and α = σ 1 + d/2 − 1. 3.3. High frequency decay. From (76) and (77), we gather Hence, bounding F j according to (75) yields By time integration (viz. we use Lemma A.1), we deduce that Hence, multiplying both sides by 2 j( d 2 +1) , then summing up on j ≥ 0 and using the equivalence of the high frequency part of (79) with the norm inḂ 2,1 , we end up with Consequently, for all t ≥ 0, Furthermore, one can find a constant C 0 depending only on α 1 and c 0 such that Hence, in the end, we get 3.4. The decay of the damped mode. According to (86) and to (57), the damped mode and, according to (85), we have Therefore, applying∆ j to the above equation, taking the L 2 scalar product with W j ∆ j W and using Bernstein inequality in order to bound the last term, we get for all j ∈ Z, Let us choose j 0 ∈ Z such that C2 j 0 ≤ c/2 (so that the last term may be absorbed by the left-hand side). Then, using Lemma A.1, multiplying both sides by 2 j( d 2 −1) , then summing up on j ≤ j 0 , we end up with 8 Since d ≥ 2, the product laws (69) and (74) guarantee that Rigorously speaking the low frequencies that are here considered are lower than with our previous definition since it may happen that j0 ≤ 0. However, one may check that the high frequency decay estimate in (104) still holds if we put the threshold at some j0 ≤ 0 : the argument we used works if summing up on j ≥ j0 provided we change the 'constants' accordingly. which, combined with (103) and the fact that Z Ḃ d 2 2,1 is small implies that Similarly, we have Hence, using (105) and arguing as in the previous paragraph, we end up with In other words, the decay rate for the low frequencies of the damped mode in normḂ is the same as that of the high frequencies of the whole solution. Summing up the results of the previous paragraphs completes the proof of Theorem 3.1. On the strong relaxation limit This section is devoted to the study of a singular limit problem for the following class of partially dissipative hyperbolic systems: where, denotingĀ k ℓm A k ℓm (0) and A k ℓm (Z) A k ℓm (Z) −Ā k ℓm , we assume that for all k ∈ {1, · · · , d}: (1)Ā k 11 = 0, and A k 11 is linear with respect to Z 2 and independent of Z 1 , (2) A k 12 and A k 21 are linear with respect to Z 1 and independent of Z 2 , (3) A k 22 is linear with respect to Z, (4) Condition (SK) is satisfied by the pair (A(ξ), B) with A(ξ) defined in (13), at every point ξ ∈ R d . The linearity assumption is here just for simplicity as well as the fact that there is no 0-order nonlinear term. 
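Condition (SK) required in point (4) can be tested mechanically in a rank form, in the spirit of the computation carried out in the proof of Lemma A.3: the matrices B, BA(xi), ..., BA(xi)^{n-1}, stacked on top of each other, must have full rank n. The sketch below, in which both the 2 x 2 example and the frequency value are illustrative only, does this for the symbol of the one-dimensional linearised Euler system with relaxation.

# Rank test for Condition (SK) on the example A(xi) = xi * [[0,1],[1,0]], B = [[0,0],[0,1]].
import numpy as np

def sk_rank(A, B):
    """Rank of the stacked family B, BA, ..., BA^{n-1}; equals n when (SK) holds."""
    n = A.shape[0]
    blocks = [B @ np.linalg.matrix_power(A, k) for k in range(n)]
    return np.linalg.matrix_rank(np.vstack(blocks))

xi = 1.0
A = xi * np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
print(sk_rank(A, B))   # prints 2 = n, so the condition holds at this frequency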
At the same time, assuming that A k 12 and A k 21 (resp. A k 11 ) only depend on Z 1 (resp. Z 2 ) is very helpful, if not essential. We shall see that it is satisfied by the compressible Euler equations written in terms of the sound speed (see (5)). We want to study the so-called 'strong relaxation limit', that is whether the global solutions of (107) constructed before tend to satisfy some limit system when ε goes to 0. A hasty analysis suggests that the part of the solution that experiences direct dissipation, namely Z ε 2 with the notation of the previous sections, tends to 0 with a characteristic time of order ε and that, consequently, Z ε 1 tends to be time independent (since, for all k ∈ {1, · · · , d}, we haveĀ k 11 = 0 and A k 11 is independent of Z 1 ). To some extent this will prove to be true but, even for the simple case of the linearized one-dimensional compressible Euler equations, the situation is more complex than expected. Indeed, consider (108) ∂ t a + ∂ x u = 0, In the Fourier space, this system translates into d dt • In low frequencies |ξ| < (2ε) −1 , the matrix A(ξ) of this system has the following two real eigenvalues: For ξ going to 0, we observe that This means that one of the modes of the system is indeed damped with coefficient ε −1 but that the overall behavior of solutions of the system is like for the inviscid limit (or for the heat equation with vanishing diffusion). • In high frequencies |ξ| > (2ε) −1 , the matrix A(ξ) has the following two complex conjugated eigenvalues: Clearly, Re λ ± (ξ) = (2ε) −1 and Im λ ± (ξ) ∼ iξ for ξ → ∞. Hence, there is indeed dissipation with characteristic time ε for the high frequencies of the solution. The 'low frequency regime' is expected to dominate when ε → 0, as it corresponds to |ξ| ε −1 . Consequently, the overall behavior of System (108) might be similar to that of the heat flow with diffusion ε, and one can wonder if the high relaxation limit is analogous to the inviscid limit 9 . However, we have to keep in mind that the low frequencies of the 'damped mode' (that here corresponds to the combination u + ε∂ x a) undergo a much stronger dissipation. This is of course an element that one has to take into consideration. Based on this simple example, it looks that in order to investigate the high relaxation limit, it is suitable to use a functional framework that non only reflects the different behavior of the low and high frequencies (with threshold being located around ε −1 ) but also emphasizes the better properties of the damped mode. 4.1. A 'cheap' result of convergence. Let us revert to the general class of Systems (107) supplemented with initial data Z ε 0 . The structure assumptions that we made at the beginning of the section enable us to apply Theorem 2.2. In this Subsection, we shall take advantage of it and of elementary scaling considerations so as to establish that both Z ε 1 − Z ε 1,0 and Z ε 2 converge strongly to 0 for suitable norms. The reader may refer to the next subsection for a more accurate result. The starting observation is the following change of time and space scale: Clearly, Z ε satisfies (107) if and only if Z satisfies (57). The following property of homogeneous Besov norms is well known (see [2,Chap. 2]): By adapting the proof therein, one can prove that where we have used the notation Putting together (110), (111), the change of unknowns (109) and Theorem 2.2 readily gives the following global existence result that is valid for all ε > 0. Theorem 4.1. 
There exists a positive constant α such that for all ε > 0 and data Z ε System (107) supplemented with initial data Z ε 0 admits a unique global-in-time solution Z ε satisfying the inequality . 4.2. Connections with porous media-like equations. In order to exhibit richer dynamics in the asymptotics ε → 0, one may perform the following 'diffusive' rescaling: . Dropping the exponents ε for better readability, we get the following system for ( Z 1 , Z 2 ): From the second line, one can expect In order to find out what could be the limit system for Z 1 , let us systematically express Z 2 in terms of W and Z 1 by means of (121) in the first line of (120). We get Introducing the following second order operator: the above equation may be rewritten: where, Q 1 , Q 2 (resp. T 1 , T 2 ) are bilinear (resp. trilinear) expressions that may be computed in terms of the coefficients of the matrices A k 11 , A k 12 and of L 2 , and Consequently, if (121) is true, then we expect Z 1 to tend to N with N satisfying Note that, as a consequence of Lemma A.3 in Appendix, and since we assumed both Condition (SK) and thatĀ k 11 = 0 for all k ∈ {1, · · · , d}, (125) is a quasilinear (scalar) parabolic equation. Before justifying the above heuristics in the general case, let us again consider the compressible Euler equations, that is Under the isentropic assumption (127) P (z) = Az γ with γ > 1 and A > 0, the above system enters in the class (107) if reformulated in terms of (c ε , ̺ ε ), where Indeed, we get: So, if we setc (γA) 1 2 γ (̺) γ , then Conditions (1) to (4) below (107) are satisfied with Z ε 1 = c ε −c and Z ε 2 = v ε . Now, performing the diffusive rescaling: In light of the second equation, it is expected that ∇(P ( ̺ ε )) + ̺ ε v ε → 0 when ε → 0, and thus that ̺ ε converges to some solution N of the porous media equation: The general result we shall prove for Systems (107) reads as follows for the particular case of the isentropic Euler equations 10 : with pressure law (127) and initial data ( 2,1 . There exists α > 0 independent of ε such that if 2,1 ) satisfying in addition 2,1 ) satisfying for all t ≥ 0, Finally, if one denotes by ( ̺ ε , v ε ) the rescaled solution of the Euler equations defined through (130) and assumes in addition that Proof. Let us assume for a while that ε = 1 so that one can readily take advantage of Theorem 2.3. As a first, we want to translate Theorem 2.3 in terms of ̺, where c and ̺ (resp.c and̺) are interrelated through (128). On the one hand, Inequality (61), the property of interpolation in Besov spaces and Hölder inequality with respect to the time variable imply that c −c On the other hand, using the fact that the composition inequality (70) is actually valid for all positive Besov exponents (see e.g. [2][Chap. 2]), we may write that . Therefore, the last term of ∂ t v may be 'omitted' in Inequality (61), and we get Now, for general ε > 0, performing the rescaling (109) and remembering the equivalences (110) and (111) gives the first part of Theorem 4.2. After performing the diffusive rescaling (130), the rescaled pair ( ̺ ε , v ε ) satisfies Thanks to (110), the bound for the last term in (136) translates into which completes the proof of (134). Proving that ̺ ε tends to some solution N of (132) may be done exactly as in the general case presented below in Theorem 4.3. Let us finally turn to the study of the strong relaxation limit in the general case. The main result we shall get reads as follows: Theorem 4.3. 
Assume that 11 d ≥ 2 and consider a system of type (120) for some ε > 0. Let the structure hypotheses listed below (107) be in force. There exists a positive constant α (independent of ε) such that for any initial data N 0 ∈Ḃ for (120) satisfying System (125) admits a unique solution N in the space 2,1 ), satisfying for all t ≥ 0, and System (120) has a unique global-in-time solution Z ε in C(R + ;Ḃ where W ε has been defined in (121). 11 The one-dimensional case is tractable either under specific assumptions on the nonlinearities that are satisfied by the Euler equations, or in a slightly different functional framework. More details may be found in [12]. If, in addition, then we have Proof. That (120) supplemented with initial data Z 0 admits a unique global solution satisfying (141) follows from Theorem 2.2 after suitable rescaling. Indeed, if we make the change of unknowns: then we discover that Z satisfies (120) if and only ifŽ is a solution to (57). Then, taking advantage of the equivalence of norms pointed out in (110) and (111) However, combining Inequality (141) (without the last term of course) with (115) ensures that Z 1 Hence the last term of (144) is of order ε in L 1 (R + ;Ḃ d 2 2,1 ), and W does satisfy (141). In order to prove the convergence of Z 1 to N , let us first verify that S defined in (124) is of order ε in L 1 (R + ;Ḃ and using (141) and the smallness of the initial data thus yields (145) S Let us next briefly justify that any data N 0 satisfying (138) gives rise to a unique global solution N of (125) in C b (R + ;Ḃ 2,1 ) satisfying (140). In fact, since the operator A is strongly elliptic, the parabolic estimates in Besov spaces with last index 1 recalled in Proposition A.1 ensure that any smooth enough global solution N satisfies for all t ≥ 0, dτ. Using the stability of the spaceḂ + αε for all t ≥ 0, which completes the proof of the theorem. We end this section with a few remarks. The first one is that, for small ε, it is natural to modify the definition in (121) so as to have a damped mode that is expressed in terms of Z 2 and N . If we set then we have In order to bound the right-hand side, one can observe that Note also that, since Z 1 is bounded in C b (R + ;Ḃ Finally, observe that if we introduce the following rescaled solution of the limit system: Appendix A. The following classical result (see the proof in e.g. the Appendix of [10]) has been used a number of times in this text. Then, for all t ∈ [0, T ], we have We frequently took advantage of the fact that applying derivatives or, more generally, Fourier multipliers on spectrally localized functions is almost equivalent to multiplying by some constant depending only on the Fourier multiplier and on the spectral support. This is illustrated by the classical Bernstein inequality that states (see e.g. [2, Chap. 
2]) that for any R > 0 there exists a constant C such that for any λ > 0 and any function u : R d → R with Fourier transform u supported in the ball B(0, Rλ), we have The reverse Bernstein inequality asserts that, under the stronger assumption that u is supported in the annulus {x ∈ R d : rλ ≤ |x| ≤ Rλ} for some 0 < r < R, then we have in addition, A slight modification of the proof of (149) allows to extend the result to any smooth homogeneous multiplier : denoting by M a smooth function on R d \ {0} with homogeneity γ, there exists a constant C such that for any λ > 0 and any function u : R d → R with Fourier transform u supported in the annulus {x ∈ R d : rλ ≤ |x| ≤ Rλ}, we have In the last section, in order to study the convergence to the limit system, we used maximal regularity estimates in Besov spaces with last index 1 for parabolic system. These estimates are well known for the heat equation (see e.g. [2,Chap. 2]). Below, we extend them to semi-groups generated by strictly elliptic homogeneous multipliers in the following meaning: we consider functions A ∈ C ∞ (R d \ {0}; M n (C)) homogeneous of degree γ, such that the matrix A(ξ) is Hermitian and satisfies for some c > 0: (152) A(ξ)z · z ≥ c|ξ| γ |z| 2 , ξ ∈ R d \ {0}, z ∈ C n . In order to prove (158), it suffices to establish that |g λ (x)| ≤ C(1 + |x| 2 ) −d e −c 0 λ , x ∈ R d , λ > 0. Now, integrating by parts, we get Of course, the integral may be restricted to Supp φ which is a compact subset of R d \ {0}. Owing to (152), on this subset, there exists a positive constant c 0 such that all the real parts of the eigenvalues of A(ξ) are bounded from below by 2c 0 . Now, since the differential of the exponential map may be computed by the formula D e X · H = Hence, there exist two constants C and C ′ such that By induction, one can get similar estimates for higher order derivatives of ξ → e −λA(ξ) , which eventually yields |h λ (x)| ≤ Ce −c 0 λ , x ∈ R d , λ > 0, and completes the proof. Remark A.1. In the case p = 2 one can work out a shorter proof, based on the Fourier-Plancherel theorem. However, it is interesting to point out that the very same result holds for any value of p in [1, ∞] including 1 and ∞, and with a constant independent of p. The following lemma ensures that in the setting of System (57), if both Condition (SK) and A k 11 (V ) = 0 for all k ∈ {1, · · · , d} are satisfied, then the second order differential operator A defined in (122) is indeed strictly elliptic in the sense of Proposition A.1, with γ = 2. with A 12 ∈ M n 1 ,n 2 (C), A 21 ∈ M n 2 ,n 1 (C), A 22 ∈ M n 2 ,n 2 (C) and B 22 ∈ M n 2 ,n 2 (C). Suppose also that B 22 is positive. Then, B 22 is invertible and the following two properties are equivalent: (1) The matrix A 12 B −1 22 A 21 is a n 1 × n 1 positive matrix. Proof. The invertibility of B 22 being obvious, let us first assume that A 12 B −1 22 A 21 is positive. Then, the rank of A 21 must be equal to n 1 and so does the rank of B 22 A 21 . Now, we observe that Hence, the rank of B BA is equal to n 1 + n 2 = n, and Condition (SK) is thus satisfied. Conversely, if A has the special structure (159) then an easy induction reveals that the bottom left block of any positive power k of A ends with A 21 . The same property clearly holds for BA k that thus looks like for some C k , D k ∈ M n 2 (C). · Hence, if we assume in addition that Condition (SK) is satisfied, then we must have rank(B 22 A 21 ) = n 1 , and thus rank(A 21 ) = n 1 , too. Now, since tĀ 12 = A 21 , we have for all z ∈ C n 1 ,
2022-09-27T01:16:05.870Z
2022-09-26T00:00:00.000
{ "year": 2022, "sha1": "f5af7f5871a5ac940458f52e5494decdabe04091", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f5af7f5871a5ac940458f52e5494decdabe04091", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
110234390
pes2o/s2orc
v3-fos-license
Design and Verification of proposed Operation Modes of LLC Converter The subject is a comprehensive research on issues of power electronic converters operating at very high switching frequency. This is a group of inverters applicable particularly in the power supply, respectively automotive products. The main purpose of the use of high switching frequency is a high value of power density and reduced weight and dimension parameters of the device. DOI: http://dx.doi.org/10.5755/j01.eee.18.8.2606 I. INTRODUCTION Purpose the paper is verification of operation modes and the influence of parasitic elements proposed of LLC converter with output power of 1 kW and switching frequency of 500 kHz for telecommunications distributions.LLC resonant converter is multi-resonant converter and is characterized by its unique DC -gain characteristic, which has two resonant frequencies (f 0 and f p ).This converter has several advantages compared to standard serial LC resonant topology.One of them is possibility of stable regulation of output voltage in a wide range of input voltages together with the change of output power from 1% to 100%.The next advantage is achievement of ZVS switching mode during various operational modes.LLC resonant converter is composed of three functional parts (Fig. 1).It deals about pulse generator, resonant circuit with high-frequency transformer and rectifier with capacitive filter. II. OPERATION MODES Operation of LLC converter in different operational modes is described by DC gain characteristic, which should be divided into ZVS and/or ZCS region [1].ZVS region in dependency on the switching frequency can be further divided into:  Region with switching frequency equal to resonant (f s = f 0 );  Region with switching frequency higher than resonant (f s > f 0 );  Region with switching frequency lower than resonant (f s < f 0 ).According to the operational modes of resonant converters the operation of LLC resonant converter is rather difficult [2].The principal waveforms of transformer and output diode during each operating mode are shown in Fig. 2. The impedance of series resonant circuit at the resonant frequency is equal to zero.Therefore the reflected output voltage is equal to the input voltage, what is described by the unity of voltage gain thus the circuit then operates optimally.LLC resonant converter can achieve gain greater, less or equal to 1.If the switching frequency is less than the resonant frequency, magnetization inductance is involved into the resonance of the circuit so the converter can deliver higher gain. III. DESIGN PARAMETERS OF THE MAIN CIRCUIT Target of the design is to determine the active and passive elements of the proposed converter.Is the need estimated efficiency, determination of the maximum and minimum input voltage, determination of maximum and minimum voltage gains, determinations of transformer turns ratio, determination of equivalent load resistance and design of the resonant network.The final design of LLC power stage only Design and Verification of proposed Operation Modes of LLC Converter the three parameters are necessary to be optimally chosen:  Ratio of magnetizing and resonant inductance where L m is magnetizing inductance and L r is resonant inductance. where n is transformer ratio and R L is equivalent load. In terms of design it is important to made the compromise in selection of the inductance ratio m.Fig. 
8 is showing the simulation experiment at 25% of load at the input voltage U IN = 425V.In the picture shows that even at the reduced output power the switching transistors are maintaining excellent operating characteristics of the ZVS mode.In the case of output diodes the ZCS switching character is also still achieved. Fig. 9 shows the simulation experiment, when LLC resonant converter operates at minimal supply voltage U IN = 325V and at full load condition.Simulation experiment confirmed proper design of converter.Transistor's current has sinusoidal shape until magnetizing inductance became participating in resonance with other circuit parameters.Output diodes are operating in discontinuous ZCS mode.The last experiment has been done at the input voltage U IN = 325V and at output power P OUT = 252W.As can be seen in this operation mode the inverter is still able to achieve ZVS conditions for the main transistors.Soft commutation with ZCS conditions are also achieved for output rectifier diodes, which are operating in discontinuous mode. V. CONCLUSIONS This paper describes the design of the LLC resonant converter, which is done by means of fundamental harmonic approximation (FHA).Output power of proposed converter is 1kW output voltage 48V and switching frequency is 500 kHz.Performance of converter at different operational conditions were verified through simulation analysis by utilization of OrCAD PSpice software.The simulation results of multiple parametrical experiments were obtained and consequently evaluated into graphical interpretation of efficiency of proposed converter.Future work will be focusing on the design of the physical samples, verification activities and influence of the parasitic components on the operation of LLC converter. Fig. 2 . Fig. 2. Dependency of parallel resonant frequency (fp), ratio of f0/fp and IRMS on the value of m. Fig. 3 . Fig. 3. Simulation model of main circuit of proposed LLC converter.Input/Output parameters of LLC resonant:  Input voltage: 425 V;  Output voltage: 48 V;  Output current: 21 A;  Output power: 1008 W;  Switching frequency ≈ 500 kHz.Simulation waveforms in the Fig.4,Fig.5, Fig.6.confirm the theoretical assumptions.Another simulation test verified the properties of LLC resonant converter at change load and Fig. 4 . Fig. 4. Time waveforms during the simulation experiment at fs = f0 (from topdriving signals of transistors X1, X2, voltage on the primary side of transformer, current on the primary side of transformer, currents of the output diodes D1, D2). Fig. 5 . Fig. 5. Time waveforms during the simulation experiment at fs > f0 (from topdriving signals of transistors X1, X2, voltage on the primary side of transformer, current on the primary side of transformer, currents of the output diodes D1, D2). Fig. 6 . Fig. 6.Time waveforms during the simulation experiment at fs < f0 (from topdriving signals of transistors X1, X2, voltage on the primary side of transformer, current on the primary side of transformer, currents of the output diodes D1, D2). Fig. 7 Fig. 
7 presents the results from the simulation experiment, when LLC resonant converter operates at full load (P OUT = 1008W) when input voltage U IN = 425V.In this mode, transistors are operating with ZVS conditions and output diodes with character of ZCS switching.The unwanted effect of reverse recovery was eliminated by utilization of Schottky rectifier diodes.Fig.8isshowing the simulation experiment at 25% of load at the input voltage U IN = 425V.In the picture shows that even at the reduced output power the switching transistors are maintaining excellent operating characteristics of the ZVS mode.In the case of output diodes the ZCS switching character is also still achieved.Fig.9shows the simulation experiment, when LLC resonant converter operates at minimal supply voltage U IN = Fig. 10 . Fig. 10.Time waveforms of voltages and currents during the simulation experiment: UIN = 325V, POUT = 252W (from top -transistor X1a X2, transformer primary side, output diode D1 and D2).After simulation experiments have been made, we have made multiple graphic interpretation of converter's efficiency in dependency on converter's output power and input voltage.These results are good indicators of converter design and are good starting point for experimental Fig. 11 . Fig. 11.Efficiency of the proposed converter in dependency on output power and input voltage (UIN = 425 V, UIN = 325V).Time waveforms voltage and current may be deformed influence parasitic components.At higher switching frequencies the sensitivity of the parasitic leakage inductances and capatities becomes serious problem.The parasitic components inclusive in the simulation model of main circuit of LLC resonant converter are:  C OSS : output capacitance of MOSFET;  C TR : transformer winding capacitance;  C jc : junction capacitance of rectifier diode;  L lks : leakage inductance at transformer secondary side.The simulation model LLC resonant converter include parasitic components is shown in Fig. 12. Fig. 12 . Fig. 12. Simulation model of main circuit of proposed LLC resonant converter with parasitic components. Fig. 13 . Fig. 13.Time waveforms during the simulation experiment LLC resonant converter inclusive parasitic components at fs = f0 (from topdriving signals of transistors X1, X2, voltage on the primary side of transformer, current on the primary side of transformer, currents of the output diodes D1, D2). Fig. 14 . Fig. 14.Time waveforms during the simulation experiment LLC resonant converter inclusive parasitic components at fs > f0 (from topdriving signals of transistors X1, X2, voltage on the primary side of transformer, current on the primary side of transformer, currents of the output diodes D1, D2). Fig. 15 . Fig. 15.Time waveforms during the simulation experiment LLC resonant converter inclusive parasitic components at fs < f0 (from topdriving signals of transistors X1, X2, voltage on the primary side of transformer, current on the primary side of transformer, currents of the output diodes D1, D2).
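Within the first harmonic approximation, the design procedure outlined above reduces to a handful of closed-form expressions for the resonant tank. The snippet below is a minimal sketch of that calculation: it returns the two resonant frequencies f0 and fp and the normalised tank gain obtained by impedance division of the FHA equivalent circuit, in which the magnetising inductance is loaded by the reflected resistance Rac = 8 n^2 RL / pi^2. The component values are placeholders chosen only so that the series resonance lands near 500 kHz; they are not the parameters of the converter designed above.

# Minimal FHA sketch of an LLC resonant tank: resonant frequencies and normalised gain
# from impedance division of the equivalent circuit (series Lr-Cr, then Lm parallel
# with the reflected load Rac = 8 n^2 RL / pi^2). Component values are placeholders.
import numpy as np

Lr, Lm, Cr = 25e-6, 125e-6, 4e-9        # resonant inductance, magnetising inductance, capacitance
n, RL = 7.0, 2.3                        # turns ratio and DC load resistance (placeholders)

f0 = 1.0 / (2 * np.pi * np.sqrt(Lr * Cr))           # series resonance
fp = 1.0 / (2 * np.pi * np.sqrt((Lr + Lm) * Cr))    # parallel (second) resonance
Rac = 8.0 * n**2 * RL / np.pi**2                    # reflected AC load

def gain(fs):
    w = 2 * np.pi * fs
    Zs = 1j * w * Lr + 1.0 / (1j * w * Cr)          # series branch impedance
    Zp = (1j * w * Lm * Rac) / (1j * w * Lm + Rac)  # Lm in parallel with Rac
    return abs(Zp / (Zp + Zs))

print(f"f0 = {f0/1e3:.1f} kHz, fp = {fp/1e3:.1f} kHz")
for fs in np.array([0.5, 0.8, 1.0, 1.2, 1.5]) * f0:
    print(f"fs/f0 = {fs/f0:.1f}:  gain = {gain(fs):.3f}")

Consistently with the description of the operation modes, the gain returned at fs = f0 equals one, since the series branch impedance vanishes at that frequency.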
2019-04-13T13:04:54.524Z
2012-10-26T00:00:00.000
{ "year": 2012, "sha1": "36ae57c84224735abb6ce0b2627281a46bc58301", "oa_license": "CCBY", "oa_url": "https://eejournal.ktu.lt/index.php/elt/article/download/2606/1908", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "36ae57c84224735abb6ce0b2627281a46bc58301", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
15915709
pes2o/s2orc
v3-fos-license
Chaos on the conveyor belt The dynamics of a spring-block train placed on a moving conveyor belt is investigated both by simple experiments and computer simulations. The first block is connected by spring to an external static point, and due to the dragging effect of the belt the blocks undergo complex stick-slip dynamics. A qualitative agreement with the experimental results can only be achieved by taking into account the spatial inhomogeneity of the friction force on the belt's surface, modeled as noise. As a function of the velocity of the conveyor belt and the noise strength, the system exhibits complex, self-organized critical, sometimes chaotic dynamics and phase transition-like behavior. Noise induced chaos and intermittency is also observed. Simulations suggest that the maximum complexity of the dynamical states is achieved for a relatively small number of blocks, around five. I. INTRODUCTION Spring-block type systems are successfully used for modeling various complex systems which exhibit selforganization. Usually, the static friction coefficients exceed the dynamical ones, in systems composed of two or more blocks connected by linear springs the coexistence of these friction types can lead to avalanches, nonlinear dynamics or Self-Organized Criticality (SOC) [1][2][3][4]. Spring-block type models have broad interdisciplinary applications (for a recent review see [5]), and prove to be very useful in describing many complex phenomena in different areas of science. For example, they have been applied successfully to explain elements of the Portevin-Le Chatelier effect [6], the fragmentation obtained in drying granular materials in contact with a frictional surface [7][8][9], to understand the formation of self-organized nanostructures produced by capillary effects [10,11], to model magnetization processes and Barkhausen noise [12], to describe glass fragmentation [13], and even for modeling highway traffic [14,15]. The first members of this model-family were presented in 1967 by R. Burridge and L. Knopoff [16] to explain the empirical Guttenberg-Richter law of the size distribution of earthquakes [17]. One of their original models (called BK model here) is composed of a chain of many blocks connected with springs and two planes, to model the sliding of tectonic plates (for a recent review see for example [18]). The blocks can slide with friction on the bottom surface and they are all connected by springs to the upper plane which is dragged with a constant velocity. This model presents stick-slip dynamics, and the size distribution of the slipping events exhibits a power-law scaling. The system used here has also been proposed by Burridge and Knopoff [16], and later it was referred as "train model" [26]. As sketched in Fig. 1, a spring-block chain is placed on a platform (conveyor belt) that moves with constant velocity. The first block is connected by a spring to a static external point. As a result, due to the dragging effect of the moving platform, the blocks will undergo complex stick-slip dynamics. The stick-slip dynamics of one block has been studied by analytical [27][28][29][30], numerical [28][29][30][31] and experimental methods [28,32]. The results for a chain composed of several blocks are contradictory from the point of view of self-organized criticality [26,29,33,34]. Undoubtedly, chaotic dynamics has been found in such systems by many authors [22,[35][36][37]. 
In these systems nonlinearity is introduced via friction forces, and several models have been studied from velocity-weakening [26,27,35,36] to state-dependent [31] friction forces. It has been shown that with velocity-weakening friction forces (and a constant static friction), the case of two blocks is the simplest autonomous spring-block system exhibiting chaos [35]. It was also argued, that for systems composed of many blocks, chaotic dynamics and SOC can coexist [33]. It needs to be mentioned that a somewhat modified version of the train model has also been investigated, where in contrast with the former model, both the first and the last blocks are connected to static points. Additional springs are also introduced. This system also exhibits periodic and chaotic behavior [38,39]. Although the proposed problem has been thoroughly investigated for one, two and many blocks, we believe that a careful experimental and detailed simulation study for an intermediate number of blocks can reveal new and interesting dynamical complexity. Comparisons between analytical, numerical and experimental results exist mainly for the one block system. An exception is the work of Burridge and Knopoff [16], where a chain of eight blocks was thoroughly investigated. The main purpose of their work was, however, to investigate the distribution of the potential energy released during avalanches, and they did not considered the problem from a dynamical systems view. Also, a general feature of previous numerical studies is that the model parameters and assumptions are arbitrarily chosen, and usually the only argument for this is to make the calculations or the numerical code simpler. Here we consider an experimental setup composed of 5 blocks, and computer simulations of a simple model for systems up to 10 blocks. In contrast to previous works, the model parameters are realistically estimated from experimental measurements. We also incorporate the surface inhomogeneities of the conveyor belt by using a Coulomb type friction force varying stochastically about a mean. This feature appears to be essential since the temporal intermittency observed in the experiments cannot be recovered with only deterministic friction. A spring force with exponential cutoffs is used, to make the force profile more realistic and to avoid the collisions of the blocks with each other. To the best of our knowledge this intermediate system size has not yet been investigated experimentally. The choice of intermediate system sizes is motivated by the expectation that this is the range where the theory of collective phenomena and of dynamical systems overlap and thus tools borrowed both from statistical mechanics and chaotic dynamics become both applicable. Despite the relatively small number of elements the system consists of, we surprisingly find a sharp phase transition-like behavior as a function of the conveyor belt velocity. Interestingly, as the size of the system is increased this transition becomes smoother, a phenomenon that can be understood through the intermittency that was revealed in this region. Tuning the level of disorder to a certain value, disorder induced phase transition-like behavior is also observed. We show that by using the Coulomb type friction forces this simple system exhibits a complex, chaotic dynamics accompanied by power-law type avalanche size distribution indicating SOC, noise induced intermittency and also noise induced chaos. The work is structured as follows. 
First, the experimental setup is presented and the results are described for a chain of five blocks (Sec. II). In Sec. III the model is detailed. Simulation results (Sec. IV) without and with disorder in the friction force are discussed. Finally, the influence of the number of blocks (up to N = 10) on the observed dynamics is investigated (Sec. V) and final conclusions are drawn (Sec. VI). II. EXPERIMENTS The experimental setup is sketched in Fig. 1. The chain is built of N = 5 black wooden blocks of mass m = 115.8 g and of dimensions 4 cm × 8 cm × 4 cm. The blocks are connected by steel springs of rest length l = 7 cm and spring constant k = 19.8 N/m. The chain is placed on the conveyor belt of a treadmill (running machine) with adjustable speed. A digital camera is placed above the chain to record the dynamics of the blocks at 24 fps (for a snapshot see Fig. 2). In order to make the last block clearly distinguishable from the others, and from the black colored conveyor belt, the top of this block is colored white. The video recordings are converted by a threshold operation into black and white image sequences, as shown in Fig. 3. Then, the position of the last block is detected on these images. The length unit on the image can be determined using the image of a tape measure, which is placed next to the chain. Accordingly, the length x_5(t) of the chain as a function of time can be obtained automatically with a simple processing program. The experiments were carried out with two values of the belt's speed: u = 0.22 m/s and u = 0.28 m/s. The measured average values of the friction forces in the experiment are F_st0 = 1.98 N for static friction and F_k0 = 0.89 N for kinetic friction. Thus, the ratio of the two types of friction forces is f_s = F_k0/F_st0 = 0.45. In both experiments the system is initialized with blocks sticking to the belt and with undeformed springs. Initially, the conveyor belt is at rest. After it is set in motion, length oscillations of very large amplitude occur over a time interval of up to 100 s. In order to ensure that these initial transients are not considered and a kind of steady state has set in, a time interval of t_trans = 200 s is discarded in the data processing. The recorded data for the position of the last block x_5(t) as a function of time is then analyzed. For the velocity u = 0.28 m/s, the recorded data reveal (Fig. 4) two qualitatively different temporal behaviors. In the domain characterized by small amplitudes in x_5(t), the dynamics of the system is nearly periodic, while in the domain of large amplitudes the last block exhibits a chaotic-looking behavior. We note again that the initial periodic-like behavior in this figure is not a result of lasting transients, since before this interval several chaotic-like regions were already present during the large-amplitude bursts following the start of the conveyor belt. The differences in the dynamics for these domains can be better understood from the Fourier Transforms (FT). We have thus computed separately the FT for the two well distinguishable regions (see Fig. 4). It is clearly observable that for the small-amplitude interval the power spectrum S of the FT has peaks at equal distances, suggesting periodic dynamics, while in the chaotic region the power spectrum presents a quasi-continuous distribution. As can be seen in Fig. 5, for u = 0.22 m/s the dynamics of the system is intermittent.
In the first 125 seconds there are large "avalanches" that result in a simultaneous movement of blocks (they slip and stick to the belt together). In this regime, the length of the chain evolves in time with fluctuations of large amplitude. Then, without any external influence, the dynamical behavior changes abruptly, and for more than one minute the system remains in a nearly steady state in which all the blocks slip continuously relative to the belt. After this time interval, the behavior of the system looks chaotic again, of the type observed at the beginning of the recorded dynamics. The power spectrum of the FT for the whole plotted interval is similar to the one observed for the chaotic regime at u = 0.28 m/s. III. A SIMPLE MODEL The model contains the same elements (blocks and springs) as the experimental setup. The main challenge in the modeling effort is, however, the quantification of the friction and spring forces and the numerical integration of the equations of motion. In the model, dimensionless units are used. We have chosen these units so that m = 1, k = 1, and l = 50. The value of l was chosen for the sake of an easier graphical visualization (spring length corresponding to 50 pixels). Motivated by the experiment, we cannot avoid using a nonzero rest length for the springs. The equation of motion for the i-th block of the chain is written as m d²x_i/dt² = F_e(Δl_i) − F_e(Δl_{i−1}) + F_f, (1) where Δl_i = x_{i+1} − x_i − l and Δl_{i−1} = x_i − x_{i−1} − l are the deformations of the springs to the right and to the left of the block, respectively (for i = 1 the left spring is attached to the static external point, and for i = N the right spring is absent), and v_ri denotes the relative velocity of the block with respect to the conveyor belt, on which the friction force depends. The elastic force, F_e, and the friction force, F_f, are defined below. The elastic force F_e of any spring is linear up to a certain deformation value, Δl_max. For higher deformations, this dependence is assumed to become exponential, with an exponent bigger (in modulus) for negative deformations (see Fig. 6). Accordingly, F_e(Δl) follows the Hookean form for |Δl| ≤ Δl_max and crosses over to an exponential dependence beyond it, where we have chosen Δl_max = 20 and the exponents b_1 = 0.2 for Δl < 0 and b_2 = 0.01 for Δl ≥ 0. With these choices the model is more realistic, since the nonlinearity of the spring forces is taken into account and collisions between blocks are avoided. At the same time the choice of the parameters b_1 and b_2 is somewhat ad hoc and we have not carried out a systematic variation of them, other than ensuring that the average chain lengths are comparable with those seen in the experiment, with an error of less than 10%. According to a review of experimental results presented in Ref. [32], "the classical friction law, where the friction force is proportional to the load, will only exist in an average sense" and it is important to take into account that in many cases the friction is determined by surface asperities, which means that "deterministic friction-velocity relations at best only exist in an average sense". In [32], the authors also argue that in the case of stick-slip dynamics on a plane surface, the friction forces may have normal distributions. In the present paper Coulomb's law of friction [40] is used with a noisy extension reflecting the surface irregularities. Both the static and the kinetic friction forces are independent of the velocity modulus. A block remains in a stick state until the resultant external force F_ex exceeds the value of the static friction force, F_st. For higher external force values the block starts to slide in the presence of the kinetic friction force F_k. We assume that the ratio of the kinetic and static friction forces, f_s = F_k/F_st, is constant.
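To make the piecewise spring force introduced above concrete, the following minimal Python sketch implements one possible realization: a Hookean force up to Δl_max that crosses over to an exponential envelope beyond it. The particular crossover expression and the way b_1 and b_2 enter are illustrative assumptions, not the authors' exact formula from Fig. 6.

```python
import numpy as np

def elastic_force(dl, k=1.0, dl_max=20.0, b1=0.2, b2=0.01):
    """Illustrative spring force: linear for |dl| <= dl_max,
    exponential growth beyond it (hypothetical crossover form)."""
    if abs(dl) <= dl_max:
        return k * dl
    b = b1 if dl < 0 else b2               # stiffer response for compression
    return np.sign(dl) * k * dl_max * np.exp(b * (abs(dl) - dl_max))

# quick look at the force profile
for dl in (-30, -20, 0, 20, 30):
    print(dl, round(elastic_force(dl), 3))
```

The stronger exponent for negative deformations makes compression costly, which is how collisions between neighbouring blocks are avoided in the model.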
The friction force F_f acting on a block depends both on sgn(v_r), where sgn is the signum function and v_r is the block's velocity relative to the conveyor belt, and on the value of the resultant external force F_ex acting on it. In our 1D setup, the orientation of the friction force is given only by its sign: F_f = −F_ex while the block sticks (v_r = 0 and |F_ex| ≤ F_st), and F_f = −sgn(v_r) F_k while the block slips (v_r ≠ 0), where v_r = v − u and v is the velocity of the block relative to the laboratory frame. Since the surface of the conveyor belt is not perfectly smooth, the already mentioned argument of Ref. [32] is implemented by using randomly distributed friction forces. This is incorporated into the model by randomly generating a new static friction force value for every different position of the block relative to the belt. These random friction force values are generated according to a normal distribution with a fixed mean F_st0 and standard deviation σ. This standard deviation (together with the integration time-step, which is however the same for all simulations) will quantify the amount of disorder introduced in the model. Although the origin of the disorder is the spatial inhomogeneity of the belt's surface, from the point of view of the dynamics the fluctuations of the friction force appear as temporal noise. We shall thus refer to σ as the noise strength. The kinetic friction force value is always automatically updated together with the static friction force value, using their fixed ratio f_s. As a result, both the kinetic and static friction forces fluctuate in time during the sliding of the particular block. In Fig. 7 we illustrate the variation of the friction force as a function of the resultant external force F_ex for one stick-slip event. Such friction forces have also proved to be useful for modeling highway traffic [5,14]. In order to use the same friction force value as in the experiments, in the dimensionless units the average value of the static friction force is F_st = F_st0 = 71.4. The ratio of the two types of friction forces is also chosen in agreement with the experiments as f_s = 0.45. Computer simulations of the model start from initial conditions similar to the experimental ones. The blocks are placed on the belt with undeformed springs between them. In this initial setup the blocks are stuck to the belt, which means that their initial velocity in the laboratory reference frame is equal to the conveyor belt's constant velocity u. Computer graphics is also used to visualize the dynamics of the blocks in real time. The numerical methods used to solve the Newton equations (1) are presented in the Appendix. The time-step of integration is fixed to dt = 0.001. In order to characterize statistically the dynamics of the chain, a parameter measuring the fluctuation of the chain length in dynamical equilibrium is introduced. The chain's length is determined by the position of the last block in the row. The disorder parameter r is defined as the standard deviation of the coordinate x_N divided by the time-averaged length ⟨x_N⟩ of the chain: r = (⟨x_N²⟩ − ⟨x_N⟩²)^{1/2} / ⟨x_N⟩. (4) This quantity takes on large values for large fluctuations, which explains the term disorder parameter. Among other relevant dynamical measures, this disorder parameter is investigated as a function of the belt velocity u (Section IV A), the noise level σ (Section IV B), and the number of blocks N (Section V). A. Results without noise In the deterministic case, as the conveyor belt is started, the whole system moves together with the belt until the first block starts to slip.
This slipping moment is determined simply by the value of the static friction force F_st and the belt velocity u. The block sticks again to the belt when its relative velocity v_r becomes zero. After that the process starts again. For small belt velocities this kind of behavior characterizes all the blocks in a self-organized manner. Specifically, the blocks are slipping together and produce "avalanches" of different sizes. Therefore, the length of the chain, defined by the position of the last block x_N, fluctuates as a function of time with largely varying amplitudes, as shown in Fig. 8.a. Furthermore, as shown in the inset of Fig. 8.a, the Fourier transform of the time series of x_N exhibits a nearly power-law behavior with an exponent −1.21, suggesting a 1/f type stochastic behavior in the long-time dynamics and a self-organized critical state. The relative velocity of the last block, v_rN, is zero when the block is stuck to the belt. As the chain starts to slip, this relative velocity rapidly increases in absolute value, as indicated in Fig. 8.b, and it shows large fluctuations according to the inset of Fig. 8.b. For a quantitative measure of avalanches, the slip-size distribution for the last block (N = 5) is computed and shown in Fig. 9.a. The slip size Δx_s is defined as the difference between the coordinate where the block starts to slip and the one where it stops relative to the belt. The obtained power-law behavior confirms the presence of SOC, which appears here together with a chaotic dynamics. The distribution of the energies dissipated during avalanches is plotted in Fig. 9.b. The results indicate again a scaling (a sign of SOC-like behavior), although the exponent α = −0.87 is different from the one obtained in the seminal work of Burridge and Knopoff [16]. This difference is a natural consequence of the different nature and size of the problems. The existence of chaos is illustrated by Fig. 10, where the natural distribution on a Poincaré section is presented. This Poincaré section is projected onto the phase plane (x_N, v_rN) of the last block, and obtained as the intersections of the system's trajectory in phase space with the plane defined by x_2 = 304.3, considering only uni-directional crossings from right to left (ẋ_2 < 0). For a better view, the high-probability states along the line v_rN = 0 (where the block is stuck to the belt) are not shown. In the full system, with a 10-dimensional phase space, the Poincaré section is 9-dimensional. Fig. 10 shows a projection of this onto a plane. It is surprising that the natural distribution on this plane is similar in appearance (both in the shape of the support and in the rather irregular distribution) to that of the chaotic attractor of a driven one-dimensional system. This indicates that the chaotic attractor of the full chain is rather low-dimensional. For higher values of the belt velocity, there are other possible scenarios as well. For instance, for u = 7 the system exhibits an asymptotically periodic stick-slip dynamics, but for u = 15 the behavior is aperiodic again. For the velocity interval 18 ≤ u ≤ 50 permanent chaotic dynamics never occurs for the investigated parameters (see e.g. Fig. 11 for u = 20) and the last block exhibits a continuous slip dynamics after a certain transient time. It can be seen in this figure that the system undergoes a transient chaotic behavior with a stick-slip dynamics before reaching a periodic attractor. The bifurcation diagram plotted in the top panel of Fig.
12 summarizes all the cases described above. Here, the velocity of the last block v_N is plotted as a function of the driving velocity u at the instances when the block passes a fixed position (x_N = 615) from the right. For every value of the control parameter u the simulation is restarted from the initial configuration described above. To construct this plot, the computer simulations were run up to t = 2 × 10^6 time units, discarding a transient time of t_trans = 10^6. Interestingly, the disorder parameter (4) allows one to distinguish between these different dynamical behaviors. Results for r(u) are plotted in the bottom panel of Fig. 12. As expected, for asymptotic chaos the disorder parameter has a high value, while in the case of periodic or quasi-periodic dynamics it is much smaller. In the transition region (15 < u < 18) the value of r falls from about 0.13 to 0.02. This jump of nearly one order of magnitude indicates a relatively sharp dynamical phase transition-like behavior. The critical speed u_c, defined as the midpoint of the transition region with which the phase transition-like behavior can be associated, is u_c = 16.5. (Note that in the interval 20 < u < 22 there are three outlier points in the lower panel of Fig. 12. These belong, however, to periodic attractors of large amplitude rather than to chaotic cases.) In the inset of Figure 12, the average lifetime τ of the chaotic transients preceding the periodic behavior is presented. This is measured using the method described in Ref. [41]. Looking at the simulation data, it can be seen that in the phase-transition region the lifetime of the transients grows considerably; a kind of critical slowing down can be observed. It can thus be concluded that for u ≲ 15 (except for the periodic windows) the chaotic dynamics is permanent, while for u > 18 it has a transient character. These transients were neglected in computing the disorder parameter, since the discarded transient time (t_trans = 10^6) is larger by more than two orders of magnitude than the average lifetime τ of the transients. B. Results in the presence of noise As described earlier, the two friction forces are linked by the relation F_k = f_s F_st, and the values of F_st are randomly updated for each new position of the blocks on the conveyor belt. The distribution of F_st has a standard deviation σ and a mean value F_st0. First, a relatively low level of noise, σ = 1, is considered (see Fig. 7). In this case, the phase transition-like behavior remains almost unchanged. This is clearly visible in the r(u) plots of Fig. 13. The periodic windows present for σ = 0 (top panel of Fig. 12) disappear, however, and consequently the disorder parameter exhibits smaller fluctuations in the chaotic regime. The noisy bifurcation diagrams of Fig. 14 also illustrate the disappearance of the periodic windows as σ increases. For belt velocity u = 7 and small noise levels, the asymptotic periodic dynamics is reached after a transient chaotic behavior. This is possible if a non-attracting chaotic set (chaotic saddle) and periodic attractors coexist (see for example [42]). Beyond a critical noise level σ_c, the chaotic transients turn into a permanent chaotic dynamics. This is called noise induced chaos [41,43-46]. Based on the results plotted in the inset of Fig. 14, the critical noise strength σ_c necessary to obtain noise induced chaos is estimated as σ_c = 0.75.
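To illustrate how the diagnostics behind the r(u) and r(σ) scans can be computed in practice, the short Python sketch below evaluates the disorder parameter of Eq. (4) from a time series after discarding a transient. The two synthetic signals are only stand-ins for an actual x_N(t) record and are not simulation output from the model above.

```python
import numpy as np

def disorder_parameter(x, t, t_trans=0.0):
    """r = std(x_N) / <x_N>, computed after discarding the transient t < t_trans (Eq. 4)."""
    x = np.asarray(x)[np.asarray(t) >= t_trans]
    return x.std() / x.mean()

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2000.0, 200001)

periodic_like = 300.0 + 5.0 * np.sin(0.7 * t)              # small, regular oscillations
irregular = 300.0 + 30.0 * rng.standard_normal(t.size)     # large, irregular fluctuations

print("r (periodic-like):", round(disorder_parameter(periodic_like, t, t_trans=200.0), 3))
print("r (irregular):   ", round(disorder_parameter(irregular, t, t_trans=200.0), 3))
```

The two outputs (roughly 0.01 versus 0.1) mimic the order-of-magnitude contrast between the periodic/quasi-periodic and the chaotic regimes reported for r in the text.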
By increasing the noise level σ in the friction force, the critical speed u_c(σ) at which the phase transition-like behavior occurs increases with σ, as suggested by Fig. 13. Turning the problem around, in Fig. 15 we plot the disorder parameter r as a function of the noise level σ for intermediate velocities u ≥ 20. In the whole range a noise induced phase transition-like behavior emerges. Moving away from u = 20, the sharp transition becomes smoother and the transition point in σ is shifted toward higher noise levels. A possible explanation of this effect is that for higher velocities a higher noise level is needed to kick the system out of the basin of attraction of the periodic attractor. The model is able to reproduce the intermittent behavior observed in the experiment and presented in Fig. 5. For example, considering σ = 2.2 and belt velocities in the phase transition-like region, the system exhibits an intermittent dynamics (Fig. 16). Since such intermittency is not present without noise, this phenomenon is called noise induced intermittency [42] in the chaos literature. This can happen again only if a chaotic saddle exists in the deterministic system [42]. We note that the critical speed u_c(σ = 2. Another facet of the effect of noise is shown in Fig. 17, where the natural distribution is projected onto the bifurcation diagram defined in Sec. IV A. The consecutive graphs are results for increasing noise levels. From these plots we learn that for σ = 1 all the periodic windows in the 0 < u ≲ 15 interval disappear, but the phase transition point hardly changes. As σ is increased further, the sharp phase transition-like behavior transforms into a smoother one (see Fig. 17.c). For σ = 10 the dynamics of the system is dominated by noise (Fig. 17.d). The results regarding the phase transition-like behavior are summarized in Fig. 18 by a detailed map of the parameter space (u, σ). Figures 13 and 15 correspond to sections of this map along the horizontal and vertical axes, respectively. The range where the system exhibits noise induced intermittency is enclosed by white dots. The fact that this range does not reach the σ = 0 line shows that intermittency cannot be recovered with purely deterministic friction forces. As the level of noise is increased, this region becomes abruptly wider for σ > 4. This value can be interpreted as the critical noise strength below which the phase transition-like region remains sharp; if it is exceeded, the transition becomes smoother. In view of all these observations we conclude that the noise strength corresponding to the experiments is about σ = 1 − 2. V. FINITE SIZE EFFECTS We can now briefly discuss the influence of the system size N on the observed phase transition-like phenomena. We know from statistical physics that observed phase transitions become sharper as the system size is increased. Although the investigated system is a non-equilibrium dynamical system and not a system in thermal equilibrium, one might expect that the phase transition-like behavior sharpens for larger system sizes. In our system, however, the opposite happens. As shown in Fig. 19, for increasing system sizes the sharp phase transition-like behavior is transformed into a smoother and smoother one, and the transitional region broadens with N. This observation can be explained through a change in the system's dynamics in the presence of noise. As we have seen before, the dynamics changes from chaotic to periodic or quasi-periodic via an intermittent behavior.
For N = 10, intermittency occurs even without noise. This is nicely illustrated by the x_N(t) graphs for different u values in Fig. 20. The intermittency found in the transition interval 27 < u < 40 explains why the sharp phase transition-like behavior disappears for larger systems. As the disorder parameter measures the fluctuations of the chain length, smaller disorder-parameter values will be observed in the intermittent region. Fig. 20 indicates that larger and larger periodic or quasi-periodic time intervals appear as the belt velocity is increased. Therefore, the sharp transition becomes smoother. For increasing system sizes, the previously observed noise induced phase transition-like behavior also becomes less evident, as illustrated by Fig. 21 for driving velocities above the transition region. Bifurcation diagrams generated for different system sizes are shown in Fig. 22. These results augment those obtained from the disorder parameter. They indicate that for N < 5 the system has only periodic or quasi-periodic attractors. The behavior of the disorder parameter does not suggest any phase transition-like behavior here. Dominant chaotic regimes appear for N ≥ 5. As the size of the system is increased, the number of periodic windows decreases, and the transition region is enlarged. Also, for larger systems the effect of noise is less pronounced. VI. DISCUSSION The dynamics of a simple spring-block chain placed on a running conveyor belt was investigated both by simple experiments and through computer simulations. Despite its simplicity, the dynamics of the system proved to be quite complex, exhibiting chaotic, periodic or quasi-periodic behavior as a function of the conveyor belt's velocity and the amount of noise in the friction forces. The experiments and computer simulations indicate that the transition from chaotic to periodic or quasi-periodic dynamics is typically realized through an intermittent chaotic state. In the chaotic regime, the avalanche-size distribution function shows a scale-free nature, indicating also the presence of SOC, along with a 1/f type of stochasticity in the chain length. Another aspect of the system's complexity is the diversity of the possible dynamical states, reflected by the intricate structure of the bifurcation diagram and by the phase transition-like behavior of the disorder parameter. The presence of noise adds fascinating phenomena to this picture, like noise induced chaos, noise induced intermittency and noise induced phase transition-like behavior. Interestingly, the computer simulations suggest that the maximal complexity of the dynamical states is achieved for a relatively small number of blocks, in the vicinity of N = 5, for σ < 2. For N < 5 there is practically no chaos in the system. With N = 5 the observed transition is the sharpest one, and for N > 5 it becomes smoother in a somewhat counter-intuitive way. It is remarkable that this collective behavior finds its explanation in terms of dynamical-systems properties. Finally, we would like to draw attention to the fact that in several recent experimental studies [48,49] performed on metallic alloys it has been observed that, just before the onset of the plastic instability, the local strain rate as a function of the corresponding force response shows a multi-periodic behavior. This is somewhat similar to the graphs shown in Fig. 11. This observation brings us back to the possibility of modeling the Portevin-Le Chatelier effect with a spring-block system [6].
Indeed, if the plastic deformation is macroscopically uniform in the absence of plastic instability, one can assume that it is governed by only a small number of collective degrees of freedom, similarly to the dynamics of the spring-block chain considered here. In this view, the studied model and the obtained results may be of interest in the field of materials science as well. (Caption of Fig. 21: The disorder parameter r as a function of the noise level σ for different system sizes N. The driving velocity is chosen to be near the upper edge of the observed transition region from chaotic to periodic/quasi-periodic dynamics. Results for a total simulation time t = 2 × 10^6 and t_trans = 10^6.) ACKNOWLEDGMENTS The useful comments and advice of Gábor Drótos and Gábor Csernák are acknowledged. The work of FJ-SZ and ZN is supported by the IDEAS research grant PN-II-ID-PCE-2011-3-0348. The work of BS was subsidized by "Collegium Talentum" of Hungary and by the Excellence Bursary of BBU. The research of BS and TT is conducted in the framework of TÁMOP 4.2.4.A/1-11-1-2012-0001 'National Excellence Program' and OTKA NK100296. Financial support for this program is provided jointly by the Hungarian State, the European Union and the European Social Fund. Appendix: Numerical method We briefly describe here the method used to integrate the Newton equations (1) and to handle the discontinuous stick-slip dynamics of the blocks. If a block is stuck to the conveyor belt, then it moves together with it at the constant velocity u. Therefore, the position of the i-th block relative to the ground is advanced with the simple update x_i(t + dt) = x_i(t) + u dt. When a block is slipping relative to the belt, the basic Verlet method, x_i(t + dt) = 2 x_i(t) − x_i(t − dt) + [F_i(t)/m] dt², is used to update its position. As can be seen, this is a third-order method, which can be extended also to the velocity space [47] as v_i(t) = [x_i(t + dt) − x_i(t − dt)]/(2 dt). The instant when the i-th block sticks to the belt is found when the relative velocity v_ri changes its sign, while the instant when the block starts to slip is defined by the sign change of F_ex − F_st. A more complicated stochastic numerical method was also developed to handle the stick-slip dynamics, but it was found not to alter the presented results significantly.
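For readers who want to experiment with the scheme, a minimal Python sketch of the integration loop is given below. It follows the stick-slip logic of the Appendix (stuck blocks ride with the belt; slipping blocks are advanced by the basic Verlet step and re-stick when their relative velocity changes sign) and uses the parameter values quoted in the text, but it simplifies two points: the exponential cutoff of the spring force is omitted, and a new static friction value is drawn only when a block re-sticks rather than for every position on the belt. It is an illustrative reimplementation under these assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# dimensionless parameters quoted in the text
N, m, k, l = 5, 1.0, 1.0, 50.0
F_st0, f_s, sigma = 71.4, 0.45, 1.0
u, dt = 10.0, 0.001                       # belt velocity chosen in the permanently chaotic regime

def spring_force(dl):
    # linear Hookean force only; the exponential cutoff of Fig. 6 is omitted in this sketch
    return k * dl

def step(x, x_prev, stuck, F_st):
    """Advance the chain by one time step with the stick-slip logic described in the Appendix."""
    x_new = np.empty_like(x)
    v = (x - x_prev) / dt
    for i in range(N):
        F_ex = -spring_force(x[i] - (x[i - 1] if i > 0 else 0.0) - l)   # left spring (static point at 0 for i = 0)
        if i < N - 1:
            F_ex += spring_force(x[i + 1] - x[i] - l)                   # right spring
        if stuck[i]:
            if abs(F_ex) <= F_st[i]:
                x_new[i] = x[i] + u * dt                                # stuck: ride with the belt
                continue
            stuck[i] = False                                            # static threshold exceeded: start to slip
        v_r = v[i] - u
        F_f = -np.sign(v_r) * f_s * F_st[i]                             # kinetic Coulomb friction
        x_new[i] = 2.0 * x[i] - x_prev[i] + (F_ex + F_f) / m * dt**2    # basic Verlet step
        v_r_new = (x_new[i] - x_prev[i]) / (2.0 * dt) - u
        if v_r * v_r_new < 0.0:                                         # relative velocity changed sign: re-stick
            stuck[i] = True
            x_new[i] = x[i] + u * dt
            F_st[i] = max(rng.normal(F_st0, sigma), 0.0)                # draw a new local static friction value
    return x_new

# initial state: undeformed springs, all blocks stuck and moving with the belt
x = np.array([(i + 1) * l for i in range(N)])
x_prev = x - u * dt
stuck = np.ones(N, dtype=bool)
F_st = np.full(N, F_st0)

chain_length = []
for _ in range(200_000):
    x_new = step(x, x_prev, stuck, F_st)
    x_prev, x = x, x_new
    chain_length.append(x[-1])

print("mean chain length after transients:", round(float(np.mean(chain_length[50_000:])), 2))
```

Storing the full x_N(t) record instead of only its mean would allow the disorder parameter, slip-size distributions and Fourier spectra discussed in the main text to be reproduced from the same loop.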
2013-04-18T10:31:35.000Z
2013-04-12T00:00:00.000
{ "year": 2013, "sha1": "6b3fc6baa2e8fcad8005cbe463d775cfe4b61895", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1304.3667", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "efbdc905ebbd55867feba7ba5a8d598b6f275fc0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Medicine" ] }
221292240
pes2o/s2orc
v3-fos-license
Norm-Controlled Inversion of Banach algebras of infinite matrices In this paper we provide a polynomial norm-controlled inversion of the Baskakov–Gohberg–Sjöstrand Banach algebra in a Banach algebra B(ℓ^q), 1 ≤ q ≤ ∞, which is not a symmetric ∗-Banach algebra. 2020 Mathematics Subject Classification. 47G10, 45P05, 47B38, 31B10, 46E30. Funding. The project is partially supported by NSF of China (Grant Nos. 11701513, 11771399, 11571306) and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2019R1F1A1051712). Manuscript received 13th March 2020, revised 10th April 2020 and 20th April 2020, accepted 20th April 2020. Introduction N. Wiener in [19] proved that if a periodic function with an absolutely convergent Fourier series never vanishes, then its reciprocal also has an absolutely convergent Fourier series. A Banach subalgebra A of a Banach algebra B having a common identity is called inverse-closed in B if A ∈ A with A^{-1} ∈ B implies A^{-1} ∈ A. For a Banach subalgebra A which is inverse-closed in B, we say that A admits a norm-controlled inversion in B if there exists a function h : R_+ × R_+ → R_+ such that ‖A^{-1}‖_A ≤ h(‖A‖_A, ‖A^{-1}‖_B) for every A ∈ A that is invertible in B, where ‖·‖_A and ‖·‖_B are the norms on A and B, respectively. N. Nikolski in [9] showed that the algebra of absolutely convergent Fourier series does not admit norm-controlled inversion in the algebra of continuous periodic functions. Using the commutator trick and the partition of the identity, J. Sjöstrand in [14] showed Wiener's lemma for C_{1,0}(Z^d). The polynomial norm-controlled inversion is studied in [6] for a differential subalgebra of a symmetric Banach algebra and in [7] for matrices in Besov algebras, Bessel algebras, Dales-Davie algebras, Baskakov-Gohberg-Sjöstrand algebras and Jaffard algebras. A. G. Baskakov in [1,2], relying on the Bochner-Phillips theorem, proved that Jaffard algebras and Baskakov-Gohberg-Sjöstrand algebras with p = 1 admit norm-controlled inversion in B(ℓ²). E. Samei and V. Shepelska in [11] showed that the convolution algebras, as subalgebras of a C*-algebra, admit an inversion controlled by a subexponential function. In [13], it is shown that a Beurling algebra admits a polynomial norm-controlled inversion in the symmetric Banach algebra B(ℓ²(V)), where V and E are the sets of vertices and edges of the graph G = (V, E), respectively, a setting whose complicated structure makes the proof of the norm-controlled inversion involved. In many applications in mathematics and engineering, the widely used algebras B of infinite matrices are the Banach algebras B(ℓ^p) for p ∈ [1, ∞], which are symmetric only when p = 2. The results in [1,2,6,7,11,13] deal with norm-controlled inversion in symmetric algebras; here, in contrast, we provide a norm-controlled inversion in a nonsymmetric algebra. In this paper, for 1 ≤ p, q ≤ ∞, r > d(1 − 1/p) and a relatively-separated subset Λ of R^d, we give a simple proof of the norm-controlled inversion of the Baskakov-Gohberg-Sjöstrand subalgebra C_{p,r}(Λ) of the nonsymmetric Banach algebra B(ℓ^q(Λ)). We expect that the method in this paper can be applied to algebras of infinite matrices having off-diagonal decay with weights other than polynomial functions. The proof of the main theorem is based on the commutator trick and the partition of the identity in [14]. Norm-Controlled Inversion To state our result on norm-controlled inversion for localized infinite matrices, we recall some concepts.
For a relatively-separated subset Λ of R d satisfying (1), we define Schur norm of an infinite matrix A = (a(λ, λ )) λ,λ ∈Λ by For any 1 ≤ q ≤ ∞, one can show that the Schur class S (Λ) is a subalgebra of the Banach algebra B( q (Λ)) and Let A = (a(λ, λ )) λ,λ ∈Λ be an infinite matrix in a BGS algebra, we define its approximation matrices A N , N ≥ 1, with finite bandwidth by We have the following properties of the algebra C p,r (Λ) for 1 ≤ p ≤ ∞ and r > 0. and let Λ be a relatively-separated subset of R d satisfying (1). Then the following statements hold. (1) The BGS algebra C 1,0 (Λ) is a subalgebra of Schur algebra S (Λ), and (2) The BGS algebra C 1,0 (Λ) is a subalgebra of the Banach algebra B( q (Λ)), and (3) The BGS algebra C p,r (Λ) is a subalgebra of the algebra C 1,0 (Λ), and (4) The BGS algebra C p,r (Λ) is a Banach algebra, and there exists a positive constant C 1 such that (5) A matrix A in C p,r (Λ) is well approximated by its truncated matrix A N , N ≥ 1, in the norm · C 1,0 (Λ) , and Proof. For 1 ≤ q ≤ ∞, a positive integer N and A ∈ B( q (Λ)), define localization operators and , where for a set I , χ I (·) denotes the characteristic function on I .
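To make the matrix norms discussed above tangible, the sketch below evaluates, for a finite matrix with polynomial off-diagonal decay, a Schur-type norm (the larger of the maximal absolute row and column sums) and a Baskakov–Gohberg–Sjöstrand-type weighted quantity built from the suprema along the diagonals. Both expressions are finite-matrix illustrations under these simplifying assumptions and are not reproductions of the paper's exact definitions for matrices indexed by a relatively-separated set Λ.

```python
import numpy as np

def schur_norm(A):
    """Max of the largest absolute row sum and the largest absolute column sum."""
    A = np.abs(np.asarray(A, dtype=float))
    return max(A.sum(axis=1).max(), A.sum(axis=0).max())

def bgs_type_norm(A, r=0.0):
    """Illustrative BGS-type quantity for a finite matrix:
    sum over diagonals k of (1+|k|)^r times the largest absolute entry on that diagonal."""
    A = np.abs(np.asarray(A, dtype=float))
    n = A.shape[0]
    return sum((1.0 + abs(kdiag)) ** r * np.diagonal(A, offset=kdiag).max()
               for kdiag in range(-(n - 1), n))

n = 64
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = 1.0 / (1.0 + np.abs(i - j)) ** 3        # polynomial off-diagonal decay

print("Schur-type norm:      ", round(schur_norm(A), 4))
print("BGS-type norm (r = 1):", round(bgs_type_norm(A, r=1.0), 4))
```

Matrices with faster off-diagonal decay (larger decay exponent, or heavier weight r) give smaller weighted norms, which is the localization property that the inverse-closedness and norm-control results quantify.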
2020-07-30T02:10:03.760Z
2020-07-28T00:00:00.000
{ "year": 2020, "sha1": "6429a2ce80e242d263032f7f6cd5e87fcd10e0e4", "oa_license": null, "oa_url": "https://comptes-rendus.academie-sciences.fr/mathematique/article/CRMATH_2020__358_4_407_0.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1ab8d7a3e921baf06d60310b48ccd67d0ccc8224", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
221265989
pes2o/s2orc
v3-fos-license
Indium-Tin-Oxide Transistors with One Nanometer Thick Channel and Ferroelectric Gating In this work, we demonstrate high performance indium-tin-oxide (ITO) transistors with the channel thickness down to 1 nm and ferroelectric Hf0.5Zr0.5O2 as gate dielectric. On-current of 0.243 A/mm is achieved on sub-micron gate-length ITO transistors with a channel thickness of 1 nm, while it increases to as high as 1.06 A/mm when the channel thickness increases to 2 nm. A raised source/drain structure with a thickness of 10 nm is employed, contributing to a low contact resistance of 0.15 {\Omega}mm and a low contact resistivity of 1.1{\times}10-7 {\Omega}cm2. The ITO transistor with a recessed channel and ferroelectric gating demonstrates several advantages over 2D semiconductor transistors and other thin film transistors, including large-area wafer-size nanometer thin film formation, low contact resistance and contact resistivity, atomic thin channel being immunity to short channel effects, large gate modulation of high carrier density by ferroelectric gating, high-quality gate dielectric and passivation formation, and a large bandgap for the low-power back-end-of-line (BEOL) CMOS application. semiconducting type channel with an electron density of ~10 19 /cm 3 and good current on/off ratio as a switch. [13][14][15][16][17] Ferroelectric gating was also reported to enhance the gate controllability. 13 Li et al. recently reported a high-performance ITO transistor with channel thickness down to 4 nm. On-current (ION) exceeding 1 A/mm was achieved on a device with 10 nm thick ITO channel at a channel length of 200 nm, demonstrating that ITO TFT can be a promising candidate for lowpower high-performance device application. 16 Further reduction of ITO channel thickness is highly demanded to further improve the immunity to short channel effects for transistor scaling. Beyond ITO, scaled devices on W-doped In2O3 (IWO) or IGZO are also being investigated now. 18,19 More importantly, TFT with oxide semiconductor as channel recently attract revived interest since it can be applied in BEOL compatible transistors for monolithic 3D integration. 20 In this work, we report ITO transistors with 1-nm and 2-nm thick channel and ferroelectric (FE) hafnium zirconium oxide (HfZrO2 or HZO) as gate insulator. Highly doped ITO with 3D carrier density (n3D) of 1.7×10 20 /cm 3 is employed, which enables the channel thickness (Tch) scaling down to 1 nm. The high polarization charge density in FE HZO [21][22][23][24][25] enhances the gate controllability so that the high carrier density can be fully depleted. A raised source/drain structure is applied so that low contact resistance (Rc) of 0.15 Ω⋅mm and low contact resistivity (ρc) of 1.1×10 -7 Ω⋅cm 2 are achieved. High ION of 1.06 A/mm is achieved on ITO transistor with Tch=2 nm and 0.243 A/mm with Tch=1 nm with sub-micron channel length. Therefore, high performance ITO transistors with low contact resistance and ultrathin channel are obtained simultaneously, which overcomes fundamental challenges of 2D semiconductors. These results suggest ITO as a promising channel material for BEOL CMOS application. Fig. 1(a) illustrates the schematic diagram of an ITO transistor with recessed 1-nm thick channel and ferroelectric gating. The gate stack includes heavily boron-doped silicon (p+ Si, resistivity < 0.005 Ω⋅cm) as gate electrode and 20 nm FE HZO/3 nm Al2O3 as gate insulator. 80 nm Ni is used for source/drain electrodes. 
The thickness of ITO underneath the source/drain electrodes is 10 nm, while the thickness of the ITO channel is 1 nm or 2 nm. Fig. 2(b) shows the X-ray diffraction (XRD) spectrum of FE HZO, confirming that the HZO crystal contains the orthorhombic phase, which gives rise to the ferroelectricity of HZO. Fig. 2(c) shows the P-V measurement of a 20 nm HZO/3 nm Al2O3 capacitor with 10 nm ITO as the top electrode and p+ Si as the bottom electrode, where the voltage is applied to the p+ Si electrode. The high polarization and the ferroelectric hysteresis loop confirm the ferroelectricity of this structure. The corresponding C-V measurement at 1 kHz on the same device is shown in Fig. S1 in the supporting information, showing a typical ferroelectric C-V hysteresis loop. The capacitance at negative voltage is lower than that at positive voltage because of the depletion of ITO as a degenerate semiconductor. Note that a maximum polarization of over 20 μC/cm² corresponds to a 2D electron density (n2D) of over 10^14/cm². The P-V measurement gives two clear indications beyond the confirmation of ferroelectricity. The first is that n2D in 10-nm ITO is higher than 10^14/cm², which is consistent with the Hall measurement in Fig. 1(c), suggesting that a recessed channel is necessary for sufficient gate control. An ITO transistor with Tch of 10 nm cannot be switched off by either conventional or ferroelectric gating, as shown in Fig. S3 in the supporting information. The second indication is that the HZO/Al2O3/ITO oxide/oxide interface has a relatively low interface trap density compared to the FE polarization density, so a gate control of n2D over 10^14/cm² can be achieved, similar to ion-liquid gating. Such gate control by ferroelectric polarization plays an important role in realizing the high-performance ITO transistor in this work. in supporting information, which is even smaller than the value obtained from TLM measurements in Fig. 1(d). Device Characterization. The thickness of the ITO was measured using a Veeco Dimension 3100 atomic force microscope (AFM) system. Electrical characterization was carried out with a Keysight B1500 system with a Cascade Summit probe station. Supporting Information The supporting information is available free of charge on the ACS Publication website. switching with a wideband gap semiconductor channel may be fundamentally different.
Such difference cannot be the result of different EOT. For example, for 50 V on 90 nm SiO2, the voltage/EOT is about 0.6 V/nm; for 13 V on 20 nm HZO/3 nm Al2O3, the voltage/EOT is about 2.4 V/nm. The difference in displacement field is only 4 times. Therefore, only EOT difference itself cannot lead to the enhancement demonstrated in this work. Considering the low current density in ITO transistors with SiO2 as gate insulator, as shown in Fig. S5(a) and S5(b), the high carrier density in ITO transistor with FE HZO as gate insulator comes from the enhancement by FE polarization. Device Variations The ITO deposition was done by sputtering so that this technology can be used for largearea fabrication process. Fig. S11 shows ID-VGS characteristics of 13 ITO transistors with channel thickness of 2 nm and channel length of 3 μm, showing similar switching characteristics. The device-to-device variation comes from variation in wet etching process. Such variation can be further improved by introducing dry etching or re-growth source/drain process. The off-state current in this work comes from the gate leakage current through FE HZO, as shown in Fig. S12. IG and ID at off-state are very similar. Therefore, the off-state current variation in this work originates from the gate leakage current variation.
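To illustrate the simple drift estimate I_D = n_2D q μ E quoted above, a short Python snippet follows. The channel length and drain bias used here are placeholder values chosen only to show the arithmetic; they are not parameters reported for a specific device in this work.

```python
q = 1.602e-19  # elementary charge, C

def sheet_density_from_current(I_D_A_per_mm, mu_cm2_Vs, V_DS, L_ch_um):
    """n_2D (cm^-2) from I_D = n_2D * q * mu * E, with E = V_DS / L_ch (simple drift estimate)."""
    I_D = I_D_A_per_mm * 10.0            # A/mm of gate width -> A/cm
    E = V_DS / (L_ch_um * 1e-4)          # V/cm
    return I_D / (q * mu_cm2_Vs * E)

# hypothetical example: 1 A/mm at V_DS = 1 V over a 0.5 um channel with mu = 26 cm^2/(V s)
n2d = sheet_density_from_current(1.0, 26.0, 1.0, 0.5)
print("estimated n_2D ~ %.2e cm^-2" % n2d)
```

With these placeholder numbers the estimate lands near 10^14 cm^-2, i.e. the same order of magnitude as the carrier densities discussed in the text, which is the point of the back-of-the-envelope relation.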
2020-08-25T01:00:51.896Z
2020-08-22T00:00:00.000
{ "year": 2020, "sha1": "355d5b1c1842a135840a7edfac68159687f42914", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2008.09881", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "355d5b1c1842a135840a7edfac68159687f42914", "s2fieldsofstudy": [ "Engineering", "Physics", "Materials Science" ], "extfieldsofstudy": [ "Physics", "Materials Science", "Medicine" ] }
113840235
pes2o/s2orc
v3-fos-license
Bidirectional Connected Control Method Applied to an Experimental Structural Model Split into Four Substructures Connected Control Method (CCM) is a well-known mechanism in the field of civil structural vibration control that utilizes mutual reaction forces between plural buildings connected by dampers as damping force. However, the fact that CCM requires at least two buildings to obtain reaction force prevents CCM from further development. In this paper, a novel idea to apply CCM onto a single building by splitting the building into four substructures is presented. An experimental model structure split into four is built and CCM is applied by using four magnetic dampers. Experimental analysis is carried out and basic performance and effectiveness of the presented idea is confirmed. Introduction Vibration suppression has been one of the biggest issues in civil structure, especially in the area possessing frequent earthquakes and typhoons such as Japan. Many mechanisms have been presented and applied to civil structures to reduce vibrations. Mass dampers and base isolators are the typical. So-called "Mass damper" utilizes resonance to absorb vibration energy of the objective structure [1]. Attaching relatively small vibratory system made of auxiliary masses, springs (or other restoring mechanism such as pendulum) and dampers to the objective structure and tuning its natural frequency close to that of the objective structure, it possesses resonance with the structure. Then vibration energy of the objective structure is transferred to the attached vibratory system and is absorbed by the dampers. "Mass damper" is so simple that it can easily be equipped to civil structures, while its vibration suppression performance is governed by the mass ratio -weight of the auxiliary mass versus weight of the objective structure. Therefore, as the applicable mass ratio remains relatively low, so the reachable vibration suppression performance also remains low. Such situation is common on bigger or taller structures such as high-rise buildings. Base isolation is another common mechanism for civil structures subjected to earthquake excitations [2]. By supporting entire structure by using bearable flexible mechanism such as rubber bearings, the natural frequency of the supported structure is dramatically dropped so that the resonance with earthquake excitation can mostly be avoided. Against to the wind excitations, however, the base-isolator hardly possess vibration suppression effect. Therefore base-isolation is not so suitable to high-rise buildings subjected to wind excitations. Inter-story dampers are widely applied for recently-built buildings in Japan. This method is essentially simple -just put dampers between vertically adjoining floors. Its effectiveness for relatively low buildings is already confirmed, while its effectiveness for high-rise buildings is deteriorated. It is because bending vibration becomes dominant in high-rise buildings and inter-story dampers are only effective against shear vibrations. To sum up these methods, it is difficult to reduce structural vibration of high-rise buildings subjected to wind and earthquake excitations. Therefore, authors focus on another method named "Connected control mechanism." Connected control mechanism (CCM) is a well-known mechanism in the field of civil structural vibration control that utilizes mutual reaction forces between plural buildings connected by dampers as damping force [3]. 
As the difference in natural frequencies among buildings increases, so the damping effect is also increased. CCM is already put into practical use in Japan. "Triton Square" is a complex of three high-rise buildings connected by two elastic viaducts. The effectiveness of CCM for life-size buildings are proved by this application [4]. Due to the restriction that CCM requires at least two buildings to obtain reaction force, however, further application of CCM has not been carried out yet. To overcome this restriction and develop CCM, a novel idea to apply CCM onto a single building is presented. First, splitting the building into four substructures that possess the same height and the same number of floors but different natural frequencies (or construct the building as an aggregate of four such substructures). To form single building, the floors of the four substructures in the same height would be coupled via elastic joints such as sliding couplers or expansion joints. Putting dampers in the joints (namely between substructures), the aggregate, four substructures coupled with dampers, forms a single building equipped with CCM. In this study, an experimental model structure split into four is built and CCM is applied by using four magnetic dampers to realize the presented idea. The model is set on a shaking table and subjected to excitations. Experimental analysis by impulse or shaking table excitation using seismic wave records is carried out and basic performance and effectiveness of the presented novel CCM is investigated. Experimental Model Structures and Magnetic Dampers As described, an experimental model structure made of four substructures are built that possess the same height and the same number of floors but different natural frequencies. In this study, two types of substructures are made and arranged in chequered pattern so that substructures possessing the same natural frequencies never be located side by side. Putting four dampers between substructures to form square, the four substructures forms single building equipped with CCM (see section 3.4). Fig.2. According to this difference, the natural frequencies of two substructures also differ. The natural frequencies of the first mode of two substructures are shown in Table 1. Figure 3 shows the outlook of the arranged substructures. Magnetic Dampers To obtain damping, eight magnetic dampers [5] are produced (two for each four couplings). Figure 4 and 5 shows the outlook of a magnet and a copper plate composing a magnetic damper, respectively. The design and parameters of the damper would be shown in the section 3.3 later. 1-Degree-Of-Freedom Models of Substructures. Prior to modeling and design dampers for CCM, the equivalent model of the structure is identified. First, modal analysis of the structures are carried out to identify the equivalent mass and stiffness. Figure 6 shows an example of modal shapes of St.A. In this study, the main target of the vibration suppression is the first mode. According to Reduced-order physical modeling method [6], 1-degree-of-freedom model denoting the first mode can be obtained. The identified parameters of the substructures are shown in Table 2. Optimal Damping Based on Fixed Point Theory Dynamical model of CCM can be described as a simple two mass-spring system connected by a coupling damper. Figure 7 shows a schematic view of the CCM model. 
"m1" and "m2" denote the equivalent masses of the substructures, "k1" and "k2" denote the equivalent stiffness and "c" denotes the damping of CCM, respectively. The entire structure is subjected to ground excitation displacement u. Fig.7 Reduced order vibration model and dynamics model The optimal tuning theory for CCM is already presented according to so-called "fixed point theory." The optimal damping coefficient is derived according to the theory [1]. The optimal damping ratio is derived as =0.104 that means the optimal damping coefficient is 31.9 [Ns/m]. Figure 8 shows the computational frequency transfer functions of each substructures connected dampers with zero, the optimal or the infinite damping. The effectiveness of the optimal tuning theory is clearly shown. Design of Magnetic Dampers Magnetic dampers utilizes electromagnetic induction when conductor such as copper plate goes across magnetic field. The theory of magnetic damper is already presented by Seto [3]. According to the theory, the damping coefficient can be obtained as The obtained damping coefficient is c=11.3 [Ns/m]. Therefore two dampers are equipped for each coupling. This means the equivalent damping coefficient for each coupling is c=22.6 [Ns/m], below the optimal damping c=31.9 [Ns/m]. This gap is caused by the experimental restriction on available copper plate. Thicker copper plate would be introduced in the near future. Figure 9 shows the schematic arrangement of substructures and magnetic dampers, while Figure 10 shows an outlook of the experimental structures with eight magnetic dampers, respectively. Experimental Evaluation The performance of the presented bidirectional CCM applied to the experimental structure with four substructures is explored by experimental analysis. Frequency transfer functions are measured by using impulse excitation onto the top of the structure, while time response against seismic excitation are measured through shaking table test using seismic wave record. Frequency Transfer Functions Using an impulse hammer, an FFT analyzer and an acceleration pickup, frequency transfer functions are measured. The pickup is located at the top of the roof of a substructure and impulse excitation is added onto the roof. Figure 11 shows examples of the measured frequency transfer functions. Due to transverse-torsional coupling, the influence of torsional mode also appears when dampers are mounted. Even though the coupled substructures possess significant damping that confirms the effectiveness of CCM clearly. Shaking Tests Using Earthquake Excitation Wave Records Putting entire system onto a shaking table, shaking tests are carried out using seismic wave records. In this research, El-Centro and Kobe seismic acceleration records are adopted. As the experimental system is fragile and its natural frequencies are far higher than the superior frequencies of the excitation records, time-and amplitude-scaling is applied so that the superior frequencies of the records would corresponds to those of the target substructures while the amplitude of the records become one-tenth of the original. Figure 12 shows examples of the measured time histories of the Y-direction acceleration of St.A at the roof. Peak accelerations are well suppressed by applying CCM. Besides, the peak acceleration responses of St.A and St.B in X or Y direction subjected to El Centro or Kobe earthquake wave record excitations are classified in Table 3. It is clearly shown that significant damping effect is achieved by applying CCM. 
Concluding Remarks In this paper, a novel idea to apply CCM onto a single building is presented. An experimental model structure split into four is built and CCM is applied by using four magnetic dampers to realize the presented idea. Modeling and design of the system are carried out according to procedures shown in previous studies. The experimental model is set on a shaking table and subjected to excitations. Experimental analysis by impulse or shaking table excitation using seismic wave records is carried out. Results of these tests showed significant effect of CCM that supports the effectiveness of the presented novel CCM.
2019-04-15T13:06:51.581Z
2016-09-01T00:00:00.000
{ "year": 2016, "sha1": "4194cbdaae258794383690f8fc5eb2bddf4e21fe", "oa_license": "CCBY", "oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/744/1/012035/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "95dc7d74ff8ef6141e6aed29b7d6506d0bfd07dd", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Engineering" ] }
256846809
pes2o/s2orc
v3-fos-license
Analysis of dynamical effects in the uniform electron liquids with the self-consistent method of moments complemented by the Shannon information entropy and the path-integral Monte-Carlo simulations Dynamical properties of uniform electron fluids are studied within a non-perturbative approach consisting in the combination of the self-consistent version of the method of moments (SCMM) involving up to nine sum rules and other exact relations, the two-parameter Shannon information entropy maximization procedure, and the ab initio path integral Monte Carlo (PIMC) simulations of the imaginary-time intermediate scattering function. The explicit dependence of the dynamic structure factor (DSF) on temperature and density is studied in a broad realm of variation of the dimensionless parameters (2 ≤ rs ≤ 36 and 1 ≤ θ ≤ 8). When the coupling is strong (rs ≥ 16) we clearly observe a bi-modal structure of the excitation spectrum with a lower-energy mode possessing a well pronounced roton-like feature (θ ≤ 2) and an additional high-energy branch within the roton region which evolves into the strongly overdamped high-frequency shoulder when the coupling decreases (rs ≤ 10). We are not aware of any reconstruction of the DSF at these conditions with the effects of dynamical correlations included here via the intermediate scattering and the dynamical Nevanlinna parameter functions. The standard static-local-field approach fails to reproduce this effect. The reliability of our method is confirmed by a detailed comparison with the recent ab initio dynamic-local-field approach by Dornheim et al. [Phys.Rev.Lett.
121, 255001 (2018)] available for high/moderate densities (rs ≤ 10). Moreover, within the SCMM we are able to construct the modes' dispersion equation in a closed analytical form and find the decrements (lifetimes) of the quasiparticle excitations explicitly. The physical nature of the revealed modes is discussed. Mathematical details of the method are complemented in the Appendix. The proposed approach, due to its rigorous mathematical foundation, can find numerous diverse applications in the physics of Fermi and Bose liquids. I. INTRODUCTION. Fermionic and bosonic three-and two-dimensional fluids of charged or neutral particles (see [1][2][3][4][5][6][7] and references therein) constitute an important class of onecomponent systems which serve to testify different theoretical models and are of significant practical importance for the interpretation and development of real experimental studies [8,9]. In this list, uniform electron fluids and, in particular, the uniform electron gas (UEG), an exotic, highly compressed neutral Coulomb system between solid and plasma phases [10], is one of the key models of the warm dense matter (WDM). This model system is of importance for our understanding of planet interiors [11,12], laser excited solids [13] or inertial confinement fusion [14][15][16][17]. The most accurate results in the WDM regime, so far, have been obtained via the first-principles methods of numerical simulations such as the quantum Monte Carlo (QMC) [18][19][20][21][22][23][24][25][26]. Despite quite accurate results for the static properties, the extraction of a similar quality QMC data for the dynamical characteristics (dynamic conductivity, optical absorption, collective excitations) is quite difficult and until recently [27] has been realized within the linear response theory. In particular, the dynamic structure factor (DSF), S(k, ω), is the central quantity in the x-ray Thomson scattering diagnostics of the WDM realized nowadays at large research facilities [28][29][30]. QMC simulations do not provide direct access to this quantity but permit to obtain reliable results with respect to the intermediate scattering function the inverse temperature in energy units. The inversion of the Laplace transform (1) for S(q, ω) can be realized via the maximum entropy method [31] or the stochastic and the generic optimizations [3,32,33]. Such a reconstruction is, unfortunately, not unique and the space of trial solutions expands with the increase of the statistical noise in F (q, τ ). Nevertheless, a number of trial solutions can be drastically reduced using a set of restrictions imposed either by several frequency moments of the spectral density [5] or relying on the exact properties of the dynamic local field correction (DLFC). This permits to reconstruct the most accurate UEG DSF [34,35]. The stochastic sampling of the trial solutions for the DLFC, however, is computationally expensive. Moreover, an accurate estimation of F (q, τ ) requires timeconsuming simulations and is limited to the temperaturedensity realm where QMC is not disabled by the fermion sign problem [36]. In addition, the lower boundary of accessible wavenumbers is limited by the system size, i.e. q ≥ 2π/L with L = (N/n) 1/3 . The algorithmic Matsubara diagrammatic Monte Carlo technique seems to be even more computationally involved [37]. 
As an alternative to the DLFC-based reconstruction [34,35], with a much lower computational demand and applicability to a much broader class of physical systems, we present here the nine-moment version of the original non-perturbative self-consistent [38,39] method of moments [40-44] complemented by the Shannon information entropy two-parameter maximization technique and other exact requirements. Within this approach the DSF sum rules known theoretically or numerically are incorporated into the analytical form of the spectral density automatically. The resulting DSF S(q, ω), the (inverse) longitudinal dielectric function ε^{-1}(q, ω), the eigenmode spectrum, and other dynamical characteristics are constructed exclusively in terms of the static structure factor (SSF), S(q) = F(q, 0), and the static dielectric function, ε(q, 0). These input data are provided here by the recent fermionic QMC simulations [24]. The respective accuracy of our approach is demonstrated and opens a path to further improvements and extensions to a broader parameter domain. A simplified version of our method was also validated against the QMC static data [45]. Preliminary steps towards the creation of the present approach were taken in [46,47]. In what follows we use the reduced temperature, θ = k_B T/E_F with E_F = (ħ²/2m)(3π²n)^{2/3}, and the density (Brueckner) parameter r_s defined by n a_B³ = (4πr_s³/3)^{-1}. Here, n is the number density of charged particles, a_B is the first Bohr radius, and E_F is the Fermi energy. In the present paper we concentrate first on the warm dense matter regime [10] with the coupling (r_s) and degeneracy (θ) parameters varying around unity (θ, r_s ∼ 1). Then we extend our studies to the strongly coupled regime defined by 10 ≤ r_s ≤ 36. The paper is organized as follows. In Sec. II we describe some details of the performed QMC simulations: we briefly mention the manifestations of the fermion sign problem in our simulations, and demonstrate the convergence of the main thermodynamic properties and the influence of the finite-size effects. The generalized self-consistent method of moments (SCMM) with the dynamical Nevanlinna function is presented and discussed in detail in Sec. III. The UEG eigenmodes and the dynamic structure factor at moderate densities (2 ≤ r_s ≤ 10) are obtained and compared to the local-field-based data. Further, in Sec. IV we present an improved version of the method. It is based on the optimization of the dynamical Nevanlinna function with additional information contained in the intermediate scattering function F(q, τ) provided by ab initio path-integral Monte Carlo (PIMC) simulations. This new combined approach allows us to study the influence of multiple correlation effects on the dynamical response of the UEG in the low-density phase (16 ≤ r_s ≤ 36) for the first time. The main conclusions and the outlook are drawn in Sec. V.

II. QMC SIMULATIONS

A. Account of Fermi-Dirac statistics in QMC simulations

In this section we briefly introduce the fermionic-propagator path integral (FP-PIMC) recently developed by Filinov et al. [24], which provides the UEG ab initio static properties that are further employed in the self-consistent method of moments (Sec. III) to recover the dynamical response. The FP-PIMC has demonstrated its efficiency in the analysis of the exchange-correlation free energy for the UEG jellium model in a broad realm of parameters: 0.1 ≤ r_s ≤ 10 and 1 ≤ θ ≤ 2.
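For orientation, the dimensionless pair (r_s, θ) can be mapped back onto a physical density and temperature. The short Python sketch below does this in Hartree atomic units; it is an illustration added here (constants rounded), not part of the original analysis.

import numpy as np

HARTREE_EV = 27.211386          # 1 Hartree in eV
BOHR_CM = 5.29177e-9            # Bohr radius in cm

def ueg_parameters(rs, theta):
    """Map (rs, theta) onto density and temperature using the definitions above:
    n * a_B^3 = 3/(4*pi*rs^3) and E_F = (3*pi^2*n)^(2/3)/2 in atomic units."""
    n_au = 3.0 / (4.0 * np.pi * rs**3)                       # electrons per a_B^3
    e_fermi = 0.5 * (3.0 * np.pi**2 * n_au)**(2.0 / 3.0)     # Hartree
    n_cc = n_au / BOHR_CM**3                                 # electrons per cm^3
    t_ev = theta * e_fermi * HARTREE_EV                      # k_B T in eV
    return n_cc, e_fermi * HARTREE_EV, t_ev

for rs in (2, 10, 16, 36):
    n_cc, ef_ev, t_ev = ueg_parameters(rs, theta=1.0)
    print(f"rs={rs:3d}: n = {n_cc:9.3e} cm^-3, E_F = {ef_ev:8.4f} eV, k_B T = {t_ev:8.4f} eV")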
In contrast to the standard high-temperature decomposition of the fermionic partition function Z_F via the bosonic propagators [48], the FP-PIMC employs the antisymmetric one, in the form of many-body Slater determinants, which already satisfies the required symmetry relations under an exchange of identical fermions. The summation over different permutation classes [48], {σ_s}, can be performed analytically in the kinetic-energy part of the N-body density matrix. As a result, the antisymmetric (fermionic) free-particle propagators (denoted in the following as "FP") between two adjacent time slices are expressed as Slater determinants of single-particle propagators, Eq. (2). To shorten the notations, we introduce the total radius vector for identical particles of the same type, R^s_p = (r^s_{1,p}, ..., r^s_{N_s,p}), where the upper index denotes the spin state s = {↑, ↓}, the first lower index counts the particle indices (1 ... N_s), and the second lower index denotes the imaginary-time argument, τ_p = pε, with ε = β/P and 0 ≤ p ≤ P. Next, we can define the space-time variable, X^s = (R^s_1, ..., R^s_P), which specifies a system microstate, i.e. a specific microscopic configuration of particle trajectories. The resulting expression for the partition function Z_F thus contains the Slater determinants, M^s_{p-1,p}, between each pair of successive imaginary times, τ_p − τ_{p-1} = ε, and, for practical applications in Monte Carlo methods, can be rewritten in an equivalent form with a new effective action S_A(p−1, p) which, along with the standard potential-energy term U, contains an additional exchange contribution W_x. Hence, the probability of microstates sampled with the new action S_A becomes proportional to the absolute value of the Slater determinants. Their degeneracy in the microstates with small spatial separations of the spin-like electrons correctly recovers the Pauli blocking effect and increases the average sign ⟨S⟩, Eq. (8), which is crucial for the numerical accuracy of the estimated physical observables (see below). A similar idea has been employed by several authors in different physical applications [49-53], including the uniform electron gas at warm dense matter conditions [10]. The change in the sign of the Slater determinants evaluated along the imaginary time, 0 ≤ τ_p ≤ β, is taken into account by extra factors, Sgn_p. Combined together they define the average sign in the fermionic PIMC and characterize the efficiency of the simulations, as the statistical error δA of the estimated thermodynamic observables, Ā = ⟨A⟩ ± δA, scales as δA ∼ 1/⟨S(N, β)⟩. The PIMC simulations become hampered by the fermion sign problem [54,55] once the statistical uncertainties are strongly enhanced due to an exponential decay of the average sign ⟨S(N, β)⟩ with the particle number N, the inverse temperature β = 1/k_B T, or the degeneracy parameter, θ = T/T_F (or χ = nλ³). The usage of the fermionic propagators, Eq. (2), permits us to partially overcome the sign problem and makes the UEG simulations feasible up to the degeneracy factor nλ³ ≲ 3 (λ being the thermal de Broglie wavelength), with the average sign staying above ⟨S⟩ ∼ 10^{-2}, see Ref. [24].

B. High-temperature factorization and the convergence tests

The next issue which strongly influences the efficiency of PIMC simulations is the discretization time step ε = β/P. The general problem is related to the inability to evaluate exactly the matrix elements of the density operator, e^{-βĤ}, due to the non-commutability of the kinetic and the potential energy operators.
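Before turning to the time-step factorization, the sign structure of the determinant propagator described above can be illustrated with a minimal numerical example. The Python sketch below is our own (arbitrary coordinates, schematic Gaussian kernel, normalisation omitted): the determinant stays positive for "diagonal" paths, changes sign when two same-spin electrons exchange between adjacent slices, and vanishes when they coalesce, i.e. the Pauli-blocking behaviour mentioned above.

import numpy as np

def fp_determinant(R_a, R_b, lam_eps):
    """Determinant of Gaussian free-particle propagators between two adjacent time
    slices (schematic form of the antisymmetric propagator, prefactors omitted).
    R_a, R_b: (N, 3) same-spin electron coordinates; lam_eps: thermal wavelength
    of a single slice."""
    d2 = np.sum((R_a[:, None, :] - R_b[None, :, :])**2, axis=-1)
    return np.linalg.det(np.exp(-np.pi * d2 / lam_eps**2))

rng = np.random.default_rng(0)
R = rng.uniform(0.0, 5.0, size=(4, 3))        # four same-spin electrons (arbitrary units)
lam = 1.0

print("diagonal (no exchange) :", fp_determinant(R, R + 0.05 * rng.normal(size=R.shape), lam))
# Swapping two particles between slices flips the dominant permutation and the sign.
R_swapped = R.copy(); R_swapped[[0, 1]] = R_swapped[[1, 0]]
print("pairwise swap          :", fp_determinant(R, R_swapped + 0.05 * rng.normal(size=R.shape), lam))
# Pauli blocking: the determinant vanishes as two same-spin electrons coalesce.
R_close = R.copy(); R_close[1] = R_close[0] + 1e-3
print("two electrons coalesce :", fp_determinant(R_close, R_close, lam))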
This non-commutability issue was elegantly solved by R. Feynman [56], who proposed to map the original quantum partition function onto a quasi-classical one at a new effective high temperature, T' = 1/ε = P·T, by employing the semi-group property of the evolution operator, e^{-βĤ} = (e^{-εĤ})^P. This idea leads to the high-temperature factorization representation (5). In the fermionic simulations the use of a larger time step (smaller P value) increases the ⟨S⟩ value and extends the applicability range of the method to a higher degeneracy [52]. To reduce the number of P factors in the density matrix we implement the fourth-order factorization scheme introduced by Chin et al. [57] and Sakkos et al. [58], Eq. (9), with the choice ε = β/P, 2t_1 + t_0 = 1, t_0 = 1/6, and K (V) being the kinetic (potential) energy operator. In order to keep the systematic errors due to the neglected high-order commutators, i.e. the terms of order O(ε⁴) in Eq. (9), which can be estimated from the Baker-Campbell-Hausdorff formula [59], smaller than the statistical QMC errors, in Fig. 1 we present the P-convergence test for the main thermodynamic properties. The results for the internal energy components (see panels a,b) are well converged already for P = 2 at temperature θ = 1 (the observed deviations are within the statistical error bars). In contrast, some P dependence is still observable in the short-range correlation part of the radial distribution function (Fig. 1c). A similar analysis performed for the static structure factor S(q) (SSF), the central quantity for the estimation of the fourth frequency moment C_4(q) (Eq. (16)), has confirmed that the factorization errors practically vanish for P ≥ 4. In summary, for the densities r_s ≥ 2 and the temperatures 1 ≤ θ ≤ 8 (θ = T/T_F), we end up with the optimal choice P = 8. In particular, for the low-density case (r_s ≥ 16) analyzed in Sec. IV C, the UEG degeneracy factor is relatively small (nλ³ ≲ 0.1) and the average sign (8) has only a weak P dependence. For r_s = 16 and N = 34 it varies within the range 0.52 ≤ ⟨S(P)⟩ ≤ 0.63 for 2 ≤ P ≤ 16.

[Fig. 1. Panels a,b: the kinetic, ⟨k⟩, and the potential energy, ⟨p⟩, per electron for 2 ≤ P ≤ 16. The employment of the fourth-order propagators already delivers converged results for P = 2 (the average value extrapolated to large P is shown by the dashed black line). Panel c: the corresponding P-convergence of the radial distribution function for the spin-unlike electrons, g_↑↓(r). Some noticeable deviations are mainly observed at smaller distances, r ≲ 10 a_B, see the inset. The correct short-range asymptotic behaviour of g(r) is reproduced only for P ≥ 8. This result is expected, as the corresponding high-temperature factorization, Eq. (9), is optimized to be accurate up to the higher-order contributions, O(ε⁴) with ε = β/P « 1, only for the internal energy [58].]

In addition, we note that simulations with a larger value of P are better suited for the reconstruction of the dynamical properties, as they deliver a more refined resolution of the intermediate scattering function F(q, τ) in the imaginary time, Eq. (1). The latter is used, in particular, for the accurate evaluation of the static density response function χ(q, 0), see Eq. (20), and in the optimized reconstruction procedure for the higher-order power moments C_6, C_8 discussed in Sec. IV B. We obtained well-converged results for χ(q, 0) for q ≤ 6q_F using both P = 8 and P = 16.
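The idea behind the high-temperature factorization can be checked on any toy Hamiltonian. The sketch below uses a symmetric second-order (Trotter) splitting, not the fourth-order Chin scheme actually employed in the paper, purely to illustrate how the factorization error decays with the number of factors P; the matrices are random stand-ins.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim, beta = 6, 2.0
K = rng.normal(size=(dim, dim)); K = K + K.T           # stand-in "kinetic" operator
V = np.diag(rng.normal(size=dim))                      # stand-in "potential" operator
Z_exact = np.trace(expm(-beta * (K + V)))

for P in (1, 2, 4, 8, 16):
    eps = beta / P
    # symmetric (second-order) splitting of a single high-temperature factor
    factor = expm(-0.5 * eps * V) @ expm(-eps * K) @ expm(-0.5 * eps * V)
    Z_P = np.trace(np.linalg.matrix_power(factor, P))
    print(f"P = {P:2d}: relative error in Z = {abs(Z_P - Z_exact) / Z_exact:.2e}")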
The integral in Eq. (20) was performed using a spline interpolation between the values of F(q, τ_p) resolved at the discrete argument values τ_p = pε. A similar spline-interpolation procedure is required in the integral (16) applied to the static structure factor S(q_n), which is defined only for the discrete set of momenta q_n = 2πn/L (n = 0, 1, ...), with N/L³ = (4πr_s³/3)^{-1} a_B^{-3}, due to the finite system size N and the periodic boundary conditions (PBC). The values of S(q) below the minimum wavenumber, q_min = 2π/L, have been complemented by the STLS theory [60], similar to the analysis presented in Ref. [24]. Some examples are presented in Fig. 2. Finally, notice that we employed the standard periodic boundary conditions with the Ewald summation procedure [61] to take into account the long-range nature of the Coulomb interaction. While this allows us to significantly reduce the finite-size effects in the static structural properties (or even make them negligible, see below), for the most important thermodynamic properties, such as the internal energy and the free energy, the corresponding scaling analysis should be conducted carefully [10,24].

C. Finite-size effects

The predictions for the system's dynamical response discussed in the next sections are based on general expressions valid in the thermodynamic limit. However, the self-consistent method of moments introduced below employs as a crucial input the static properties evaluated in finite-size simulations. Therefore, their dependence on the system size N has to be validated. This concerns, in the first place, the static density response function χ_N(q, 0) and the static structure factor S_N(q), which enter explicitly in the moments C_0(q) and C_4(q). The results of the simulations for both quantities are presented in Fig. 3a,b for N = 34, 40 and N = 50. Up to the statistical errors we cannot resolve any finite-size effects present in our data. Our results are in agreement with the previous findings [62] for lower densities (r_s < 10). Next, for r_s = 16 we validate our FP-PIMC data for χ(q, 0) against χ_ESA(q, 0) evaluated via the static dielectric function in the RPA-type representation with the static local field correction taken from the neural-net representation [63]. The agreement is excellent up to q ∼ 3q_F. In Fig. 3c we perform a similar comparison but for the lower-density cases (r_s ≥ 22). Since the neural net was trained only for 0.7 ≤ r_s ≤ 20, we notice a very reasonable agreement at r_s = 22 and observe some systematic deviations for larger r_s. The effective static approximation (ESA) results, in general, underestimate the amplitude of the main peak in χ(q, 0), while both theoretical approaches converge to the same asymptotic limit for small q given by the perfect-screening sum rule of the UEG. To conclude, even though the ESA curves slightly deviate from the exact PIMC data, the observed deviations are not large, and the ESA approach is used further as a reference approximation in which possible dynamical correlation effects in the density response are neglected. We note that this approach remains quite accurate at least for r_s ≤ 6, see e.g. Ref. [64].

III. EXTENDED SELF-CONSISTENT METHOD OF MOMENTS WITH DYNAMICAL CORRELATIONS

A. Spectral density and frequency power moments

From the mathematical point of view, the problem we solve in this work is the truncated Hamburger problem of moments, consisting in the reconstruction of a non-negative distribution density from its power moments [41-43].
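As a minimal illustration of the imaginary-time integration just described, the following sketch spline-interpolates stand-in values of F(q, τ_p) on a P = 8 grid and evaluates the static density response via Eq. (20); the model data and the normalisation convention are assumptions made only for the example.

import numpy as np
from scipy.interpolate import CubicSpline

beta, P, n = 1.0, 8, 1.0                       # inverse temperature, slices, density (model units)
tau_p = np.linspace(0.0, beta, P + 1)          # discrete imaginary times tau_p = p*eps

# Stand-in PIMC data for one wavenumber: a smooth, beta/2-symmetric decay.
F_p = 0.6 * np.cosh(4.0 * (tau_p - beta / 2)) / np.cosh(2.0 * beta)

spline = CubicSpline(tau_p, F_p)
tau_fine = np.linspace(0.0, beta, 2001)

# Static density response from the imaginary-time integral (Eq. (20), up to the
# normalisation convention): chi(q,0) = -n * int_0^beta F(q,tau) dtau.
chi0 = -n * np.trapz(spline(tau_fine), tau_fine)
print("chi(q,0) =", chi0)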
The Hamburger problem is solvable [41] if and only if the Hankel matrices [43] constructed from the moments are all non-negative. Certainly, if the distribution (spectral) density is an even function of frequency, the set of power moments and the orthogonal polynomials which serve as the coefficients of the Nevanlinna linear-fractional transformation [40,44] simplify significantly [44]. For this reason it is convenient to express all dynamical characteristics in terms of the loss function, which is non-negative by virtue of the fluctuation-dissipation theorem (FDT) and is an even function of frequency, since Im ε^{-1}(q, ω) is an odd function of ω. The loss-function frequency power moments and the characteristic frequencies determined by the sequential ratios of the power moments,

ω_j(q) = sqrt(C_{2j}(q)/C_{2j-2}(q)) ,  j = 1, 2, 3, 4 ,    (13)

will be the only construction blocks of the present approach. The odd-order moments vanish, and the set of moments we consider simplifies into {C_0(q), 0, C_2, 0, C_4(q), 0, C_6(q), 0, C_8(q)}. Notice that the frequency integral in Lindhard's formula for the polarizational stopping power of a plasma is an incomplete second moment of the above loss function. The static dielectric function, due to the Kramers-Kronig relations, is directly related to the zero-order moment, C_0(q) = 1 − ε^{-1}(q, 0), and the plasma frequency enters via the f-sum rule, C_2 = ω_p². The fourth moment, by virtue of the detailed balance condition [45], is effectively the third moment of the DSF; it can be explicitly derived from the commutation relations [65] and expressed as the sum of kinetic and exchange-correlation contributions, Eq. (16), where Φ(q) = 4πe²/q² and ω_0(q) = ħq²/2m; the exchange-correlation factor reflects the angular averaging over the momentum vector. The fourth moment thus contains two main contributions: (i) the average kinetic energy per particle ⟨k⟩ = E_kin/N [66], which in the case of a non-interacting system reduces to the Fermi integral I_{3/2}(η): ⟨k⟩_ideal = 3θ^{3/2} I_{3/2}(η)/(2β); and (ii) the exchange-correlation contribution C_I(q) with S(q) provided, e.g., by ab initio QMC simulations [67]. In the present work, in order to access the region of small wavenumbers, q ≤ 0.6 q_F, dominated by a sharp plasmon resonance (see Fig. 5), and higher values of the coupling parameter, r_s ≥ 16, we performed an independent evaluation of the SSF with the fermionic-propagator PIMC [24] and system sizes 64 ≤ N ≤ 140.

B. The self-consistent solution of the five-moment problem

We start our analysis from the non-canonical solution of the five-moment Hamburger problem [40-44], which establishes a one-to-one linear-fractional transformation, Eq. (18), between the inverse dielectric function and a non-phenomenological Nevanlinna (response) function Q_2 = Q_2(q, ω) such that lim_{z→∞} Q_2(q, z)/z = 0 (Im z > 0), see Ref. [42]. These solutions have been extensively tested against molecular-dynamics simulations of classical one-component Coulomb and Yukawa systems [38,39], with quantitative agreement achieved even within the static approximation for Q_2(q, z), Eq. (19). Since in the DSF of the above classical systems a broad extremum was observed at zero frequency, the third-derivative test for even functions [38,39] was applied to obtain the static Nevanlinna parameter h_2(q). Notice that the Nevanlinna function is directly related to the dynamic local field correction used to extend the random-phase approximation (RPA) [68], see also [39].
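The construction blocks defined above are easy to tabulate numerically. The sketch below computes the even power moments C_0 ... C_8 and the characteristic frequencies of Eq. (13) for an arbitrary toy loss function (a symmetric Gaussian doublet, chosen only for illustration and normalised so that C_0 = 1; in the actual method C_0, C_2 and C_4 come from the QMC input and the sum rules).

import numpy as np

omega = np.linspace(-60.0, 60.0, 200001)
wp = 1.0

# Toy even, non-negative "loss function": two symmetric Gaussian resonances.
L = np.exp(-(omega - 3 * wp)**2 / 0.5) + np.exp(-(omega + 3 * wp)**2 / 0.5)
L /= np.trapz(L, omega)                 # normalise so that C_0 = 1 for this example

C = {2 * j: np.trapz(omega**(2 * j) * L, omega) for j in range(5)}   # C_0 ... C_8
# Characteristic frequencies, Eq. (13): omega_j = sqrt(C_{2j} / C_{2j-2}).
for j in range(1, 5):
    print(f"omega_{j} = {np.sqrt(C[2 * j] / C[2 * j - 2]):.4f}")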
The five-moment approach with the static Nevanlinna parameter, being very accurate for classical systems, proves to be insufficient for Fermi fluids, where a tri-modal structure of the spectrum [69] and a significant shift with respect to the RPA plasmon [34] have recently been predicted. It has long been known [70] that the tri-modal spectrum (the zero-frequency mode plus two "shifted" modes) presumably should be attributed to the dynamical multipair effects in electron fluids and can be described only when the local field becomes a complex dynamic function of the energy transfer ħω; in other words, we must abandon the static approximation (19) for the five-moment Nevanlinna function and specify the high-frequency asymptotic behavior of the inverse dielectric function (IDF), which is a genuine response (Nevanlinna) function [42]. To this end we equalized the five-moment expression for the IDF to the one stemming from the nine-moment solution of the Hamburger problem, taking into consideration the sixth and the eighth frequency moments, C_{6(8)}, or the frequencies ω_{3(4)} defined in Eq. (13). Thus, we expressed the dynamic five-moment Nevanlinna function in terms of the nine-moment one and, using the same physical considerations [38,39], employed for the latter a static approximation similar to (19). This construction is presented in detail in the Appendix. Hence, the dynamic response problem was reduced to the study of only two new static characteristics, the unknown frequencies ω_{3(4)}(q). Notice that the static nine-moment Nevanlinna parameter h_4(q; ω̄) with ω̄ = {ω_1, ω_2, ω_3, ω_4} is determined by the frequencies ω_{3(4)}(q), since the frequencies ω_1(q) and ω_2(q) are uniquely defined by the SSF and by the static density response function, which is directly accessible from the intermediate scattering function:

χ(q, 0) = −n ∫_0^β F(q, τ) dτ .    (20)

We understand that the frequencies ω_{3(4)}(q), formally introduced above, are determined by the three- and four-particle static correlation functions. Ab initio QMC data for them could, in principle, be obtained through precise expressions for the higher-order moments in terms of these correlation functions, which are not yet available. The precision of the latter seems to be a problem and, as we show, to achieve quantitative agreement with the simulation data we need to possess highly precise values of the sixth and the eighth moments. This is why we determine their values by means of the Shannon information entropy maximization (EM) procedure [71-74], see also [44], or using the intermediate scattering function, see below.

D. The Shannon entropy maximization technique

We introduce the two-parameter Shannon entropy functional defined by the loss-function spectral density, Eq. (21), and resolve the corresponding maximization problem with respect to ω_{3(4)}(q), with ω_{1(2)}(q) fixed by the known sum rules. To solve the extremum conditions for the two unknown frequencies we employ the Newton-Raphson method. As the starting points in the gradient-descent method the corresponding Fermi-Dirac distribution moments have been chosen. The Hessian of the entropy (21) was studied to warrant the satisfaction of the maximization condition.

E. The eigenmodes and the dynamic structure factor: comparison to the local-field-based approach

Within our approach the properties of the eigenmodes can be directly studied via the solution of the dispersion equation, i.e. as the poles of the inverse dielectric function (18).
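The closed analytical form of the nine-moment spectral density is given in the Appendix and is not reproduced here; the following sketch only illustrates the generic step of Sec. III D, namely maximizing a Shannon entropy functional over two free parameters while the lower-order sum rules are held fixed, for an assumed two-parameter trial density (the parametrization below is our own and is not the one used in the paper).

import numpy as np
from scipy.optimize import minimize

omega = np.linspace(-12.0, 12.0, 4001)
C2_target, s0 = 4.0, 0.5        # fixed low-order sum rules of the toy spectral density

def gauss(mu, sig):
    return np.exp(-(omega - mu)**2 / (2 * sig**2)) / (sig * np.sqrt(2 * np.pi))

def trial_density(w, Omega):
    """Even two-parameter trial density with C_0 = 1 built in and C_2 fixed by
    solving w*s0^2 + (1-w)*(Omega^2 + s^2) = C2_target for the doublet width s."""
    s = np.sqrt((C2_target - w * s0**2) / (1.0 - w) - Omega**2)
    return w * gauss(0.0, s0) + 0.5 * (1.0 - w) * (gauss(Omega, s) + gauss(-Omega, s))

def neg_entropy(x):
    p = np.clip(trial_density(*x), 1e-300, None)
    return np.trapz(p * np.log(p), omega)       # minus the Shannon entropy

res = minimize(neg_entropy, x0=[0.3, 1.0], bounds=[(0.05, 0.6), (0.0, 1.5)])
print("entropy-maximising parameters (w, Omega):", res.x, " S_max =", -res.fun)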
For the inverse dielectric function (18) the corresponding algebraic dispersion equation is of the fifth order, Eq. (23); its solutions correspond to three possible eigenmodes: the diffusion (or Rayleigh) mode Ω_0(q) and two shifted modes Ω_{1(2)}(q). The intrinsically negative imaginary parts of the solutions define the decrements of the corresponding modes, ΔΩ_0(q) and ΔΩ_{1(2)}(q). We applied the present self-consistent method of moments in the nine-moment approximation (9MA) to reconstruct S(q, ω) for different sets of parameters {r_s, θ}. The performance of our approach in the WDM regime is demonstrated in Fig. 4, where it is compared to the DLFC results [34,35]. Both methods are in good quantitative agreement for weak (r_s = 2) as well as moderate (r_s = 6, 10) coupling, as they directly include the exchange-correlation contribution (16). The positions of the maxima and their broadening due to the damping are reproduced very accurately. The damping effects are intrinsically present in the 9MA solution due to the dynamical nature of the five-moment Nevanlinna function.

[Fig. 4. The dynamic structure factor S(q, ω) at three densities {r_s = 2, 6, 10} and temperature θ = 1. The frequency is normalized to the plasma frequency, ω/ω_p. The DSF plots are shifted by the value of the dimensionless wavenumber q/q_F (horizontal dotted lines), with q_F being the Fermi wavenumber, q_F = sqrt(2mE_F)/ħ. Compared are the results of the random-phase approximation (RPA), the effective static approximation (ESA) [63], the dynamic local field correction (DLFC) [34], and the present self-consistent method of moments in the nine-moment (9MA) approximation. Qualitative discrepancies observed for q ≈ 0.63q_F are due to the Shannon entropy maximization, which tends to smooth sharp energy resonances and is much better suited for the description of a broad multi-excitation continuum. Red vertical arrows indicate the frequencies Ω_{1(2)}(q) for the set of wavenumbers q/q_F = {0.6269, 1.2538, 1.8808, 2.3457, 2.9405} specified by the periodic boundary conditions for N = 34, i.e. q = sqrt(q_1² + q_2² + q_3²) with q_i = 2πn_i/L (n_i = 1, 2, ...).]

The only case where the DLFC results become qualitatively different from the 9MA ones is q ≈ 0.63q_F (r_s = 6, 10), where only a single sharp plasmon resonance, quite accurately reproduced within the RPA and the ESA, is present. At these conditions, the Shannon EM provides a class of solutions which are too smooth and, hence, any sharp resonance features, if present in the spectrum, are artificially broadened, though the spectral density still satisfies all imposed constraints, including the five lower-order moments {C_0(q), 0, C_2, 0, C_4(q)}, which are known exactly from the Monte Carlo data. To avoid such artefacts induced by the unknown higher moments C_{6(8)}, we re-evaluated the DSF with the frequencies ω_{3(4)} used as fitting parameters to reproduce the decay of F(q, τ), see Eq. (1), obtained within the fermionic PIMC [24]. Thus we found a much better agreement with the DLFC data at q ≈ 0.63q_F. We applied this idea for smaller wavenumbers beyond the DLFC-generated data. These new results are presented in Fig. 5 and clearly demonstrate the applicability of our analytical expression for the inverse dielectric function (18) even in the presence of a sharp resonance, see the DSF for q ≈ 0.39q_F in Fig. 5. The accuracy of the reconstructed S(q, ω) is justified by the agreement with F(q, τ).
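As a brief aside on the dispersion equation just mentioned: once the coefficients of the fifth-order polynomial are known (in the method they follow from the moments and the Nevanlinna parameter; in the sketch below they are generated from assumed mode parameters purely for illustration), the modes and their decrements are read off the complex roots, e.g. as follows.

import numpy as np

# Illustrative quintic built from assumed poles of the inverse dielectric function.
assumed = [-0.15j,                      # diffusive (Rayleigh-like) pole on the imaginary axis
           1.0 - 0.20j, -1.0 - 0.20j,   # lower shifted mode +-Omega_1 - i*Delta_1
           2.3 - 0.45j, -2.3 - 0.45j]   # upper shifted mode +-Omega_2 - i*Delta_2
coeffs = np.poly(assumed)

roots = np.roots(coeffs)
diffusive = roots[np.isclose(roots.real, 0.0, atol=1e-8)]
shifted = sorted(roots[roots.real > 1e-8], key=lambda z: z.real)

print("diffusive mode: decrement =", -diffusive[0].imag)
for i, z in enumerate(shifted, start=1):
    print(f"Omega_{i} = {z.real:.3f} (units of omega_p), decrement Delta_{i} = {-z.imag:.3f}")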
Among the other approximations (RPA, ESA), only the 9MA agrees with the intermediate scattering function within the statistical error bars (see the left panel in Fig. 5). The details of this approach will be discussed in Sec. IV B. To summarize, a simple combination of the fitting procedure with only two parameters in the case of sharp energy resonances (at lower q) and the dynamical approximation for the Nevanlinna parameter function satisfying the Shannon EM principle when the damping effects prevail allowed us to reproduce the UEG DSF with high accuracy in a broad range of momenta and at different densities. The transition between both regimes can be physically justified by a drastic variation of the decrement of the plasmon mode once it enters the pair-continuum region [75]. In particular, for r_s = 2, the lower dispersion curve Ω_1(q) obtained from the solution (25) at q ≈ 0.63q_F (see the first panel in Fig. 6) already lies at the edge of the pair continuum, and the present approach provides a very accurate description. Other theories (RPA, ESA) demonstrate here a similar accuracy. The ESA theory is based on the static LFC, and for weak coupling (r_s = 2) it leads to a nearly perfect agreement with the DLFC data for all wavenumbers. In contrast, the RPA prediction becomes unreliable in a finite interval, 1.2 ≲ q/q_F ≲ 2.9, where the account of the static pair correlations becomes necessary via the G(q, 0) factor, as demonstrated in the STLS theory [76]. The validity of the RPA solution is restored once the ESA LFC approaches unity for large wavenumbers. Similar trends are observed for r_s = 6 (10). Omitting the case of a sharp plasmon resonance, the best agreement with the DLFC for q/q_F ≥ 1.88 is provided by the 9MA reconstruction. The asymmetric form of the DSF and a noticeable redshift of its maximum with respect to the RPA/ESA results, indicated by a vertical arrow (see the second (third) panel in Fig. 4), are reproduced quite well. On the contrary, we observe systematic deviations (more pronounced for r_s = 10) between the ESA and the DLFC models. The onset of this discrepancy matches the characteristic wavenumber q_c at which the dispersion curve, Ω_1(q), in the second (third) panel of Fig. 6 enters the pair-excitation region: q_c ∼ 0.62q_F (r_s = 2), q_c ∼ 0.9q_F (r_s = 6), and q_c ∼ 1.0q_F (r_s = 10). For q > q_c we observe that the reconstruction with the dynamical Nevanlinna parameter function starts to demonstrate a remarkable agreement with the DLFC and the PIMC data for F(q, τ). This testifies to the importance of the dynamical correlations and the need for a dynamical local-field theory in this regime, substituting the static LFC approximation used in the ESA. These observations validate the physical consistency of the applied Shannon EM technique at high/moderate densities. Once the plasmon mode is strongly damped and broadened, one observes that the spectral density is mainly formed by the contributions of different combinations of quasi-particle excitations, i.e. the microstates in the sense of the statistical ensemble. The most probable (degenerate) solution in this case should correspond to the entropy maximum. This permits us to determine the unknown frequencies ω_{3(4)} by means of the Shannon EM extremum conditions in a unique way. Furthermore, the frequencies of the eigenmodes, Ω_{1(2)}(q), found as the poles of the inverse dielectric function, are also compared in Fig. 4 to the full DSF results.
We observe a quite good agreement between the low-frequency mode Ω_1(q) and the maximum of the spectral density (excluding q ≈ 0.63q_F). The second solution Ω_2(q) is shifted to higher frequencies and, in our interpretation (see below), it is responsible for the observed asymmetric shape of the DSF. This effect becomes more pronounced at lower densities (r_s ≳ 6), when we can even observe a second local maximum predicted independently (Fig. 4) both by the DLFC model at q/q_F ≈ 2.35, 2.94 (r_s = 10) and within the 9MA theory at q ≈ 2.94q_F (r_s = 2, 6, 10) and q ≈ 1.25q_F (r_s = 6, 10). Both approaches indicate the presence of two modes, distinguishable at low and high frequencies, which, however, are difficult to resolve if only the full DSF is available. Recently, the dispersion relation, ε(q, z) = 0, has been analyzed on the complex frequency plane within the ESA approximation [77] based on ab initio QMC data for the static LFC. Only a single solution (a plasmon) was found, and only outside the q-ω region corresponding to the pair continuum (Fig. 6). In contrast, within the present approach the possibility of a three-mode solution (including a diffusive mode) and the mode-mixing effects are incorporated in the analytical representation of the Nevanlinna parameter function and the inverse dielectric function. Our solution for the dispersion relations (25) of the two shifted characteristic modes Ω_{1(2)}(q) is demonstrated in Fig. 6. In the WDM regime (r_s = 2) the Ω_1(q) mode lies close to the center of the pair continuum. A similar behaviour is observed for the ESA/RPA, but the deviations increase with r_s. For r_s = 6 (10) we clearly observe the negative dispersion and a local roton-like feature in the range 1.5 ≲ q/q_F ≲ 2.5. The effect is more pronounced than in the ESA predictions and is in good agreement with the DLFC [34] results. Around q ∼ 1.9q_F the strongly damped Ω_1(q) mode is responsible for the low-frequency DSF maximum, while the upper branch Ω_2(q) generates a broad shoulder at higher frequencies. Moreover, around q ∼ q_c this shoulder is centered close to the RPA dispersion. A similar behavior is captured quite well also by the DLFC for q/q_F ≈ 2.35, 2.94 (r_s = 10), visible in Fig. 4. Our analysis performed for small wavenumbers (q < 0.63q_F, see Fig. 5) has proved that the upper branch Ω_2(q) in the long-wavelength limit coincides with the plasmon mode. Furthermore, the lower branch, Ω_1(q), was found to exist only in the q-ω region spanned by the pair continuum and has a negligible spectral weight in the full DSF (see Fig. 5) when the upper mode, Ω_2(q), forms a sharp plasmon resonance. However, with the increase of the plasmon damping with the wavenumber q, the Ω_1-mode contribution is systematically enhanced. In particular, the Shannon EM applied at q ≈ 0.63q_F predicts a nearly equal spectral weight of both modes (see the DSF in the first row of Fig. 4). The intermediate scattering function F(q, τ) reconstructed from the ESA, the DLFC, and the 9MA coincides with the PIMC data within the statistical error bars and, therefore, cannot be used as a sufficient criterion to select a unique physical solution. For larger q (q > 0.63q_F) all theoretical approaches, except the RPA, predict the DSF maximum to be close to Ω_1(q).
F. Intermediate analysis of the UEG eigenmodes

The above analysis of the characteristic collective modes in electronic fluids, or the UEG at moderate densities (θ = 1, r_s = 2, 6, 10), within the nine-moment approximation complemented with the ab initio QMC data can be summarized as follows. We clearly observe how the position of the DSF peak undergoes a transition from the Ω_2(q) plasmon for q < q_c (outside the pair-continuum region) to the strongly damped low-frequency branch Ω_1(q) when the plasmon can decay into pair excitations. The main effect introduced by the exchange-correlation contribution C_I(q) in the C_4 moment (16) is the formation of a roton-like feature missing in the RPA theory completely. For q > q_c our dispersion equation (23) predicts the presence of an additional second mode Ω_2(q), evolved from the plasmon for q < q_c, but with a significantly enhanced decrement ΔΩ_2. We believe that due to its strong damping it is not of a collective nature and can be viewed as a local enhancement of the spectral density around the Fermi energy. In addition, the upper edge of the pair continuum in Fig. 6 (shown at θ = 0) will be broadened at the simulated temperature θ = 1. The presence of both characteristic modes is practically indistinguishable in the full DSF, as they strongly overlap due to a rapid increase of the corresponding decrements ΔΩ_{1(2)}(q), whose role is represented in Fig. 6, see the red (green) dashed curves. As will be demonstrated below, the role of the second solution Ω_2(q), interpreted here as a local maximum in the multi-excitation continuum, can change at different thermodynamic conditions. In particular, at much lower densities (r_s ≥ 16) and temperatures, it can acquire a collective character, being a combination of several quasiparticle excitations with a significantly long lifetime. These new physical predictions are discussed in detail in Sec. IV. As to the possible physical interpretation of the Ω_1(q) mode when it is strongly overdamped (ΔΩ_1 ∼ Ω_1), its true physical origin has not yet been sufficiently clarified. The redshift of the DSF maximum around q ∼ 2q_F at metallic densities (r_s ∼ 4) is a real physical effect and has been observed experimentally in alkali metals [78] and aluminium [79]. Takada [69,80] in his theoretical analysis attributed the roton-like feature to the excitonic mode dominant in the spectrum around q ∼ 2q_F. The predicted excitonic mode has a two-particle character (an electron-hole excitation) and, therefore, it is mostly pronounced in the wavenumber segment spanned by the pair continuum. The idea of the existence of such a mode in the UEG has been discussed in a number of papers [81-83]. According to this concept, in order to conserve charge and angular momentum, an exchange electron is added to the exchange hole, forming a neutral exchange exciton. Consequently, the pair correlation defining the exchange hole is generalized to a three-fermion correlation. Certainly, this effect does not exist in classical systems. Recently, Dornheim et al. [84] provided an alternative microscopic explanation of the roton feature in terms of an electronic pair alignment model. It was qualitatively demonstrated that the maximum of the RPA-based spectral density should be shifted to lower frequencies due to the exchange-correlation correction in the potential-energy part of the quasiparticle excitation, ω(q) = ω_RPA(q) − α ΔW_XC(q).
Still, the presented model was not capable of predicting the explicit form of the DSF and how it could be modified due to the quasiparticle interaction and damping effects. In summary, both theoretical trends underline the leading role of short-range correlations, either in electron-hole pairs (excitons) or in electron pairs (two-particle alignment). Leaving the physical interpretation of the "rotonization" of the spectrum as a collateral question, in the following analysis we concentrate on a physically reliable and accurate reconstruction of the full DSF and report new evidence of an even more pronounced roton feature observed in the low-density UEG in Secs. IV C and IV D.

IV. CORRELATION EFFECTS IN THE DYNAMICAL RESPONSE

As discussed earlier, with the introduction of the dynamical Nevanlinna parameter function we are able (i) to reproduce the dynamical correlations in the DSF on the same level of accuracy as the dynamical local field [34,35,64], and (ii) to observe a high-frequency mode which generates a high-frequency shoulder, most pronounced at lower densities, r_s = 10. Moreover, the direct solution of the dispersion equation, ε(q, z) = 0, permits us to predict that the characteristic frequency of this mode lies slightly above the double plasmon frequency, i.e. Ω_2(q) ≥ 2ω_p, see Fig. 4. However, a clear observation of this mode in the full DSF is difficult due to strong damping effects in the density regime presented in Fig. 6: the linewidths of the two modes, ΔΩ_{1(2)}(q), overlap strongly. Motivated by these observations, we extend our 9MA approach to lower densities, i.e. we consider the UEG dynamical characteristics at {r_s = 16, 22, 28, 36}, where the Coulomb correlations dominate. The use of the five-moment dynamical Nevanlinna function allows for an ab initio reconstruction of the DSF of electron fluids including dynamical correlations at these conditions for the first time. Since the existing results employing the dynamic local field are limited to intermediate coupling, r_s ≤ 10, for a valuable comparison we use the simulation data based on the effective static local-field correction (ESA) reconstructed at the same thermodynamic conditions {r_s, θ} using the neural-network representation [63]. The corresponding static local-field factor, G(q), proceeding from the ab initio QMC data, contains the full information on the static correlations in the system. For these new studies we have performed the fermionic PIMC simulations [24] with the temperature varied in the range 1 ≤ θ ≤ 8. Notice that, due to the definitions θ = T/T_F and k_B T_F = (ħ²/2m)(3π²n)^{2/3} ∝ r_s^{-2}, by increasing the coupling parameter from r_s ∼ 10 to r_s ∼ 36 we diminish the physical temperature by a factor of ∼13 (a quick numerical check is sketched below). Hence, the DSF results presented below at θ = 1 demonstrate a low-temperature counterpart of the excitation spectrum of Fig. 4 with significantly suppressed thermal effects. The physical temperature becomes comparable with that in Fig. 4 for θ ∼ 2.5 (r_s = 16), θ ∼ 5 (r_s = 22) and θ ∼ 8 (r_s = 28). The plasmon frequency is reduced with the density as well; however, the corresponding reduction is weaker, since ω_p ∝ r_s^{-3/2}. Hence, the thermal contribution to the damping scales as k_B T/ħω_p ∝ r_s^{-1/2}.

A. Static properties of uniform electron fluids

The power moments, C_0(q; r_s, θ) and C_4(q; r_s, θ), along with the f-sum rule C_2(r_s) = ω_p², are the input of the 9MA model.
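The scalings quoted above (T_F ∝ r_s^{-2}, ω_p ∝ r_s^{-3/2}, k_B T/ħω_p ∝ r_s^{-1/2}) follow directly from the definitions and can be checked with a few lines of Python in Hartree atomic units; this sketch is our own addition and only restates those definitions.

import numpy as np

def fermi_energy_au(rs):
    """E_F in Hartree: E_F = (3*pi^2*n)^(2/3)/2 with n = 3/(4*pi*rs^3)."""
    n = 3.0 / (4.0 * np.pi * rs**3)
    return 0.5 * (3.0 * np.pi**2 * n)**(2.0 / 3.0)

def plasma_frequency_au(rs):
    """omega_p = sqrt(4*pi*n) = sqrt(3/rs^3) in Hartree atomic units."""
    return np.sqrt(3.0 / rs**3)

for rs in (10, 16, 22, 28, 36):
    ef, wp = fermi_energy_au(rs), plasma_frequency_au(rs)
    print(f"rs={rs:2d}: k_B T (theta=1) = {ef:.5f} Ha, omega_p = {wp:.5f} Ha, "
          f"k_B T / omega_p = {ef / wp:.3f}")

# Physical temperature at fixed theta scales as T_F ~ rs^-2:
print("T(rs=10)/T(rs=36) =", fermi_energy_au(10) / fermi_energy_au(36))   # ~ 13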
These moments permit us to express the DSF and the dynamical dielectric function in terms of the characteristic frequencies ω̄(q) = {ω_1(q), ω_2(q), ω_3(q), ω_4(q)} (see Eq. (13)), with the additional parameters {ω_3(q), ω_4(q)} being determined at given thermodynamic conditions from the first two characteristic frequencies by the Shannon entropy maximization procedure or from the intermediate scattering function, as described below in Sec. IV B.

[Fig. 7. The static structure factor S(q) = F(q, 0) (obtained by the spline interpolation, see Fig. 2) and the first characteristic frequency, ω_1(q) = sqrt(C_2(q)/C_0(q)) = ω_p/sqrt(C_0(q)), of the UEG at r_s = 10, 16, 22, 28, 36 and temperatures θ = 1, 2, 4. Solid dots correspond to ω_1(q)/ω_p evaluated independently from the ESA model. The presence of the second excitation branch in the spectrum (see Fig. 11) is correlated with the observation of a local maximum in the SSF (i.e. S(q) ≥ 1) for the wavenumbers 1.8 ≤ q/q_F ≤ 2.9.]

The results of our PIMC simulations for the low-density phase of the UEG are presented in Fig. 7 and clearly demonstrate the interplay of the correlation and temperature effects. The static structure factor (SSF), S(q), and the first characteristic frequency, directly related to the static inverse dielectric function (IDF), ε^{-1}(q, 0), are shown as functions of the density parameter r_s and the temperature. It is important that Eq. (26) follows from the Kramers-Kronig relation for the IDF, which is a genuine response function. Thus the static IDF and the SSF are the real physical input quantities of our model. In the lower panels, the characteristic frequency ω_1(q) is evaluated within the ESA model independently. These results are indicated by the solid dots (only for the lowest and highest r_s values) and demonstrate a nice agreement with our present data. From Fig. 7 we can unambiguously conclude that ω_1(q)/ω_p < 1 in a certain wavenumber interval for r_s ≥ 16 and θ ≲ 2, which is equivalent to negative values of the static (inverse) dielectric function at such conditions,

ε^{-1}(q, 0) = 1/ε(q, 0) < 0 .    (27)

The possibility and validity of this inequality is well known as the over-screening effect, see [85,86] and references therein. It is directly related to the analyticity of the direct dielectric function ε(q, z) in the upper half of the complex frequency plane, but this topic is beyond the scope of the present work (a short derivation of the above equivalence is sketched below).
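The equivalence behind Eq. (27) follows in two lines from the definitions quoted in Sec. III A, namely C_0(q) = 1 − ε^{-1}(q, 0), C_2 = ω_p² and ω_1(q) = sqrt(C_2/C_0(q)); in LaTeX notation:

\begin{equation*}
\frac{\omega_1(q)}{\omega_p} = \frac{1}{\sqrt{C_0(q)}} < 1
\;\Longleftrightarrow\;
C_0(q) = 1 - \varepsilon^{-1}(q,0) > 1
\;\Longleftrightarrow\;
\varepsilon^{-1}(q,0) = \frac{1}{\varepsilon(q,0)} < 0 .
\end{equation*}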
B. Reconstruction of the higher-order moments C_{6(8)}

The virtually unknown higher-order power moments C_{6(8)} introduced above constitute a very important ingredient of the dynamical Nevanlinna parameter function. As demonstrated in Sec. III D, their reconstruction based on the maximization of the Shannon entropy functional leads to a nearly perfect agreement with the results based on the dynamical local field. The main advantage of the present approach is that we employ only a limited set of static characteristics, {S(q), χ(q, 0)}. On the contrary, the DLFC reconstruction relies mainly on high-quality QMC data obtained for the density-density response function in the imaginary time. It is the peculiar decay of F_PIMC(q, τ_i), (1 ≤ i ≤ M), obtained with the fermionic PIMC, that has allowed the ab initio reconstruction of the UEG DSF in the high- and moderate-density regime (r_s ≤ 10). The 9MA demonstrates in this regime a similarly accurate predictive power for the dynamical response, however with much less computational effort. The main drawback of the Shannon-entropy approach, as already discussed in Sec. III E, is the artificial smoothing of the sharp energy resonances, in particular in the q range spanned by the plasmon resonance. Additional information on the intermediate scattering function (ISF) available from the QMC data can be used to refine the results of the entropy approach for any wavenumber q. As a quantitative criterion, similar to the one used in the stochastic and the generic optimization techniques [3,32,33], we suggest using the relative deviation from the QMC data integrated along the imaginary time 0 ≤ τ_i ≤ β, Eq. (28), with M being the number of high-temperature propagators and Δτ = τ_{i+1} − τ_i = β/M. In addition, we introduce an analogous measure of the statistical noise present in the QMC data, Eq. (29), where δF_QMC(PIMC)(q, τ_i) is the statistical uncertainty in the evaluation of the ISF.

[Fig. 8. (From left to right) The dynamic structure factor S(q_i, ω) for r_s = 22 (θ = 1) and selected wavenumbers k_i = q_i/q_F from the three models: the ESA and the method of moments with the frequencies ω_{3(4)} (moments C_6, C_8) reconstructed with the Shannon entropy ("SHAN") and as the fit to the intermediate scattering function F(q, τ) ("9MA") [the solution optimized in ω_{3(4)}], along with the relative deviation measure δF_r(q) [in percentage points] of two of these models ("SHAN", "9MA") from F_PIMC(q, τ). The dashed black line "PIMC" stands for the statistical uncertainty in the PIMC data, Eq. (29). The normalized ISF from the three models ("ESA", "SHAN", "9MA") is compared to the ab initio PIMC data represented by black symbols with error bars. The ISF is shown only up to τ = β/2 due to the symmetry, F(q, τ) = F(q, β − τ), provided by the DSF detailed balance condition, S(q, −ω) = e^{-βħω} S(q, ω).]

For the wavenumbers q for which the Shannon-entropy-based solution leads to a reconstructed ISF, S_trial(q, ω) ⇒ F_trial(q, τ), satisfying the acceptance criterion (30), this solution can be accepted as a plausible physical solution which, in addition, satisfies the set of involved power moments exactly. In the q segment where such a condition is violated, a refinement of the trial entropy-based solution is necessary. This approach has been successfully used in the reconstruction of the plasmon feature as presented in Fig. 5, where the higher-order moments C_{6(8)} (or ω_{3(4)}) were used as fitting parameters to satisfy the acceptance criterion (30). In the analysis of the low-density regime (16 ≤ r_s ≤ 36), discussed below in detail in Secs. IV C and IV D, we have followed a similar strategy: the frequencies ω_{3(4)}(q) are varied iteratively and at every step n the corresponding quantities are re-evaluated, ω^{(n)}_{3(4)} → S^{(n)}(q, ω) → F^{(n)}(q, τ) → δF^{(n)}_r(q). An example of the optimization procedure for r_s = 22 and θ = 1 is presented in Fig. 8. The left-hand panel shows the three model DSFs for the selected values of q. The 9MA solutions with the Shannon and the optimized frequencies (denoted as "SHAN" and "9MA") are shown along with the ESA solution. For each case the corresponding ISF was evaluated (see the right-hand panel), and the qualifying deviation measure δF_r(q) (the central panel) was estimated to confirm the acceptance condition (30). The measure of the statistical noise δF_r^PIMC(q) in the PIMC data is shown by the dashed black line (the central panel). As one can see, among the three models only the 9MA solution with the dynamical Nevanlinna parameter function and the optimized frequencies ω_{3(4)} satisfies the acceptance condition for all q ≤ 3.2q_F, and it predicts new energy resonances around q ∼ 2.2q_F (for the full DSF see Sec. IV C).
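The accept/reject logic of Eqs. (28)-(30) can be summarized in a few lines of Python. The discretisation of the deviation measure and the stand-in data below are our own assumptions; only the structure of the test is meant to mirror the procedure described above.

import numpy as np

def rel_deviation(F_trial, F_ref):
    """One plausible discretisation of the integrated relative deviation (Eq. (28)):
    mean over the imaginary-time grid of |F_trial - F_ref| / F_ref."""
    return np.mean(np.abs(F_trial - F_ref) / F_ref)

def noise_measure(dF_ref, F_ref):
    """Analogous measure of the statistical noise in the PIMC data (Eq. (29))."""
    return np.mean(dF_ref / F_ref)

# Stand-in data for one wavenumber: PIMC values with error bars and two trial ISFs.
tau = np.linspace(0.0, 0.5, 9)                      # tau/beta up to the symmetry point
F_pimc = np.exp(-3.0 * tau) + 0.2
dF_pimc = 0.01 * F_pimc                             # 1% statistical uncertainty
F_shan = F_pimc * (1.0 + 0.03 * np.sin(8 * tau))    # entropy-based trial (too smooth)
F_opt = F_pimc * (1.0 + 0.004 * np.sin(8 * tau))    # trial with optimized omega_3, omega_4

for name, F in (("SHAN", F_shan), ("9MA", F_opt)):
    ok = rel_deviation(F, F_pimc) <= noise_measure(dF_pimc, F_pimc)   # criterion (30)
    print(f"{name}: deltaF_r = {100 * rel_deviation(F, F_pimc):.2f}%  accepted: {ok}")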
The corresponding q segment with this new feature is close to the position of the broad maximum in the SSF, see Fig. 7. Both the ESA and SHAN models fail to predict a high-energy eigenmode for the selected wavenumbers (q_4 = 2.08q_F, q_5 = 2.17q_F and q_6 = 2.51q_F) and reproduce a broad distribution with a high-frequency shoulder. Next, we observe that the low-frequency DSF maxima in the SHAN and the 9MA solutions nearly coincide, while the ESA peak position is always shifted to higher frequencies. The same trend was already observed in the moderate-density regime (r_s = 6, 10, see Fig. 4), where both solutions with the dynamical correlations (SHAN and DLFC) demonstrate a very good agreement and a redshift with respect to the predictions of the ESA model. This fact is reflected in the asymptotic behaviour of the intermediate scattering function F(q, τ) as τ → β/2, see the right-hand panel in Fig. 8. Here we observe that the 9MA and SHAN solutions are in very good agreement with the ab initio PIMC data (symbols with error bars), while the ESA ISF, F_ESA(q, τ) (dotted black curves), demonstrates systematic and significant deviations, with some acceptable agreement with the PIMC data achieved only for the smallest wavenumbers {q_1, q_2}, where a single plasmon resonance (ω(q) ∼ ω_p) dominates the full spectral density. Notice, however, that even in this case the plasmon width (decrement) is underestimated by the ESA model, which leads to small but noticeable deviations in the asymptotic value F_ESA(q, β/2). Similar observations apply to the SHAN solution at q_2. Here, in contrast, the plasmon feature is smoothed by the maximization of the entropy functional, which leads to an overestimation of the plasmon decrement compared to the optimized solution: compare the DSF plots S_SH(q_2, ω) and S_9MA(q_2, ω) in the left-hand panel. In summary, we can qualify different trial DSF solutions based on the deviation measure introduced above and presented in the central panel of Fig. 8. The deviations δF_r^ESA(q) exceed 2% and are not shown. The SHAN model allows us to reduce the deviation measure, δF_r^SH(q), by an order of magnitude, but it still significantly exceeds the upper bound specified by the statistical noise, δF_r^QMC(q). Hence, only the optimized solution 9MA is acceptable at these conditions. A similar analysis has been performed for {r_s = 16, 22, 28, 36} and {θ = 1, 1.5, 2, 4, 8}. More examples are presented in Figs. 9 and 10 and lead us to several important conclusions. First, the double-peak DSF structure is reproduced at all analysed densities (16 ≤ r_s ≤ 36) and low temperatures (θ ≲ 2), but only in a finite range of wavenumbers, 1.77 ≲ q/q_F ≲ 2.9. Both the ESA and SHAN models miss this important spectral feature and violate in this part of the spectrum the acceptance condition (30). The deviation measure of the entropy-based solution (SHAN) is significantly reduced with increasing temperature, so that at θ ≳ 4 it becomes comparable to that of the optimized solution, i.e. δF_r^SHAN(q) ∼ δF_r^9MA(q). Even at θ = 2, as demonstrated in the central and the right-hand panels of Fig. 10, the SHAN solution already reproduces the ISF, F_PIMC(q, τ), within the error bars, except for the interval k_3 < k < k_8, where some reminiscence of the second shifted mode is still visible. Notice that the integrated deviation measure, δF_r^SH(q), at this temperature does not exceed 0.2%, while at θ = 1 it might reach 1% (Fig. 9).
The suppression of the high-frequency resonances with θ observed here (cf. Figs. 9 and 10) is analysed in detail in Sec. IV D. Finally, the above analysis supports our previous conclusion with respect to the applicability of the Shannon-entropy approach at high and moderate densities (r_s ≤ 10). Once the interaction and decay processes of the quasiparticle excitations result in a smooth and slowly varying spectral density, the entropy principle applies and already leads to an optimized DSF form corresponding to a physically relevant solution. Moreover, the entropy maximization permits the reconstruction of a physically reliable model of the dynamical Nevanlinna function using a compact representation based on only two optimization parameters, {ω_{3(4)}(q)}. This fact is proved by the present detailed analysis and, in our opinion, constitutes a clear advantage over the complex and not physically transparent representation of the DLFC function of [34,35], which followed the idea of Dabrowski [87], motivated by exact DLFC limiting forms, of introducing an "extended" Padé-type expression for the imaginary part of the DLFC with six "random" parameters.

C. Dynamical structure factor: observation of the second excitation branch and temperature effects

Here we provide some graphical representations of the UEG excitation spectrum in the low-density regime (16 ≤ r_s ≤ 36) based on the accurate reconstruction recipe presented in the previous section. Three temperature cases are shown in Figs. 11, 12, 13 and 14, with a pronounced emergence of the high-frequency mode starting at q ≳ 1.77q_F and ω ≳ 2ω_p, which, at first sight, can be attributed to the double-plasmon excitation. Comparing the different density cases, the sharpest energy resonances are observed at the lowest density, r_s = 36, and the lowest physical temperature, θ = T/T_F = 1, due to the scaling T_F ∝ r_s^{-2}. By increasing the electron gas density from r_s = 36 (θ = 1) to r_s = 16 (θ = 1) we demonstrate a systematic shift of the high-energy branch to higher frequencies along with an enhancement of the damping. For all density cases at θ = 1 the upper mode can be observed only up to q ∼ 2.9q_F; for larger wavenumbers the DSF transforms into a broad distribution with a single maximum. Simultaneously, in the same wavenumber interval (1.77 ≲ q/q_F ≲ 2.9) a well-defined low-frequency mode is present, possessing a roton-like feature in the dispersion curve. A similar effect has already been observed at higher densities, cf. r_s = 10 in Fig. 6. Next, the central and right-hand panels in Figs. 11, 12, 13 and 14 demonstrate the redistribution of the spectral weight and the damping of both modes when the temperature increases. At θ = 2 there is some reminiscence of the second branch, while at θ = 4 we can only observe the high-frequency shoulder found previously for r_s = 6 and r_s = 10 (cf. Fig. 4). Thus, at θ = 4 both modes become overdamped and cannot be well separated in the DSF. This result is in full agreement with our previous discussion in Sec. III F. The explicit temperature dependence of the DSF at three different wavenumber values, corresponding to the plasmon, roton, and beyond-the-roton segments of the spectrum, is presented in Figs. 15, 16 and 17. The two-mode structure is clearly seen in the spectrum within the roton segment, and it evolves into a pattern with a high-frequency shoulder when the higher mode becomes strongly overdamped.
D. Dispersion relations: confirmation of the high-energy quasiparticle branch

In this subsection we present our results for the solutions of the explicit dispersion equation, ε(q, z) = 0, z being the complex frequency. The dispersion relations of both characteristic modes, Ω_{1(2)}(q), and their decrements are presented in Figs. 18-20. As discussed in Secs. III E and IV B, at least for the densities with r_s ≲ 10, the decrement of the ESA plasmon (with the static LFC) is always underestimated (cf. Fig. 5) compared to the 9MA plasmon Ω_1(q) with the dynamical correlations included via the Nevanlinna parameter function Q_2(q, z). Next, with the reconstruction of the dielectric function on the complex frequency plane within the 9MA approach, we can explicitly analyze the behavior of the plasmon decrement as it approaches the pair-excitation continuum (grey shaded area). The corresponding dashed red lines, Ω_1(q) ± ΔΩ_1(q), in Figs. 18 and 19 specify the wavenumber dependence of the half-width of the lower mode Ω_1(q) (the red dots), which in this wavenumber segment represents the plasmon excitation. In addition, the dispersion equation, ε(q, z) = 0, predicts here a second solution Ω_2(q) (the grey dots with a solid line). In our opinion, this additional solution should be considered for these wavenumbers as a virtual mode, since its decrement is found to be comparable to the excitation energy, i.e. ΔΩ_2(q) ∼ Ω_2(q). Physically, the presence of such a solution for q ≲ 0.5q_F can indicate that the main plasmon mode is superimposed on a broad multi-excitation continuum with the center of mass and the characteristic half-width given by the resolved parameters {Ω_2(q), ΔΩ_2(q)}. This interpretation applies equally, in the considered spectral domain (q ≲ q_c), to all density and temperature cases presented in Figs. 18, 19 and 20. Next, a close inspection of the DSF in Fig. 15 (for θ ≤ 4) implies that the Ω_2(q) solution in the frequency range 1.5 ≲ ω/ω_p ≲ 3 has a significantly reduced spectral weight compared to the plasmon mode and does not lead to a DSF structure with two shifted modes. It is also interesting to observe that the Ω_2(q) dispersion converges to the plasmon mode Ω_1(q) for q ∼ q_c with q_c ∼ 1.5q_F, which can indicate that the physical nature of the excitations changes for q ≥ q_c. Indeed, for q > q_c the short-wavelength segment with the negative plasmon dispersion is followed by the roton-like minimum. Besides, exactly in this region we observed a discontinuity in the resolved dispersion relation Ω_1(q), which is preceded by a divergence of the plasmon decrement on approaching q ∼ 1.5q_c (cf. the shaded red area bounded by Ω_1(q) ± ΔΩ_1(q) in Fig. 18). A similar discontinuity, but now for the visually well-resolved two-mode solution, is clearly observed near q ∼ 3q_F (∼ 2q_c). It is also preceded by a divergence of the modes' decrements ΔΩ_{1(2)}. Finally, for larger wavenumbers, q > 3.4q_F, the lower mode Ω_1(q) (indicated now by blue dots) shifts closer to the position of the parabolic RPA dispersion centered in the pair-excitation continuum, while the upper mode Ω_2(q) (shown here by the grey solid line with dots) possesses a very large decrement and, physically, due to the interaction effects, represents the multi-excitation contributions beyond the upper bound of the single-pair continuum of the ideal Fermi gas, ω_{q+q_F} − ω_{q_F}.
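For reference, the single-pair continuum of the ideal Fermi gas against which the modes are classified throughout this section has the simple bounds used in the sketch below (ħ = m = q_F = 1); the mode frequencies in the example are arbitrary illustrative numbers, not the computed dispersions.

import numpy as np

def pair_continuum_bounds(q, qF=1.0):
    """Single-pair (particle-hole) continuum of the ideal Fermi gas at T = 0:
    lower/upper bounds omega_-+ = max(0, (q^2 - 2*q*qF)/2) and (q^2 + 2*q*qF)/2."""
    upper = (q**2 + 2.0 * q * qF) / 2.0
    lower = np.maximum(0.0, (q**2 - 2.0 * q * qF) / 2.0)
    return lower, upper

def inside_continuum(q, omega, qF=1.0):
    lo, hi = pair_continuum_bounds(q, qF)
    return (omega > lo) & (omega < hi)

# Example: where do a few illustrative mode frequencies sit relative to the continuum?
q = np.array([0.5, 1.0, 1.9, 2.5, 3.5])
omega_mode = np.array([0.9, 1.0, 0.7, 1.1, 4.0])     # same reduced units as the bounds
print(inside_continuum(q, omega_mode))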
In summary, the performed detailed analysis of the dispersion relations and of the q dependence of the modes' decrements permits us to clearly distinguish three characteristic wavenumber segments with quasi-excitations of different nature, confirming the physically expected result. First, the dispersion equation predicts the usual plasmon, which is followed by the roton feature observed for 1.8q_F ≲ q ≲ 3q_F. The roton segment is always revealed in the dispersion relation of the first mode Ω_1(q) and is accompanied by the higher-frequency branch Ω_2(q), however only in the same roton wavenumber domain, approaching a lower bound specified by the double-plasmon excitation, 2ħω_p (cf. Fig. 16), when the density is diminished (r_s = 36). A clear distinction of the transition point between the roton and single-particle segments becomes more difficult at higher temperatures and densities. At an intermediate temperature (cf. θ = 2 in Fig. 19) the roton segment is still observable for 2q_F ≲ q ≲ 3q_F, but the lower mode Ω_1(q) is significantly damped due to the decay into particle-hole excitations. In contrast, the upper mode is not influenced by this decay channel, being well above the pair-excitation continuum, and has a significantly smaller decrement. The right-hand boundary of the roton segment around q ∼ 3q_F can again be identified by a steep increase of ΔΩ_{1,2}(q), in particular in the strong-coupling case (r_s = 36). For higher densities/temperatures, such that r_s ≤ 16 or θ ≥ 4, the decrement ΔΩ_1(q) of the main mode is drastically enhanced and overlaps with the high-frequency solution Ω_2(q), cf. θ = 4 in Fig. 20. At these conditions both modes become virtual. The ESA and RPA models do not describe such a complicated spectrum structure, though the roton feature is seen in the unique ESA eigenmode. Finally, in Fig. 21 we demonstrate that at higher temperatures the Shannon EM approach, once used for the reconstruction of the higher characteristic frequencies ω_{3(4)}(q), does not qualitatively influence the physical results for the observed roton feature and the supplemental high-frequency shoulder due to multi-excitations, as compared to the optimized solution (9MA) presented in Fig. 20. A comparative discussion of the high-energy branch is provided in the next subsection.

[Fig. 19. As in Fig. 18 but for θ = 2.]

[Fig. 20. As in Fig. 18 but for θ = 4. The frequencies ω_{3(4)}(q) are found within the 9MA model as the best fit to F(q, τ). Notice that, due to the weak dependence of F(q, τ) on slight variations of ω_{3(4)}(q) at temperatures θ ≳ 4, the solutions of the dispersion equation, Ω_{1(2)}(q), have uncertainties similar to those of the input values ω_{3(4)}(q) and, hence, we obtain non-smooth dispersion curves (mostly in the plasmon region). This problem is not present at lower temperatures (θ = 1, 2) and can be partially subdued by the employment of the Shannon frequencies ω^SHAN_{3(4)}(q) resolved using the entropy maximization principle, which is applicable beyond the plasmon region (q > q_c) and at higher temperatures as discussed in Sec. III E. The improved dispersion is presented in Fig. 21.]

[Fig. 21. As in Fig. 18 but for θ = 4. The frequencies ω^SH_{3(4)}(q) are found by the Shannon entropy maximization procedure. This choice is physically substantiated, as in Fig. 20, since both options reproduce F(q, τ) within the statistical error bars.]
A. Comparative discussion of the dispersion relation

It is natural now to compare our approach to the analysis of the dynamical properties of Fermi fluids of charged particles within existing standard methods of quantum statistical physics, in particular those based on the calculation of Feynman diagrams. Traditionally, these calculations are reduced to the evaluation of the leading corrections to the RPA bubble, i.e., to the evaluation of the DLFC G(q, ω) function in a certain approximation and under certain conditions. In particular, the role of the short-range dynamical correlations in the density response of the homogeneous electron gas in the high-density limit corresponding to some simple metals was studied in detail in Refs. [88-96]. By "short-range" we mean any physical correlation mechanism other than the collective plasma oscillation, whose macroscopic Coulomb origin is well understood through the random-phase approximation. These efforts were driven by the observation of DSF shapes presenting either a double peak or a main peak with shoulders, a shape which could not be described within the unextended RPA. First, the correlated-basis-functions theoretical method was employed, whose advantage was that it provided a clear physical insight into the processes leading to the observable characteristics like S(q, ω) and the inverse longitudinal dielectric function ε^{-1}(q, ω), interrelated by the fluctuation-dissipation theorem. Computations of the leading proper polarization Feynman diagrams outside the particle-hole continuum performed by Sturm and Gusarov [90] made it possible to go beyond the RPA and to describe (in the high-density limit and at zero temperature) the DSF structure attributed to the correlation-induced double-plasmon excitations. Further on, an even better agreement with the observed complicated DSF structure with the second harmonic of the original plasmon excitation, in a significantly broader realm of variation of density and temperature, was achieved within a complete dynamic theory for the electron gas at high to metallic densities. This theory (valid for large and small momentum transfers and at high to metallic electron densities) combined the dominant features of the shielded-interaction and the T-matrix approximations with the conservation sum rules [94-96]. It was found within this theory that the dynamic properties of the resulting polarization function and the dynamic structure factor could not be adequately approximated by the local-field constructions. In particular, the non-local effects were demonstrated to be important for the dynamic properties of the electron gas, see [27,97-100]. In addition, the higher harmonic generation in strongly coupled classical plasmas was observed earlier using the method of molecular dynamics and described [101] in terms of the nonlinear generalization of the quasi-localized-charge approximation [102]. On the other hand, there is a formal non-linear algebraic relation between the DLFC and the dynamical Nevanlinna function Q_2(q, z), see [68], constructed here to satisfy nine sum rules and thus involving three- and four-particle correlations. Though the present extended self-consistent method of moments based on this Nevanlinna function is completely within the linear response theory, it has permitted us to observe a clear sign of the dynamical correlation effects in the strongly coupled UEG (r_s ≥ 16, θ ∼ 1).
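The fluctuation-dissipation theorem invoked here ties the two observables together directly. As a reference sketch (not the paper's own equation, and with the caveat that the prefactor depends on the chosen normalization and unit conventions), one common form reads

S(q, ω) = −[ħ q² / (4π² e² n)] · Im ε^{-1}(q, ω) / (1 − e^{−ħω/k_B T}),

so that the loss function −Im ε^{-1}(q, ω) and the dynamic structure factor carry the same spectral information.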
They manifest themselves as the observed bi-modal structure of the DSF, in a finite momentum range q_c ≤ q ≤ 2q_c, with sharp high-frequency resonances (cf. Fig. 16) at the position predicted by the second solution of the dispersion equation, Ω_2(q), see Fig. 18. The critical momentum q_c can be estimated as the crossing point of the plasmon dispersion curve with the upper bound of the pair excitation continuum, ω_p(q_c) ≈ q_c² + 2q_c. For the wavenumbers q > q_c we can, following [103], assume that a high-energy branch is formed due to the interaction of several quasiparticle excitations. To observe this multi-excitation as a distinct spectral feature, their combined energy should lie above the parabolic upper bound of the pair continuum, and the constituent quasiparticles should have a sufficiently long lifetime. In the case of the strongly coupled UEG such criteria can be satisfied by two possible combinations: two plasmons + roton (2P+R) or two rotons + plasmon (2R+P). Since for r_s ≥ 16 the roton minimum lies well below the plasmon frequency, the (2P+R) states will most probably decay into the lower-energy (2R+P) states, and only the contribution of the latter will dominate in the spectral density. Moreover, the integrated density of quasiparticles scales proportionally to S(q), which indicates their high population near the roton minimum and, consequently, a higher probability of the 2R excitation over a double plasmon. These simple considerations lead us to a qualitative explanation of the position of the higher-energy branch in Figs. 18 and 19, where the second solution of the dispersion equation, Ω_2(q), is seen as a combination of the 2R state, 2Ω_1(q), and a plasmon of energy ω_p. We wish to mention a few more details here. The moment approach permits the construction of an analytical expression for the dielectric function ε(q, ω) and an analysis of the intrinsic discrepancies between the locations of the broad peaks in the DSF spectrum and the explicit solutions of the corresponding dispersion equation. Another advantage of the method of moments is that the involved sum rules are satisfied automatically for any mathematically correct Nevanlinna function, so that even in the static approximation for the latter, the emerging local field, due to its intimate link with the conservation principles, is still a qualitatively correct dynamic characteristic permitting one to go beyond the relaxation-style modifications of the RPA similar to the Mermin theory [68]. On the other hand, static approximations to the local field [63] only modify the static potential in the RPA and lead to no qualitative change in the shape of S(q, ω).

B. Conclusions and outlook

The predicted new shape of the UEG spectrum at low density/strong coupling (r_s ≥ 16) of the electrons constitutes the main result of the present work, achieved, in addition, with a significantly lower computational effort and with much lower complexity in comparison to the quantum Monte Carlo path-integral method based on the DLFC reconstruction [34,35]. The relative simplicity of the method of moments for theoretical and numerical calculations allows one to carry out the on-the-fly reconstruction of the dynamical characteristics of warm and dense uniform electron fluids of variable density and coupling. The interrelation between the PIMC-generated dynamic local-field correction and the nine-moment Nevanlinna function is to be studied in detail elsewhere.
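The estimate of the critical momentum quoted above reduces to finding the crossing of the plasmon dispersion with the pair-continuum bound. Below is a minimal numerical sketch of that root-finding step (ours, not the authors' code); the Bohm-Gross-like plasmon form and the reduced units are illustrative assumptions, and in practice one would tabulate Ω_1(q) from the moment approach.

```python
# Minimal sketch: locate q_c where a model plasmon dispersion crosses the
# upper bound of the pair-excitation continuum, omega_p(q_c) ~ q_c^2 + 2*q_c
# (reduced units, following the estimate quoted in the text).
import numpy as np
from scipy.optimize import brentq

def omega_plasmon(q, omega_p0=1.0, alpha=0.6):
    # Placeholder Bohm-Gross-like dispersion; an assumption for illustration only.
    return np.sqrt(omega_p0**2 + alpha * q**2)

def pair_continuum_upper(q):
    # Upper bound of the single-pair continuum in the same reduced units.
    return q**2 + 2.0 * q

q_c = brentq(lambda q: omega_plasmon(q) - pair_continuum_upper(q), 1e-3, 5.0)
print(f"q_c = {q_c:.3f} (in units of q_F)")
```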
Our results testify to the importance of the dynamical correlation effects in terms of multiple excitations beyond the particle-hole band. The single particle-hole excitations alone are presumably not sufficient to explain the obtained ab initio results for the intermediate scattering function and the interrelated high-frequency tail of the dynamical structure factor. The predictions of the present results, as well as the description of the position and magnitude of the observed high-frequency branch, deserve, in our opinion, future experimental investigations, which will provide a deeper understanding of the collective excitations in the UEG in the low-density/strong-coupling regime. The input required by the suggested approach is reduced to a limited set of frequency moments and the simulation data on only two static characteristics, the static structure factor S(q) and the static value of the system dielectric function. Both quantities can be accurately estimated from first-principle PIMC simulations [24,34]. For an approximate evaluation of S(q) a broad list of methods is available, e.g. the effective static local-field (ESA) parametrization [63], the hypernetted-chain method [104], and the STLS scheme [60]. On the other hand, the observation of new details in the system spectrum is directly related to the incorporation into the model of four higher-order sum rules not taken into account in earlier models. The values of these sum rules are determined here using the Shannon entropy maximization procedure, optimized, where necessary, by the PIMC calculations of the ISF. Their direct determination in terms of the three- and four-particle static correlation functions found using the PIMC approach could be a difficult but interesting task. In one-component electron liquids, but not, e.g., in hydrogen-like two-component plasmas [105-109], even more frequency moments/sum rules converge, and it would be interesting to investigate their influence on the eigenmodes. The SCMM permits such a development, but it remains to be seen whether it would lead to observable new details of the system's dynamical properties. The obtained algebraic expressions for the inverse dielectric function and other dynamical quantities can also be employed in a variety of WDM applications and beyond, e.g. in the interpretation of XRTS experiments [8], the analysis of ion stopping power models [110,111], or the evaluation of the ionization potential depression in dense plasmas [9,112]. Some mathematical aspects of the moment approach are provided below, along with the mathematical details of the nine- and five-moment versions of the self-consistent method of moments. We mention that Nevanlinna's theorem can be proven on the basis of the technique of generalized resolvents of M. G. Krein, see [113,114]. Further details of the method of moments can be found in [115].

The dynamical Nevanlinna function. Nevanlinna's formula [41] establishes a one-to-one linear-fractional transformation between all solutions of the Hamburger problem and all Nevanlinna functions Q_n(q, z) such that lim_{z→∞} Q_n(q, z)/z = 0:

∫_{-∞}^{+∞} dL(q, ω)/(z − ω) = [E_{n+1}(z; q) + Q_n(q, z) E_n(z; q)] / [D_{n+1}(z; q) + Q_n(q, z) D_n(z; q)],  n = 0, 1, 2, …   (38)

The coefficients of this transformation are the polynomials D_n(z; q), orthogonal with respect to the weight L(q, ω), which can easily be constructed using the standard Gram-Schmidt procedure, while the polynomials E_n(z; q) are their conjugates [44].
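The orthogonal polynomials D_n and their conjugates E_n entering Eq. (38) can be generated directly from the power moments. The following is a minimal sketch (ours, not the authors' code) using Gram-Schmidt with the moment inner product; the Gaussian moments at the end are a stand-in for the actual frequency moments of the loss function.

```python
# Sketch: orthonormal polynomials D_n from power moments mu_k via Gram-Schmidt
# with the moment inner product <x^i, x^j> = mu_{i+j}, plus the conjugate
# polynomials E_n(z) = integral (D_n(z) - D_n(t)) / (z - t) dL(t).
import numpy as np

def inner(p, q, mu):
    """<p, q> with respect to the measure defined by the moments mu."""
    return sum(p[i] * q[j] * mu[i + j] for i in range(len(p)) for j in range(len(q)))

def orthonormal_polys(mu, nmax):
    """Coefficient arrays (ascending powers) of D_0 ... D_nmax."""
    polys = []
    for n in range(nmax + 1):
        p = np.zeros(n + 1)
        p[n] = 1.0                          # start from the monomial x^n
        for d in polys:                     # project out the lower D_k
            p[:len(d)] -= inner(p, d, mu) * d
        p /= np.sqrt(inner(p, p, mu))       # normalize
        polys.append(p)
    return polys

def conjugate_poly(d, mu):
    """Conjugate polynomial expressed through the moments."""
    e = np.zeros(max(len(d) - 1, 1))
    for k in range(1, len(d)):
        for j in range(k):
            e[k - 1 - j] += d[k] * mu[j]
    return e

# Illustrative moments of a unit Gaussian (mu_k = (k-1)!! for even k, 0 otherwise).
mu = [1, 0, 1, 0, 3, 0, 15, 0, 105]
D = orthonormal_polys(mu, 4)                # normalized Hermite-like polynomials
E = [conjugate_poly(d, mu) for d in D]
print(np.round(D[2], 3), np.round(E[2], 3))
```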
In the main text we consider the five- and nine-moment Hamburger problems, so that only the corresponding low-order polynomials are needed. In the case of 5 = 2n + 1 moments, by virtue of the Kramers-Kronig relations, we arrive at the expression for the inverse dielectric function provided in Eq. (18) of the main text. In quantum systems we abandon the static approximation for the Nevanlinna function, h_2(q) = Q_2(q, 0) = ω_2²(q)/(√2 ω_1(q)), and reconstruct the dynamic five-moment Nevanlinna function by equating the r.h.s. of Eq. (38) with n = 2 to the same expression with n = 4, from which we express the five-moment Nevanlinna function in terms of the nine-moment one. Then we applied to the loss function, which is obviously proportional to the imaginary part of the r.h.s. of Eq. (41), the procedure employed in [38,39] to determine the five-moment parameter (40), and obtained the zero-frequency value of the nine-moment Nevanlinna function, Eq. (43). This approximation turned out to be sufficient not only for a reliable analytical description of the UEG-DSF QMC data, but also for the direct observation of the two-mode structure of the system spectrum. Moreover, the above nine-moment expressions simplify into the previous five-moment solution (40) as soon as we consider two successive limiting transitions: ω_4(q) → ∞ and ω_3(q) → ∞.
Different approaches in sulfonated poly (ether ether ketone) conductivity measurements

Ion conductivity of sulfonated poly (ether ether ketone) (SPEEK) membranes with various degrees of sulfonation (DS) was investigated using impedance analysis with different measuring cell configurations, and the ion conductivity was calculated from the resistances of the polymer membranes. SPEEK was synthesized from poly (ether ether ketone) (PEEK) via a sulfonation reaction in concentrated sulfuric acid (95-98%). Scanning electron microscopy (SEM) analysis of the membrane surface was performed to determine possible mechanical damage to the membrane during resistance measurements.

Introduction

In modern society there is increasing interest in innovative technology as well as in the transition to sustainable energy. The polymer electrolyte membrane is a key component in fuel cell technology, as it provides ion transport in the fuel cell [1]. Sulfonated poly (ether ether ketone) (SPEEK) is a promising material for application in direct methanol fuel cells [2], and it has been reported that the SPEEK membrane might be cross-linked to enhance its properties [3]. The ion conductivity of a polymer electrolyte membrane is a key parameter for providing high efficiency, and therefore correct conductivity measurements are very important. Impedance analysis is widely used to determine membrane conductivity, but the literature data on specific cell configurations are limited, and the methods for measuring the conductivity of polymer membranes have not been described sufficiently by previous authors. The contact between the membrane and the electrode is important and the contact resistance is difficult to evaluate; that is why it is considered a probable reason for decreased conductivity values. It has also been reported that the conductivity of a membrane can be measured by the two-probe method, that a slight anisotropy of proton conductivity is observed, and that the through-plane setup shows slightly higher proton conductivity than the in-plane setup. The four-probe method is of limited use for polymer membranes due to the low material hardness and the strong dependence on relative humidity [4]. The aim of our study is to compare two different methods of measuring the ion conductivity of polymer electrolyte membranes using impedance analysis and to evaluate whether they damage the membranes. First, the standard method of pressing the sample in-between two metal electrodes was used. Second, the membrane was pressed in-between two Nafion membranes with a known conductivity (differential method). In this paper the two approaches to ion conductivity measurements have been applied to SPEEK membranes with different degrees of sulfonation (DS) using resistance data obtained by impedance analysis. It was revealed that the differential method, as compared to the single membrane method, shows good correlation with the reference material, which was a commercial Nafion N-117 membrane.

Experimental

SPEEK was synthesized from poly (ether ether ketone) (PEEK) via sulfonation with concentrated sulfuric acid (95-98%) at 60 °C. The degree of sulfonation was determined using the titration method as described previously [3,5]. A Metrohm Autolab potentiostat/galvanostat with FRA was used to determine the membrane resistance in a frequency range from 100 Hz to 50 kHz with a signal amplitude of 10 mV. Measurements were taken at 22 °C. The membranes were immersed in deionized water for 24 h, and RH = 100% was maintained in the measuring chamber.
Conductivity measurements of the SPEEK membranes as well as of the Nafion membranes were performed through-plane using impedance analysis with various cell configurations. In the differential method, the SPEEK membrane was sandwiched between two Nafion membranes and pressed between two copper electrodes (1 cm in diameter). The resistance R1 was obtained from the Nyquist plot by extrapolating to high frequencies. Using the same method, the resistance of the two Nafion membranes (R2) was determined and, as a result, from the difference between these two measurements the resistance Rmembrane and the conductivity of the SPEEK membrane were calculated using equation (1):

Rmembrane = R1 - R2   (1)

In the case of the single membrane method, the SPEEK membranes were pressed directly between two copper electrodes. Impedance analysis was performed and the resistance was found from the Nyquist plot. The Nafion membrane was used as a reference. The conductivity of the polymer membrane in both the differential and the single membrane configuration was calculated from the complex resistance data of the Nyquist plot, and the results were compared with literature data. The scanning electron microscopy (SEM) method was used to inspect the membrane surface and characterize the mechanical damage before and after the conductivity measurements for both the SPEEK and the Nafion membranes.

Results and discussion

In our work two methods for measuring conductivity are assessed, and the results are compared with literature data. In both cases the same membrane preparation method was used, with N,N-dimethylformamide as the solvent. The membrane thickness ranged from 0.12 to 0.18 mm and was measured with a digital micrometer. The electrode surface area was 0.785 cm². Figure 1 shows the cell configurations for the differential and the single membrane method. The variation of the data (Figure 2, Tables 1 and 2) is significant. The Nafion and SPEEK membrane conductivities obtained using both methods are shown in Table 3. The Nafion conductivity according to the product information is 0.10 S/cm, and the same value was reproduced using the differential method. As we can see from Figure 2, the SPEEK membrane conductivity as a function of DS can be fitted by a line if the same method is used for the measurements. However, the data from the two methods differ significantly. We can conclude that the contact resistance between the electrodes and the membrane is quite high; using the differential method allows it to be excluded efficiently. Evaluating the literature data from this point of view, it is evident that contact resistance also explains the high variation among literature values. It is worth mentioning that cell stability can be evaluated via the Kramers-Kronig test; during our measurements the χ², χ²_re and χ²_im values were between 10⁻⁵ and 10⁻⁶, which means that the cell was stable and the equilibrium was not disturbed during the impedance analysis. Figure 3 shows an example of impedance data for the SPEEK membrane with DS = 0.82 measured with the differential method. Nyquist and Bode plots are presented for the cell configuration consisting of two Nafion membranes with one SPEEK membrane between them. The resistance is obtained from extrapolation to the high frequencies, where the imaginary impedance is equal to zero. Extrapolation with the semicircle method cannot be used in this case because a full semicircle is not obtainable; therefore, the linear regression method is used. The Bode plot shows that the contact resistance is significant.
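To make the resistance-to-conductivity step explicit, here is a minimal sketch (ours, not part of the paper) that applies sigma = d/(R·A) after the differential subtraction of Eq. (1); the resistance values and thickness are hypothetical and only illustrate the arithmetic.

```python
# Minimal sketch: through-plane conductivity from measured resistances.
ELECTRODE_AREA_CM2 = 0.785   # 1 cm diameter electrode, as in the text

def conductivity(resistance_ohm, thickness_cm, area_cm2=ELECTRODE_AREA_CM2):
    """Conductivity in S/cm from a membrane resistance: sigma = d / (R * A)."""
    return thickness_cm / (resistance_ohm * area_cm2)

def differential_conductivity(r_stack_ohm, r_nafion_pair_ohm, thickness_cm):
    """Differential method: remove the two Nafion membranes' resistance, Eq. (1)."""
    r_membrane = r_stack_ohm - r_nafion_pair_ohm
    return conductivity(r_membrane, thickness_cm)

# Hypothetical resistances (ohms) and thickness, for illustration only.
r1, r2 = 2.1, 1.3          # stack with SPEEK vs. two Nafion membranes alone
d_cm = 0.015               # 0.15 mm SPEEK membrane expressed in cm
print(f"sigma = {differential_conductivity(r1, r2, d_cm):.4f} S/cm")
```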
Consequently, this explains the higher resistance and lower conductivity obtained with the single membrane method and shows that the differential method excludes the contact resistance more efficiently.

Table 1. Ion conductivities of SPEEK membranes as measured by the differential method.

Figure 2. Ion conductivities of SPEEK membranes from literature data and from the two measuring methods: differential (1) and single membrane (2) method.

Figure 3. Impedance analysis for the SPEEK membrane with DS = 0.82 obtained by applying the differential method with two Nafion membranes: a) Nyquist plot, b) Bode plot, c) obtaining the resistance from the Nyquist plot by extrapolating to the high frequencies.

SEM analysis was used to inspect the membrane surfaces. As an example, the SPEEK membrane with DS = 0.82 is presented. Figure 4 reveals some characteristic surface defects, which were produced during the synthesis process. After impedance analysis with the differential method no additional defects were observed, but after measuring with the single membrane method there are clearly significant scratches. A similar pattern was observed when inspecting the Nafion membranes: after impedance analysis, increasing mechanical damage of the Nafion membrane surface could be observed. SPEEK membranes are less elastic, so the mechanical damage is also more pronounced.

Figure 4. Scanning electron microscope images of the SPEEK membrane with DS = 0.82: a) before impedance analysis, b) after impedance analysis with the differential method, c) after impedance analysis with the single membrane method, d) Nafion membrane before impedance analysis and e) Nafion membrane after impedance analysis with the differential method.

Conclusion

In this study sulfonated poly (ether ether ketone) membranes were studied using impedance analysis. The differential method proved to be more accurate in conductivity measurements than the single membrane method; a commercial Nafion membrane was used as a reference. SEM analysis also revealed that the differential method is less damaging to the membrane surface. Therefore, the differential method in impedance analysis proved to be more efficient. The membranes are typically used in different solid-state ionic devices, and the same membrane might be used in a device after the conductivity measurements. The wide spread of the literature data might be explained by differences in the contact resistance between the membrane and the measuring electrodes.
Genetic variant of cyclooxygenase-2 in gastric cancer: More inflammation and susceptibility

Gastric cancer accounts for a large share of cancer-related deaths worldwide. Although various methods have considerably improved the screening, diagnosis, and treatment of gastric cancer, its incidence is still high in Asia, and the 5-year survival rate of advanced gastric cancer patients is only 10%-20%. Therefore, more effective drugs and better screening strategies are needed to reduce the incidence and mortality of gastric cancer. Cyclooxygenase-2 (COX-2) is considered to be the key inducible enzyme in prostaglandin (PG) synthesis, which is involved in multiple pathways of the inflammatory response. For example, inflammatory cytokines stimulate innate immune responses via Toll-like receptors and nuclear factor-kappa B to induce the COX-2/PGE2 pathway. In these processes, the production of an inflammatory microenvironment promotes the occurrence of gastric cancer. Epidemiological studies have also indicated that non-steroidal anti-inflammatory drugs can reduce the risk of malignant tumors of the digestive system by blocking the effect of COX-2. However, the clinical use of COX-2 inhibitors to prevent or treat gastric cancer may be limited because of potential side effects, especially in the cardiovascular system. Given these side effects and the low treatment efficacy, new therapeutic approaches and early screening strategies are urgently needed. Some studies have shown that genetic variation in COX-2 also plays an important role in carcinogenesis. However, the genetic variation analyses in these studies are incomplete and isolated, pointing out only a few single nucleotide polymorphisms (SNPs) associated with the risk of gastric cancer, and no comprehensive study covering the whole gene region has been carried out. In addition, copy number variation (CNV) is not mentioned. In this review, we summarize the SNPs in the whole COX-2 gene sequence, including exons, introns, and both the 5' and 3' untranslated regions. The results suggest that COX-2 does not increase its expression through CNV and that SNPs in COX-2 may serve as potential markers to establish risk stratification in the general population. This review synthesizes emerging insights into COX-2 as a biomarker in multiple studies, summarizes the association between whole COX-2 sequence variation and susceptibility to gastric cancer, and discusses future prospects for therapeutic intervention, which will be helpful for early screening and for further research to find new approaches to gastric cancer treatment.

INTRODUCTION

Gastric cancer is the fifth most commonly diagnosed cancer and the third leading cause of cancer-related deaths worldwide. The incidence of gastric cancer remains high in Eastern Asia despite its global decrease in the last few years [1,2]. Approximately 75% of patients with gastric cancer are diagnosed at an advanced stage and the median survival is 7-10 mo [3]. Therefore, individualized prevention and early detection and treatment are of clinical significance in improving the survival time and reducing the mortality of gastric cancer. Environmental factors including smoking, drinking, and Helicobacter pylori (H. pylori) infection and genetic alterations such as susceptible genetic variants and epigenetic alterations have been associated with gastric carcinogenesis [4,5]. Cyclooxygenase-2 (COX-2) has been extensively studied in carcinogenesis, and its participation in chronic inflammation and various infections (such as H.
pylori infection and chronic viral hepatitis) significantly increases the risk of cancer [6,7]. In this review, we summarize the association between whole COX-2 sequence variation and susceptibility to gastric cancer. We also discuss the crucial role of COX-2 in the occurrence of gastric cancer and its mechanisms.

MOLECULAR CHARACTERISTICS OF COX-2

COX-2 is known as the key inducible enzyme in prostaglandin (PG) synthesis, and the COX-2 gene is located at chromosome 1q25.2-25.3 and composed of 9 introns and 10 exons [8]. The 5' region of the COX-2 gene has binding sites for several activated transcription factors, such as nuclear factor-kappa B (NF-κB), stimulatory protein 1 (SP1), activator protein-2 (AP-2), and transforming growth factor. In order to explore the expression of COX-2 in normal tissues, COX-2 expression data were downloaded from the Genotype-Tissue Expression (GTEx) database (https://xenabrowser.net/datapages/) and the distribution of COX-2 expression in different tissues was visualized by plotting an anatomical map with R-3.5.3 software. Detailed data are shown in Supplementary material 1. Previous studies have shown that COX-2 is essentially not expressed in normal tissues and organs under physiological conditions, although it is constitutively expressed in the brain and kidney. We also found that the COX-2 gene was rarely expressed in normal tissues (including the stomach), but was more abundant in the colon and lungs, in both males and females (Figure 1). However, its expression increases dramatically in response to certain inflammatory stimuli such as cytokines, oncogenes, and tumor inducers [9]. COX-2 has been shown to play crucial roles in tumorigenesis [10]. The COX-2/PGE2 pathway activates macrophage infiltration and further induces cytokine signaling to activate the transcription factors NF-κB and signal transducer and activator of transcription 3 (Stat3) [11,12], which can change the tumor microenvironment and affect the occurrence of cancer.

GENETIC VARIANTS OF COX-2 IN TUMORIGENESIS

COX-2 has been implicated in the etiology of cancer and its expression has been confirmed to be increased in gastric cancer. Genetic variants may lead to an increase in the expression and a change in the function of COX-2, which may affect the occurrence of cancer. Studies have suggested that COX-2 single nucleotide polymorphisms (SNPs) may affect gastric tumorigenesis. However, these studies focused only on a few SNPs or on particular regions and lacked an overall description of the whole sequence variation of COX-2. In this review, we summarize the SNPs in the whole COX-2 gene sequence, including exons, introns, and both the 5' and 3' untranslated regions (UTRs). In addition, we also analyze the copy number variation (CNV) information of COX-2 in gastric cancer.

CNV of COX-2 in gastric cancer

The SNPs of COX-2 have been widely studied, but its CNV has rarely been examined. We downloaded the copy number data of the COX-2 gene in gastric cancer from The Cancer Genome Atlas (TCGA) database (https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga), and then visualized the data with R-3.5.3 software (detailed data in Supplementary material 2). The genes displayed are all genes with CNV, but no CNV of the COX-2 gene was found.
Association between COX-2 SNPs and gastric cancer

The SNPs of COX-2 may have a functional effect on COX-2 transcription and cause COX-2 overexpression, changing the response to various inflammatory stimuli. However, a single SNP locus explains only a very small part of cancer occurrence, which is not enough to fully demonstrate the association between COX-2 SNPs and gastric cancer. We combined data from the TCGA (https://portal.gdc.cancer.gov/repository; downloaded data in Supplementary material 3) and Ensembl (http://grch37.ensembl.org/Homo_sapiens/Tools/VcftoPed?db=core) using Haploview 4.2 software to screen SNPs. The criteria for screening SNPs were a minor allele frequency ≥ 0.05 and pairwise r² < 0.8. All obtained SNPs are shown in Figure 2. At the same time, we retrieved the SNPs that have already been studied. The results showed that 14 SNPs in the whole sequence of COX-2 were associated with cancer, including 9 SNPs associated with gastric cancer (Table 1). At present, five COX-2 polymorphisms have been extensively studied, including rs5275 and rs689470T>C, which are located in the 3' UTR, as well as rs689466G>A and rs20417G>C, which are located in the promoter region with multiple enhancers and transcriptional regulatory elements. SNPs in the COX-2 promoter region may change the activity of the promoter and the level of C-reactive protein (CRP), which may be related to acute or chronic inflammation [13]. Although SNPs may have functional effects, a large number of functional features of SNPs have not yet been discovered, and their mechanisms need to be studied further. Meanwhile, risk estimates from previous studies have been inconsistent. Therefore, we made a summary and pooled analysis of the extracted data. The results showed that rs689466G>A, rs20417G>C, and rs3218625G>A in the promoter region conferred a higher risk of gastric cancer [A vs G: odds ratio (OR) = 1.19, 95% confidence interval (CI): 1.10-1.29; C vs G: OR = 1.26, 95%CI: 1.12-1.41; and A vs G: OR = 1.62, 95%CI: 1.02-2.56]. Similarly, rs5275T>C and rs689470T>C in the 3' UTR were significantly associated with gastric cancer (C vs T: OR = 1.14, 95%CI: 1.01-1.29 and TC vs TT: OR = 7.49, 95%CI: 1.21-46.2). As to the rs2066826G>A polymorphism, a significant association was detected in pancreatic cancer (A vs G: OR = 1.60, 95%CI: 1.06-2.40, P = 0.026). However, rs5279T>C in the exon region and rs4648298A>G in the 3' UTR may reduce the risk of gastric and colorectal cancers (TC vs TT: OR = 0.24, 95%CI: 0.08-0.73 and G vs A: OR = 0.24, 95%CI: 0.10-0.56). In our previous study of 296 gastric cancer patients and 319 control family members in the Chinese Han population, an increased risk was observed in individuals with the COX-2 rs689466AA genotype (OR = 2.03; 95%CI: 1.27-3.22), and the association decreased as the degree of relationship decreased [14]. Recently, we further performed genotyping in 660 gastric cancer cases from the First Affiliated Hospital of Zhengzhou University from 2013 to 2015 and in 660 control individuals from a community-based cardiovascular disease survey during the same period. Our results showed that individuals with the rs20417 GC genotype were more likely to develop gastric cancer (OR = 1.54, 95%CI: 1.08-2.19). Meanwhile, Zhang et al [15] found that rs689466 G>A enhanced the transcriptional activity and thus increased the expression of COX-2 by creating a c-MYB binding site.
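The allele-level risk estimates quoted above (OR with 95%CI) can be reproduced, for a single study, from a 2x2 allele count table. The sketch below is ours, not the authors' analysis pipeline, and uses hypothetical counts; pooling across studies would additionally require inverse-variance (meta-analytic) weighting.

```python
# Minimal sketch: allelic odds ratio with a Woolf (log) 95% confidence interval.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b: risk / reference allele counts in cases;
    c, d: risk / reference allele counts in controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical allele counts, for illustration only.
print(odds_ratio_ci(a=310, b=1010, c=250, d=1070))
```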
These results suggest that the SNPs of the COX-2 gene play an important role in the carcinogenesis of gastric cancer, especially the variation in the promoter region, which may have functional consequences. In addition, SNPs in the promoter region could enhance COX-2 gene transcription, affect the stability of the mRNA, regulate the inflammatory response, and consequently lead to individual variation in susceptibility to gastric cancer [16,17]. Our study provides a basis for more thoroughly exploring the exact function of COX-2 in the occurrence of gastric cancer. Further functional studies will be considered and elaborated in another study. A variety of stimuli (such as H. pylori infection, NF-κB activation, K-ras expression, and the dysregulation of some trans-acting regulatory factors) can lead to overexpression of COX-2 and more inflammation in neoplasia [20-23]. A study found that H. pylori infection stimulates Toll-like receptors (TLRs) to activate innate immunity and the COX-2/PGE2 pathway, which induces "infection-associated inflammation" [such as CXCL1, 2, and 5, CCL3 and 4, interleukin (IL)-11, IL-23, and tumor necrosis factor alpha (TNF-α)], generating an inflammatory microenvironment and further leading to gastric tumorigenesis [25,26]. Another study using AGS gastric cancer cells showed that H. pylori (a patient isolate) promotes COX-2 transcription, which may be due to the activation of mitogen-activated protein kinase (MAPK) pathways (ERK1/2, p38, and JNK) and the activation of the cAMP response element (CRE) and AP-1 on the COX-2 promoter by TLR2/TLR9 [27]. Jüttner et al [28] found that the binding of upstream stimulatory factors 1/2 (USF1 and 2) to the CRE/E-box site of the COX-2 promoter promotes the upregulation of COX-2 after H. pylori infection.

[Fragment of Table 2: ... Liu et al [57] and Niikura et al [78] | Oxadiazole 10c | Colon | Increases antitumor activity; increases sensitivity | Docked into the COX-2 binding site | El-Husseiny et al [79] | Celecoxib and platinum | ...]

Notably, COX-2 is overexpressed not only in H. pylori-positive gastritis and gastric cancer, but also in precancerous lesions such as intestinal metaplasia and atrophic gastritis, suggesting that COX-2 plays a key role in early gastric carcinogenesis [35,36]. This may be associated with individual genetic susceptibility, especially to variants of inflammatory genes, such as the COX-2, IL-1β, and TNF-α gene polymorphisms in our previous reports [14,37].

Inflammatory pathway of COX-2

COX-2 is regulated by multiple pathways in gastric cancer cell lines. The COX-2/PGE2 pathway has been shown to play crucial roles in tumorigenesis by triggering the production of an inflammatory microenvironment [10,38,39]. However, the exact tumor-promoting mechanism of PGE2 remains unclear. It has been reported that TLR signaling through the adaptor molecule MyD88 induces the COX-2/PGE2 pathway to promote the occurrence of gastritis and gastric cancer [26]. Meanwhile, the expression of COX-2 was significantly reduced when NF-κB signaling was blocked by chondroitin sulfate [40]. Some inflammatory cytokines, such as IL-6, IL-8, and TNF-α, can activate NF-κB to induce overexpression of COX-2 [41]. It has also been reported that the expression of K-ras and the activation of matrix metalloproteinase-2 (MMP-2) and MMP-9 are significantly related to the increased expression of COX-2 [42]. They may jointly promote the occurrence of gastric cancer, but the mechanism is not clear. Recent studies suggest that the cooperation of the COX-2/PGE2 pathway and TLR/MyD88 signaling through NF-κB activation is crucial in tumorigenesis [26].
Some genetic studies have shown that the activation of oncogenic Wnt signaling is related to the occurrence of gastric tumors induced by COX-2 [10,38]. In the TCGA database, the Wnt signal and the NF-κB and COX-2 inflammatory pathways were observed to be activated simultaneously in intestinal-type gastric cancer [26]. Adenomatous polyposis coli (APC) regulates the expression of COX-2 through a β-catenin-independent mechanism [43]. Inducible nitric oxide synthase (iNOS) can increase the activity of COX-2 to upregulate the production of PGs [44]. These results suggest that COX-2 promotes the occurrence of cancer through the induction of various inflammatory signaling pathways and the generation of an inflammatory microenvironment (Figure 3).

ROLE OF COX-2 IN CANCER PREVENTION AND THERAPEUTICS

Epidemiological studies have indicated that the application of COX-2 inhibitors can reduce the inflammatory response and suppress gastrointestinal carcinogenesis. COX-2 may be an effective and crucial target for treating patients with atrophic gastritis and reducing the risk of H. pylori-related gastric cancer [22,45]. The use of non-steroidal anti-inflammatory drugs (NSAIDs) such as aspirin can reduce the risk of malignant tumors of the digestive system by blocking the effect of COX-2 [46]. NSAIDs can reduce the number and size of colorectal carcinomas in patients with familial adenomatous polyposis. Celecoxib, a selective COX-2 inhibitor and NSAID, can also reduce the occurrence of digestive system cancers, for example by inhibiting the proliferation of gastric, esophageal, and colorectal cancer cells [47,48]. It is estimated that long-term use of NSAIDs can reduce the incidence of colon cancer by 40%-50% [49]. However, studies have shown that the use of NSAIDs is not an effective chemoprophylaxis for all cancer patients, as aspirin has no effect on the incidence of colorectal adenoma or cancer in patients with Lynch syndrome [50]. Therefore, combination regimens of COX-2 inhibitors and newly developed inhibitors have gradually emerged and show better antitumor activity. More detailed results are shown in Table 2. Moreover, the clinical use of COX-2 inhibitors to prevent or treat gastric cancer may be limited because of potential side effects, especially in the cardiovascular system, such as elevated blood pressure and myocardial infarction [51,52]. Recently, a systematic review of 329 studies suggested that, in addition to COX-2-selective inhibitors, NSAIDs also increase the risk of cardiovascular morbidity [53]. These side effects and the low treatment efficacy hinder the application of NSAIDs and COX-2-selective inhibitors as chemopreventive drugs. At the same time, a study indicated that the combined regulation of the inflammatory microenvironment by inhibiting the COX-2/PGE2 and TLR/MyD88 pathways may be an effective strategy to prevent or treat the development and malignant progression of gastrointestinal cancer, especially in tumors with p53 gain-of-function mutations [54]. Therefore, targeting the COX-2/PGE2 pathway combined with the TLR/MyD88 signaling pathway may inhibit the inflammatory microenvironment and the stemness of gastric tumor cells, which may be an effective strategy for the prevention and treatment of gastric cancer and needs further clinical evaluation [26].
In addition, as genetic susceptibility information and COX-2 SNPs have the potential to serve as risk-stratification markers in the general population, they can aid the early screening and treatment of precancerous lesions in high-risk groups, reducing the incidence of gastric cancer and avoiding unnecessary treatment.

CONCLUSION

It has been established that the expression of COX-2 in gastric cancer cells is induced by various pathways, including H. pylori infection, and that COX-2 overexpression results in the generation of an inflammatory microenvironment that promotes the occurrence of gastric carcinomas. The polymorphisms rs689466G>A, rs20417G>C, rs3218625G>A, rs5275T>C, and rs689470T>C in COX-2 confer a higher susceptibility to gastric cancer. NSAIDs can reduce the risk of digestive system malignant tumors. In addition, the combined regulation of the COX-2/PGE2 and TLR/MyD88 signaling pathways may be an effective strategy to prevent or treat the occurrence and development of gastrointestinal tumors. However, these treatments may increase the incidence of cardiovascular diseases. The above results encourage further functional research to find more accurate individualized prevention strategies and better therapies for gastric cancer.
Does Social Capital Affect Voter Turnout? Evidence from Italy

In this paper we develop a new composite indicator, named Social Catalyst, able to account for the complex and multifaceted nature of social capital in a unitary measure. We use our indicator, as well as its components, to explore the relation between social capital and electoral participation in the parliamentary elections in Italy from 1994 to 2008, addressing the potential endogeneity bias. Our findings show that (i) the Social Catalyst positively and significantly affects voter turnout in both Chambers; (ii) among the different dimensions of social capital, social norms and associational networks play a prominent role in the Italian regional context.

Introduction

Does community activity lead to collectively virtuous behavior and improve the quality of democracy? Since the second half of the last century social capital has attracted increasing attention as a relevant factor underlying voting in democracies. A large body of literature has recognized that civic involvement, social connections, and the sharing of common goals may play a crucial role in explaining differences in political participation. People who interact socially represent an important channel for transmitting norms of civic and political participation and for recruiting other people into these activities (see, among others, Almond and Verba 1963; Putnam 1993; Verba et al. 1995). Integration into formal and informal organizations helps the members of the organizations to develop communication and cooperation skills; it also increases trust in people as well as in the political system. Trust, and in particular political trust, is considered an important prerequisite for an active and vigilant citizenry (Almond and Verba 1963; Easton 1965; Gronlund and Setala 2007). Empirical evidence, mainly based on individual data or experiments performed on survey data in different countries, is mixed. Some papers find that social capital positively affects voter mobilization by strengthening co-operative behavior or increasing the flow of information (e.g. Putnam 1993; Henn et al. 2007; Putnam 2000; Gerber et al. 2008). Others show that it discourages electoral participation by exposing citizens to conflicting views or providing them with an alternative channel to achieve personal goals and satisfaction (e.g. Mutz 2002; Aktinson and Fowler 2014). Overall, these mixed findings are the result of several issues: the difficulty of adequately and unanimously defining a phenomenon which encompasses different dimensions ranging from social networks to trust and civic norms; the subsequent complexity of translating these multifaceted definitions into a 'good' measure; and the methodological problems of omitted variables and endogeneity that may affect the electoral participation-social capital nexus. For all these motives, the empirical evidence on such a nexus is ambivalent. Moreover, if we consider that several governments and international organizations have undertaken policies increasing community activity as a way to promote (in developing countries) and strengthen (in advanced countries) democracy (see, on this point, Krishna 2007), further investigations are useful. This paper focuses on Italy, an exemplary case study in the literature on social capital (among others, Putnam 1993; Guiso et al. 2004; de Blasio and Nuzzo 2010; Crescenzi et al. 2013; Guiso et al. 2016).
Using aggregate data, we aim at verifying whether the regional endowment of social capital is a good predictor of electoral participation in 5 national parliamentary elections from 1994 to 2008. Our contribution to the literature is twofold. As we mentioned before, a key issue in the empirical analyses of social capital is its measurement. In this respect, we develop a new composite indicator, named Social Catalyst, able to account for the complex and multifaceted nature of social capital in a unitary measure that captures the 'compositional effect power' of the different dimensions recognized by the literature. The methodology, which we test using Italian data, is not country-specific and can be generalized and applied to other countries. Then, we use our new measure of social capital, as well as its components, to verify its impact on electoral participation. One problem with testing whether social capital affects voter turnout is possible reverse causation. Several works on social capital have stressed the relation between trust and civic engagement, arguing that people who do not trust others will be less likely to participate in public life. These works are based on a generalized, optimistic worldview, which for the most part does not depend on experiences of participation in civil and political life; they rather reflect a conception of trust as a sentiment linking people who do not know each other (Uslaner 2002; Uslaner and Brown 2005). Other contributions, instead, link civic engagement to trust and trust to participation (Brehm and Rahn 1997; Stolle 1998). According to this view, the experiences of political and civic engagement may shape trust; social activity over time may affect voter turnout and, in turn, participating in politics may increase social capital. Another concern is that social capital might be related to other unobservable factors that contribute to explain the variation in turnout across regions. We take into account all these problems and handle them through a robust identification strategy based on a historical instrument, the number of free city-states in each region. Our results show that the nexus between social capital and turnout is positive and strongly significant in both Chambers and in all the specifications. Besides the 'bundle effect' identified by the Social Catalyst, we also focus on the channels through which social capital affects voting participation in Italy and find that social norms and civicness play a relevant and positive role, although with a different intensity. The paper is organized as follows: Section 2 analyzes the dynamics of the national parliamentary elections in our sample and discusses the methodology we use to build the Social Catalyst. Section 3 presents the model and discusses the results. An additional analysis is provided in Sect. 4. Section 5 concludes.

Why Italy?

As mentioned above, our analysis verifies the voter turnout-social capital nexus focusing on 18 Italian regions from 1994 to 2008. We capture various aspects of civic capital in a unitary measure and disentangle the weight of each variable entering the indicator, so that we are able to separately discuss both the effects of the measure and of its individual components. Since the pioneering research by Almond and Verba (1963), which focuses on voluntary organizations as "the most important foundations" of stable democracies, Italy has been considered an interesting case study.
Almond and Verba's results indeed portray the Italian political culture as characterized by alienation, social isolation and horizontal and vertical distrust. In other words, Italian citizens are depicted as uninformed and less interested in politics and civic affairs compared to their counterparts in other countries like Germany, Great Britain, Mexico and the USA. Putnam's subsequent seminal work (1993) relates the variation of social capital in the Italian regions, rooted in different historical experiences, to their different institutional performance. More recently, Tabellini (2010) and Guiso et al. (2016) focus on the long-term effects of social capital in Italian municipalities. Italy still deserves attention today. The Eurobarometer and World Value Surveys report that the country is characterized by a low level of trust among people and towards politics and institutions (Cartocci 2011). Moreover, the divide between the Northern and the Southern Italian regions continues to embody a paradigm of within-country differences. Indeed, as shown in Figs. 1 and 2 (and also highlighted in previous research by Buonanno et al. 2009; Barone and de Blasio 2010), turnout as well as social capital are heterogeneously distributed throughout the regions despite common institutions, school systems and religious and ethnic identities. We concentrate our analysis on the period 1994-2008. In the mid-nineties, indeed, a decentralization process started in Italy, which ended in 2001 with the creation of a federalist architecture that gave autonomy to the regions and greater importance in terms of political decisions and the provision of public policies (Ambrosiano et al. 2010). Moreover, until the end of the 2000s, political participation followed traditional channels of expression, which only later changed radically with the rapid diffusion of the Internet (Istat 2018). Also, the traditional form of social capital has been weakened by the recent phenomenon of social networks, which have acted as a substitute for the time people devote to voluntary activities (Prior 2005; Campante et al. 2018).

Voter Turnout in Italy

Our dependent variable is regional voter turnout in national parliamentary elections. According to the Italian Constitution, the Parliament consists of two Houses, the Chamber of Deputies and the Senate, which share the legislative power. The Chamber represents national interests, while the Senate represents the regional ones. Citizens who are 18 and older have the right to vote for the Chamber of Deputies; citizens who are 25 and older for the Senate. It is worth clarifying that voter turnout in referenda is excluded from the dependent variable because it has been largely used to proxy social capital (among others, Putnam 1993; Guiso et al. 2004, 2008). Indeed, while turnout in referenda is itself a civic value and is unlikely to be driven by pure individual interests, voting in political elections could be subject to rent-seeking or patronage, resulting in the erosion of social capital. The economic literature has employed different measures of voter turnout depending on the statistics available (Geys 2006). We employ the share of the population which cast its vote (Turnout), calculated in each region as the ratio between the number of voters and the population of voting age (i.e., the 'age eligible' population) in parliamentary elections. Data come from the Historical Archive of the Italian Ministry of Interior.
Figures 1 and 2 show the dynamics of the regional turnout at the Chamber and Senate elections, respectively. The turnout varies between 65% and 95%. Its trend is decreasing in all the regions of our sample except in four Southern ones (Basilicata, Calabria, Molise and Sicily). The latter are characterized by a very low electoral participation, which grew between 1996 and 2006 and decreased again in 2008. Overall, the figures display a general stability of turnout along the time and space dimensions and a higher turnout in the Northern regions. These stylized facts point to a persistent phenomenon of spatial polarization.

The Social Catalyst

One of the main challenges of the paper is to build a new indicator of social capital, named Social Catalyst, which has two major characteristics: (1) it is able to capture in a unitary measure the multiple facets of social capital identified in the literature; (2) it can be easily generalized to different contexts. As already highlighted, social capital is a complex phenomenon encompassing various dimensions: social norms, shared community values, trust among people and towards institutions, social networks, memberships in associations, and civic engagement, which foster cooperation and collective actions for mutual benefits. Some scholars have mainly focused on trust (see, among others, Fukuyama 1995; Bowles and Gintis 2002). Others have emphasized the civicness dimension of social capital, intended as common interests and shared values of a community (like Coleman 1988; Putnam 1993), or the benefits that access to networks and social connections generates for individuals (like Bourdieu 1986; Burt 2000; Glaeser et al. 2002). Several contributions have highlighted the general relevance of culture and social norms (see, among others, Putnam 1993; Bertrand et al. 2000). Empirical studies on social capital suffer from a lack of uniformity with respect to the approaches and the related indicators. In this paper we share the conceptualization offered by Putnam (1993) and subsequently revisited by Guiso et al. (2016). Indeed, Putnam (1993) describes social capital as a collective resource referring to social networks, norms of reciprocity and trust that result from connections among individuals, while Guiso et al. (2016) define social capital as 'civic capital', i.e. a set of collective values and beliefs that facilitate cooperation among the members of a community. In this regard, social capital is closely related to 'civic virtue'. Therefore, our indicator points towards a measure of social capital generated by a combination of networks/cooperation, social norms and trust as interlinked dimensions which all together depict the heritage of a community, consequently contributing to form a unitary concept. These dimensions are captured individually through several variables that are not mutually exclusive, although each of them has its own characteristics and may have an individual impact on the dynamics of voter turnout. Under this perspective, the individual components of our social capital indicator are important not only per se but because they contribute to form a bundle that can affect voter turnout differently from each component once separately considered. We name this effect the 'compositional effect power' and measure it by employing our composite synthetic indicator.
Following Goletsis and Chletsos (2011) and Pontarollo and Serpieri (2020), we construct the Social Catalyst through a normalization and weight-elicitation procedure based on Principal Component Analysis (PCA). This procedure allows us to transform multiple dimensions into a set of uncorrelated dimensions and to reduce dimensionality. Our approach consists of two stages: (a) normalization of the data and (b) weight elicitation. Through normalization we remove the different scale of each variable and identify indicators that are positively correlated with the phenomenon of interest. This stage is necessary to ensure that an increase in the normalized indicators corresponds to an increase in the composite indicator. Considering the h-th indicator I for region i, I_hi is transformed to I_hi^std, taking values within the interval [0,1], according to the min-max equation:

I_hi^std = (I_hi − min_i I_hi) / (max_i I_hi − min_i I_hi)   (1)

We employ Principal Component Analysis for weight elicitation. This methodology aggregates sub-indicators that are collinear into new ones, named 'components', and determines the set of weights which explains the largest variation in the original data. The PCA also has the advantage that the largest factor loadings are attributed to the sub-indicators that show the greatest variation across units. In order to retain the maximum of information, we keep the first two components, which cumulatively explain more than 70% of the overall variance of the original data, and use the components selected for the aggregating procedure to guarantee that our variables are not correlated. We estimate the weights as normalized squared loadings, which represent the share of the variance of each component explained by each variable, and use the highest loading per variable, weighted according to the relative explanatory power of each component for the overall variance. We aggregate the indicator through the following weighted additive function:

SC_i = Σ_h w_h I_hi^adj   (2)

where i is the region, w_h is the weight of indicator h and I_hi^adj is the adjusted value of indicator I_h for region i. Table 1 illustrates the individual components of our Social Catalyst. The choice of these components is in line with the literature, which distinguishes forms of generalized and particularistic trust captured by several proxies (see, among others, Uslaner and Brown 2005; de Blasio et al. 2014). The dimension of generalized trust signals the degree of honesty and integrity of a whole community and captures the relevance of general rules and values which dominate the entitlements of personal relationships. We proxy it with the variable Corruption exposure (Istat, various years). We proxy associational networks, as a dimension of particularistic trust, using Blood donors (AVIS, various years) and Associations (Istat, various years). These variables depict, in slightly different ways, cooperation and social interactions. While Associations relates to networks often motivated by expectations about the behavior of other individuals, which create trust mostly among the members, Blood donors captures people's humanitarian propensity to volunteer in favor of others. We measure the dimension of social norms (civicness) using two variables: Newspapers diffusion (ADS Cronos, various years) and TV license taxpayers (RAI-Radio-Televisione Italiana, various years). The variable Newspapers diffusion works as a channel that facilitates the transmission and sharing of information among citizens, strengthening their sense of membership in a community and their social participation.
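A minimal sketch of the two-stage construction, as we read it, is given below (our illustration with hypothetical data, not the authors' code): min-max normalization, PCA-based weights from normalized squared loadings, and additive aggregation. Details such as the treatment of negatively oriented indicators are omitted.

```python
# Minimal sketch of a PCA-based composite indicator (Social-Catalyst-style).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Rows = 18 regions, columns = 5 sub-indicators (hypothetical data standing in
# for newspapers diffusion, TV license taxpayers, corruption exposure,
# blood donors and associations).
X = rng.normal(size=(18, 5))

# Stage (a): min-max normalization to [0, 1], as in Eq. (1).
X_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# Stage (b): PCA weight elicitation, keeping the first two components.
pca = PCA(n_components=2).fit(X_std)
loadings = pca.components_.T                     # shape: (indicators, components)
sq = loadings ** 2
sq = sq / sq.sum(axis=0)                         # normalized squared loadings
expl = pca.explained_variance_ratio_
expl = expl / expl.sum()                         # relative explanatory power
best = sq.argmax(axis=1)                         # highest loading per variable
w = np.array([sq[h, best[h]] * expl[best[h]] for h in range(X.shape[1])])
w = w / w.sum()

# Weighted additive aggregation, as in Eq. (2): SC_i = sum_h w_h * I_hi.
social_catalyst = X_std @ w
print(np.round(w, 3), np.round(social_catalyst[:3], 3))
```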
It is worth noting that during our time span most of the news in Italy was still diffused by means of printed newspapers. The circulation of newspapers was in fact quite stable from 1995 to 2006, while it has been steadily decreasing since 2007; at the same time, from 2001 to 2010 newspaper readership grew steadily, reaching almost 45% of the adult population in 2010, a trend also explained by the growing diffusion of access to newspapers' web sites (ASIG 2011). TV license taxpayers captures citizens' tax morale and civic behavior with respect to a tax payment which is very weakly enforced: during our sample period the television license tax was easy to evade and the fines for a household were low relative to the cost (up to 516 Euros plus a mandatory 5-year license purchase) (Bracco et al. 2015). We consider this variable over three times the population, since a representative family in Italy includes three people. The choice of the proxies we employ in this paper is driven by the specificity of the Italian case, especially with respect to variables like Corruption exposure and TV license taxpayers, which are suited to capture phenomena of social and tax morality. Indeed, corruption and tax morale are fundamental problems in Italy and have been - and still are - at the forefront of the political and economic debate (Barone et al. 2012; Filippin et al. 2013; Fiorino and Galli 2013). However, it is worth noting that such a choice does not undermine the potential for generalization of our methodology, which can be tested with alternative measures of the Social Catalyst components. The identified components account for approximately 82.5% of total variance. Variable loadings and weights are in Table 2. The weight associated with Newspapers diffusion is 0.37, while the weights associated with Corruption exposure and TV taxpayers are both around 0.23. The weight of Blood donors is equal to 0.14, and that of Associations to 0.01. This means that, in the construction of the indicator, newspapers diffusion is the most significant component, followed by corruption exposure and TV taxpayers with equal significance (see Freudenberg 2003: 10); according to Nardo et al. (2008), in additive aggregations the weights have the meaning of substitution rates (see also Becker et al. 2015; for an overview see Decancq and Lugo 2013). Figure 3 illustrates the distribution of the average values of the Social Catalyst by quintile. A clear geographical pattern appears. The Northern regions show a higher social capital than the Southern ones, confirming previous research on the Italian North-South divide (Bigoni et al. 2016; Cartocci and Vanelli 2015). Finally, Figures 4 and 5 graphically show the correlation between Social Catalyst and turnout in the Chamber of Deputies and in the Senate, respectively. In both cases we observe a clear-cut positive correlation: in areas of the country where social capital is higher, turnout also is higher. Moreover, no outliers and leverage points are detectable in the graphs. The Empirical Model and Results Our empirical model is the following: Turnout_it = α + β SocialCatalyst_it + δ X_it + T_t + ε_it (3) where i indexes the regions, of which there are 18, t is the electoral year, namely 1994, 1996, 2001, 2006 and 2008, T is a time dummy, α, β and δ are parameters to be estimated, and ε_it is the i.i.d. error term. Turnout is alternatively Senate or Chamber turnout. A schematic sketch of how the Social Catalyst of Sect. 2.3 is assembled from the raw indicators and enters specification (3) is given below, before we turn to the control variables in X.
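The construction described in Sect. 2.3 and specification (3) can be illustrated with the following minimal sketch. It is not the authors' code: the data layout, the column names, the synthetic numbers, the use of scikit-learn and statsmodels, and the exact way the highest squared loadings are combined with each component's explained variance are our own illustrative assumptions about the procedure described in the text.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
import statsmodels.formula.api as smf

SUB_INDICATORS = ["newspapers", "tv_taxpayers", "corruption_exposure",
                  "blood_donors", "associations"]

# Purely synthetic long-format panel: 18 regions x 5 election years.
rng = np.random.default_rng(0)
df = pd.DataFrame([{"region": "R%02d" % i, "year": y,
                    **{c: rng.uniform() for c in SUB_INDICATORS},
                    "pop_density": rng.uniform(), "gini": rng.uniform(),
                    "school": rng.uniform(), "winning_margin": rng.uniform(),
                    "turnout": rng.uniform(65, 95)}
                   for i in range(18) for y in (1994, 1996, 2001, 2006, 2008)])

def minmax_normalize(data, cols):
    """Rescale every sub-indicator to [0, 1], so that 'more' means more social capital."""
    out = data.copy()
    for c in cols:
        out[c] = (data[c] - data[c].min()) / (data[c].max() - data[c].min())
    return out

def pca_weights(data, cols, n_components=2):
    """Weights from normalized squared loadings of the first two components,
    each scaled by the component's share of explained variance."""
    pca = PCA(n_components=n_components).fit(data[cols].to_numpy())
    load2 = pca.components_ ** 2
    load2 = load2 / load2.sum(axis=1, keepdims=True)      # normalized squared loadings
    share = pca.explained_variance_ratio_ / pca.explained_variance_ratio_.sum()
    best = load2.argmax(axis=0)                           # highest loading per variable
    w = np.array([load2[best[j], j] * share[best[j]] for j in range(len(cols))])
    return pd.Series(w / w.sum(), index=cols)             # weights sum to one

norm = minmax_normalize(df, SUB_INDICATORS)
w = pca_weights(norm, SUB_INDICATORS)
# Weighted additive aggregation of the normalized sub-indicators: the Social Catalyst.
norm["social_catalyst"] = norm[SUB_INDICATORS].mul(w, axis=1).sum(axis=1)

# Pooled version of specification (3): turnout on the Social Catalyst,
# the controls and election-year dummies.
fit = smf.ols("turnout ~ social_catalyst + pop_density + gini + school"
              " + winning_margin + C(year)", data=norm).fit()
print(fit.params["social_catalyst"])

With real data, the random-effects and IV variants discussed below would replace the simple pooled OLS call, but the indicator construction itself would be unchanged.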
Social capital is proxied by the Social Catalyst, which is our variable of interest, and X is a vector accounting for the following demographic, economic and institutional control variables (descriptive statistics are in Table A1 and the correlation matrix in Table A2): - Population density, measured as the log of population over regional area in square kilometers. Data come from Istat. The literature is not unanimous about the effect of population density on turnout. On the one hand, urbanization may generate a weakening of interpersonal relations and social norms, thereby reducing social pressure to vote (see, among others, Geys 2006). On the other, the neighborhood context may play a role in shaping voters' participation in elections, as living in close proximity, belonging to the same social networks and interacting frequently can affect people's patterns of political behavior (see, among others, Fieldhouse and Cutts 2012). - Income inequality, measured by the Gini index (based on data from the Survey of Household Income and Wealth conducted by the Bank of Italy). With respect to this issue, the literature is ambivalent. Some studies argue that a greater income gap intensifies social conflict over redistribution, increasing expected gains and losses and therefore voter turnout (Meltzer and Richard 1981). Others highlight that the poor respond to unequal redistribution by withdrawing from the political process (see, for example, Goodin and Dryzek 1980). - School attainment, measured as the share of population enrolled in high school, is associated with greater electoral participation (Blais 2000). Data come from CRENOS and Istat. - Winning margin, computed as the percentage vote gap between the largest party and its closest competitor at the district level, captures the positive effect of political parties' competitiveness on turnout (Matsusaka and Palda 1993; Geys 2006). The source of data is the Italian Ministry of Interior. All the independent variables have been lagged by one year in order to avoid reverse causality issues. As we show in the Results section, we employ a pooled and a random effects model, the latter being the most suitable model according to the Hausman test. Although our results are based on covariates lagged by one year, we also provide IV pooled and IV random effects estimates, which allow us to address simultaneity bias as well as errors in measurement of the Social Catalyst. The IV approach requires the identification of a valid instrument, i.e. an instrument that satisfies the relevance and exclusion restriction conditions. According to these conditions, the instrument should correlate with the key explanatory variable but not with the error term. We instrument the Social Catalyst with the number of free city-states per region, as in Guiso et al. (2016). The choice of this instrument rests on the following. Investigating the factors behind social-capital accumulation, several studies have stressed the role of the long-term evolution of culture and social norms in Italy (Putnam 1993; Bracco et al. 2015; Albanese and de Blasio 2016; Guiso et al. 2016). As a matter of fact, the medieval experience of self-government (Comuni) stimulated social and political institutions that built mutual trust and cooperative norms over the long run, persisting to the present day. Based on this argument, we hypothesize that the number of free city-states in each region correlates with our Social Catalyst. Furthermore, several reasons suggest that the free city-state history does not exert a direct effect on our dependent variable.
Firstly, the long-standing determinants of social capital remove any simultaneity bias caused by local shocks that occurred in more recent history. Indeed, it is hard to believe that these shocks could have influenced both the medieval experience of self-government and the current levels voter turnout. Therefore, we can exclude the existence of any source of simultaneity. Secondly, the exclusion restriction might be violated if some missing permanent characteristics related to people agglomeration or economic conditions of a region drove both the history of the free city-state and the current levels of turnout. However, we directly control in our regressions for the most relevant economic characteristics, and for population density. In light of these motivations, we consider the free city-state a good candidate to instrument our Social Catalyst. Finally, given that the free city-state is a time invariant variable, we estimate a random effect IV model and a pooled IV model including time fixed effects in both cases. The Kleibergen-Paap F statistic rejects the null hypothesis, meaning that our instrument is strong. The first stage regression on Table A3 confirms the relevance of the instrument. However, the Wu-Hausman test on endogeneity shows that the null hypothesis cannot be rejected, raising doubts about the potential endogeneity of the Social Catalyst. The results are reported in Table 3. The signs and significance of the coefficients are confirmed in all models. The positive nexus between social capital and turnout is strongly significant in all the specifications, corroborating Henn et al. (2007) and Gerber et al. (2008) findings, although with aggregate rather than individual level or survey data. No Table 3 Estimates for the Chamber of Deputies and Senate differences occur at both Chamber of Deputies and Senate. Among the control variables identified by the literature on the determinants of turnout, population density is positively correlated with the regional turnout, showing the prevalence of a "socialization effect"; income inequality seems to discourage voters, being negatively correlated; education turns out weakly and positively correlated only in the IV pooled model in the Chamber, and in both the pooled and IV pooled models in the Senate; Winning margin has a null effect in stimulating voter turnout. Bundle Effect vs Individual Effects As emphasized in Sect. 2.3, the Social Catalyst is able to capture the bundle effect on voter turnout created by the individual components of the indicator, which are important not only per se but because of the 'relational or spillover' effects they generate on each other as well as on the measure of social capital. Our estimates show that the bundle effect identified by the Social Catalyst is strongly and positively correlated with voter turnout suggesting that the complex and multifaceted nature of social capital overall influence voting decisions. Table 4 compares the 'bundle' with the 'individual' effect of each component of our measure of social capital on voter participation, once considered separately. Both the social norms (TV taxpayers and Newspapers' diffusion) and associational networks' variables (Associations and Blood donors) are strongly and positively correlated with voter turnout, showing that the particularistic dimensions of social capital is able to promote an active citizenry (see, among others, Gronlund and Setala 2007). 
Instead, the coefficient on Corruption exposure is not significant, suggesting that electoral participation is not affected by the general sense of integrity and legality of the community. While corruption enters social capital as a relevant dimension, it does not affect electoral participation, once taken separately. This result interestingly implies that the influence of corruption on electoral participation is mediated by the strength of social norms as well as by informational environment (see, Chang et al. 2010). In other words, corruption is not relevant per se in shaping voters' behavior; what matters is the 'bundle effect' of social channels, which make voters aware of corruption in political life. Conclusions In this paper we develop a new composite indicator of social capital, the Social Catalyst, which is able to capture the 'compositional effect power' of different dimension of trust by analyzing the weight of each individual component. The methodology, which we test using Italian data, can be generalized and applied to different countries. The Social Catalyst is employed to explore the following questions: Is local social capital a good predictor of the electoral participation in national Parliamentary elections? Among the different dimensions of social capital, which are the drivers of voter turnout? Firstly, we provide empirical evidence that the nexus between social capital and turnout is positive and strongly significant in both Chambers, after testing for the circular causation between social capital and political participation. Secondly, we find that social norms and civicness play a prominent role, although with a different intensity. We can conclude therefore that greater involvement in politics and trust in the electoral process seems to be favored by an increase of community social norms and civic participation. Future studies may consider other methodologies that would allow to empirically test latent variables relationships, enriching the set of indicators for each latent variable and further developing our empirical contribution. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
2021-05-07T00:04:40.326Z
2021-02-28T00:00:00.000
{ "year": 2021, "sha1": "eb6e4c142fe368a1374f57e91762248ecb868bc1", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11205-021-02642-6.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "235fc816183fdf7cd5097a5f84f5c11dd70a5f65", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Economics" ] }
236635081
pes2o/s2orc
v3-fos-license
On the geometry of electrovacuum spaces in higher dimensions A classical question in general relativity is about the classification of regular static black hole solutions of the static Einstein-Maxwell equations (or electrovacuum system). In this paper, we prove some classification results for an electrovacuum system such that the electric potential is a smooth function of the lapse function. In particular, we show that an n-dimensional locally conformally flat extremal electrovacuum space must be in the Majumdar-Papapetrou class. Also, we prove that any three or four dimensional extremal electrovacuum space must be locally conformally flat. Moreover, we prove that an n-dimensional subextremal electrovacuum space with fourth-order divergence free Weyl tensor and zero radial Weyl curvature such that the electric potential is in the Reissner-Nordstr\"om class is locally a warped product manifold with (n-1)-dimensional Einstein fibers. Finally, a three dimensional subextremal electrovacuum space with third-order divergence free Cotton tensor was also classified. Introduction and Main Results Static electrovacuum spacetimes model exterior regions of static configurations of electrically charged stars or black holes (see [7,8,10] and the references therein). Equations of motion for an (n + 1)-dimensional reduced Einstein-Maxwell spacetime are given by where F represents the electromagnetic field and Ric is the Ricci tensor for the metric g. Our main ground is the static space-time ( M n+1 ,ĝ) = M n × f R such that where (M n , g) is an open, connected and oriented Riemannian manifold, and f is a smooth warped function. Considering as electromagnetic field F = dψ ∧ dt, for some smooth function ψ on M from the warped formula (see [7,9,12] and the references therein). The well known electrostatic (or electrovacuum) system is described below. Definition 1.1. Let (M n , g) be an n-dimensional smooth Riemannian manifold with n ≥ 3 and let f, ψ : M → R be smooth functions satisfying where Ric, ∇ 2 , div and ∆ are the Ricci and Hessian tensors, the divergence and the Laplacian operator on the metric g, respectively. Furthermore, f > 0 on M . Moreover, when M n has boundary ∂M, we assume in addition that f −1 (0) = ∂M . We also refer (M n , g, f, ψ) as electrovacuum (or electrostatic) system (or space). The smooth functions f , ψ and the manifold M n are called lapse function, electric potential and spatial factor for the static Einstein-Maxwell spacetime, respectively. We first observe that taking the contraction of the first equation and combining it with the second equation in (1.1), we obtain that the scalar curvature which is denoted by R is given by (1.2) f 2 R = 2|∇ψ| 2 . Second, when ψ is a constant function, then the electrostatic system reduce to the static vacuum Einstein equations, i.e., f Ric = ∇ 2 f and ∆f = 0. (1. 3) These equations characterize the static vacuum Einstein spacetime which was widely explored in the literature. Furthermore, the most important solution for this system is the Schwarzschild solution. This solution represents a static black hole with mass, but without electric charge or magnetic fields. Therefore, we can see that Definition 1.1 generalizes the system (1.3) and we will consider the case where ψ is a constant function as trivial. In 1918, independently, G. Nordström, and H. Reissner found a class of exact solutions to the Einstein equation for the gravitational field of a spherical charged mass (see [17] for a wide-ranging discussion about these solutions). 
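Throughout what follows we refer back to the system (1.1) of Definition 1.1. For the reader's orientation, one common way of writing the electrostatic Einstein-Maxwell equations in this setting is the following; the normalization of ψ varies between references, so the coefficients below should be read as one consistent choice rather than as a quotation of (1.1):

$$ f\,\mathrm{Ric} \;=\; \nabla^{2} f \;+\; \frac{2}{f}\left(\frac{|\nabla\psi|^{2}}{n-1}\,g \;-\; d\psi\otimes d\psi\right), \qquad f\,\Delta f \;=\; \frac{2(n-2)}{n-1}\,|\nabla\psi|^{2}, \qquad \operatorname{div}\!\big(f^{-1}\nabla\psi\big)\;=\;0 . $$

Taking the trace of the first equation and substituting the second reproduces the identity f^2 R = 2|∇ψ|^2 quoted in (1.2), and an affine relation of the form f = 1 ± \sqrt{2(n-2)/(n-1)}\,\psi is compatible with these equations, in line with the extremal case recalled later in this introduction.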
The Reissner-Nordström (RN) electrostatic spacetime is one of the most important solutions for the electrostatic system and it can be thought of as a model for a static black hole or a star with electric charge q and mass m. The RN spacetime is called subextremal, extremal or superextremal depending on whether m^2 > q^2, m^2 = q^2 or m^2 < q^2, respectively. For instance, we have the following RN solution given by the Riemannian manifold M^n = S^{n−1} × (r_+, +∞) with metric tensor g = dr^2 / (1 − 2m r^{2−n} + q^2 r^{2(2−n)}) + r^2 g_{S^{n−1}}, where r represents the radius of the Reissner-Nordström black hole. Here, m^2 ≥ q^2 are constants, and r_+ > (m + √(m^2 − q^2))^{1/(n−2)}. Moreover, the outer horizon for the Reissner-Nordström space-time is located at (m + √(m^2 − q^2))^{1/(n−2)}, which corresponds to the zero set of the lapse function of the RN manifold. The static horizon is defined as the set where the lapse function for a static manifold is identically zero. This set is physically related with the event horizon, the boundary of a black hole. The RN space is locally conformally flat (see [9] for instance). It is well known that the lapse function f and the electric potential ψ of an electrovacuum system asymptotic to Reissner-Nordström of total mass m and charge q, with suitable inner boundary, satisfy the functional relationship (1.4) (see [7, Equation A.1] and [12, Lemma 3]). Another important electrovacuum solution is the Majumdar-Papapetrou one (see [9,10,14]), which is related to an extremal RN solution. The Majumdar-Papapetrou (MP) solution to Einstein-Maxwell theory represents the static equilibrium of an arbitrary number of charged black holes whose mutual electric repulsion exactly balances their gravitational attraction. A spacetime will be called a standard MP spacetime if the metric tensor ĝ is of the standard MP form (1.5), for some positive constants m_i, with electric potential given by (1.6). The classification problem of an electrovacuum spacetime can be stated as follows. Suppose that a suitable relation between the mass and the charges q_i holds, where q_i is the charge of the i-th connected degenerate component of the electrically charged black hole. Then the black hole is either a RN black hole or a MP black hole. There are some important and recent results in the literature concerning the classification of electrovacuum spaces (see for instance [8,12,14] and their references). The most common assumption in the analysis and classification of an electrovacuum space is that such a space is asymptotically flat (see [7,9,12,14]). It is well known that, using the positive mass theorem, one can conclude that the space is conformally flat. One can then use classical calculations to prove that the solution of the electrovacuum system is either MP or RN (we refer the reader to the main steps in the proof of [9, Theorem 3.6]). Those asymptotic conditions guarantee information about the metric, the lapse function and the electric potential at infinity. Even though this condition is restrictive in the topological sense, it is physically reasonable in the study of isolated gravitational systems. In differential geometry, it is more usual to impose conditions on the curvature. However, a curvature condition alone seems insufficient for the classification of an electrovacuum space, since it carries no information about the electric potential and the lapse function.
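Writing the radial coefficient of the RN metric above as f^2 = 1 − 2m r^{2−n} + q^2 r^{2(2−n)} (the square of the lapse), the horizon radii quoted in the text are simply the zeros of f^2. With x = r^{2−n} this is the quadratic

$$ q^{2}x^{2} - 2m\,x + 1 = 0, \qquad x_{\pm} = \frac{m \pm \sqrt{m^{2}-q^{2}}}{q^{2}}, $$

and since r^{n−2} = 1/x and (m + \sqrt{m^2 - q^2})(m − \sqrt{m^2 − q^2}) = q^2, the two horizons sit at r^{n−2} = m ± \sqrt{m^2 − q^2}. The outer one, r_+ = (m + \sqrt{m^2 − q^2})^{1/(n−2)}, agrees with the expression quoted above and is real precisely when m^2 ≥ q^2, i.e. in the subextremal and extremal cases.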
We recall that an asymptotically flat n-dimensional static Einstein-Maxwell space is extremal (i.e., m = |q|) if, and only if, the magnetic field is zero and f = 1± 2(n − 2)/(n − 1)ψ, admitting f = 0 at ∂M (see Lemma 1 in [12]). Also, in [12,Lemma 3], the authors proved how the some kinds of electrovacuum solutions combined with an equation relating ψ and f have implications on the non-existence of magnetic fields. It is worth to say that an extremal RN space-time contains an unique photon sphere, on which light can get trapped and it has the largest possible ratio of charged to mass (see [7]). The theory of extremal black holes is very important in physics and has very interesting properties. For instance, extremal charge black holes may be quantum mechanically stable, which is consistent with the ideas of cosmic censorship (see [11]). There is also an important type of electrovacuum solution in supergravity theory (see [14]). Moreover, there is evidence that this type of black hole is important to the understanding of the no hair theorem (see [3]). The RN and MP solutions for the electrovacuum system suggest that there exists a class of solutions where the electric potential is a smooth function of the lapse function, i.e., ψ = ψ(f ). Our first result proves that there is a certain rigidity in this class of solutions. Theorem 1.2. Let (M n , g, f, ψ), n ≥ 3, be a complete electrovacuum space such that ψ = ψ(f ). Then, the electric potential (locally) is either where σ, β ∈ R. Moreover, σ = 0 if and only if ψ(f ) is an affine function of f . Remark 1.3. It is worth to highlight that the completeness hypothesis over (M, g) is just to ensure that the critical set {∇f = 0} is not dense on M . So here, completeness can be replaced by assuming that the critical set is not dense. It is interesting to remark how the constants σ and β given by (1.7) are related with the mass m and electric charge q for a RN solution which satisfies (1.4). A straightforward computation shows us that So, we can say that a solution satisfying (1.7) is called subextremal, extremal or superextremal depending on if σ < 0, σ = 0 or σ > 0, respectively. The above theorem shows us that an electrovacuum system such that ψ = ψ(f ) has basically two possible solutions and these solutions are closely related with the RN and MP solutions, respectively. It is also important to highlight that with the conformal metric g = f 2/(n−2) g the inverse of the electric potential 1 ψ(f ) given by (1.8) is harmonic in the metric g. Moreover, (M n , g) is Ricci-flat (see Lemma 3). Then, considering asymptotic conditions, by the positive mass theorem, (M n , g) is isometric to the Euclidean space. Of course, in three dimensional case this is a direct consequence of (M n , g) to be a Ricci-flat. This fact is important for the classification of extremal electrovacuum solutions. As pointed out in [12,Remark 1] and [14] any suitably regular asymptotically flat black hole solution in the Majumdar-Papapetrou class must has a space isometric to Euclidean space (minus a point for each horizon) and a harmonic function of the form (1.6). In this case the spacetime is a Majumdar-Papapetrou multi-centred black hole solution (see [14]). We need to emphasize that we are not considering any asymptotic conditions, so the positive mass theorem is not necessarily valid here. In the next result we prove that an extremal eletrovacuum space under certain hypothesis necessarily must be in the Majumdar-Papapetrou class. Theorem 1.4. 
Let (M n , g, f, ψ), n ≥ 3, be an extremal electrovacuum space satisfying (1.8). Then, the Schouten tensor for the metric g is Codazzi. If (M n , g) is locally conformally flat, then any extremal solution must be in the Majumdar-Papapetrou class, i.e., (M n , f 2/(n−2) g) is locally isometric to R n . Moreover, any three or four dimensional complete extremal electrovacuum space (M, g) must be locally conformally flat. We observe that if the Schouten tensor is Codazzi then in dimension three (M 3 , g) is locally conformally flat metric, already in dimension n > 3, then the Equation 2.4 implies in harmonic Weyl curvature. Codazzi tensors in Riemannian manifolds are important by themselves (see [1,Proposition 16.11]). In addition, if an extremal electrovacuum solution is locally conformally flat, then is possible to use classical calculations to prove that it is a MP solution (see [9,Proposition 3.4]). Moreover, the extremal case was recently considered in [14], where the author proved that the only asymptotically flat spacetimes with a suitably regular event horizon, in a generalised Majumdar-Papapetrou class of solutions to higher-dimensional Einstein-Maxwell theory are the standard multi-black holes (1.5). As a consequence of Theorem 1.4 we need to highlight the following result. Corollary 1.5. Any five dimensional extremal electrovacuum spacetime must be in the Majumdar-Papapetrou class. Now, it remains to consider the electrovacuum solutions in the RN class, i.e., such that the electric potential is given by (1.7). Here, we are considering divergence conditions on Weyl (W ) and Cotton (C) tensors for a static Einstein-Maxwell spacetime instead of the traditional asymptotic conditions. Divergence conditions on W have been recently explored in several works (see [5,6,13,15] and the references therein). When the divergence of the Weyl tensor is identically zero, i.e., divW = 0, we say that the manifold has a harmonic Weyl curvature. It is well know that if the scalar curvature is constant, then harmonic Weyl curvature implies in harmonic curvature. This condition is equivalent to zero Cotton tensor in dimension more than 3 (see 2.4). In what follows, we will consider that a Riemannian manifold (M n , g) has zero radial Weyl curvature if where ∇f is the gradient for a smooth function f : M → R. This condition was used in [5] and [13] in the study of Einstein-type manifolds, see more details in the references therein. Now, we are ready to announce our next classification result. Theorem 1.6. Let (M n , g, f, ψ), n ≥ 3, be a complete subextremal electrovacuum space with harmonic Weyl curvature and zero radial Weyl curvature such that ψ is in the Reissner-Nordström class, i.e., such that ψ is given by (1.7). Then, around any regular point of f , the manifold is locally a warped product with (n − 1)-dimensional Einstein fibers. Remark 1.7. In the three dimensional case, it is important to notice that the Weyl tensor W is identically zero. So, the zero radial Weyl curvature condition is trivial. Moreover, the harmonic Weyl curvature condition must be replaced by locally conformally flat metric, i.e., C = 0. Remark 1.8. The subextremal condition required in the above theorem is just to avoid any major technical problem and can be relaxed by considering that is not dense at M . We are also considering that {f = 0} is not dense. In this paper, we will provide several results about divergence-free conditions in an electrovacuum space such that ψ = ψ(f ). 
Our goal is to provide a classification for an electrovacuum space having a fourth-order divergence-free Weyl tensor, i.e., div 4 W = 0 (the space being compact or not). In the three dimensional case the discussion reduces to consider the Cotton tensor free from divergence, i.e., div 3 C = 0. We will show that this higher-order divergence conditions can be reduced to harmonic Weyl curvature condition (or locally conformally flat curvature in the three dimensional case), under some additional hypothesis. The idea is to prove that the higher-order divergence-free conditions can be reduced to harmonic Weyl curvature (or zero Cotton tensor for n = 3) using an appropriate divergence formula combined with some cut-off function and then, by integration of such formula, concluding that the Cotton tensor is identically zero, which is a similar strategy used by [2,5,13,15]. Next, as a consequence of Theorem 1.6 (see also Corollary 4.8), we get the following result. Corollary 1.9. Let (M n , g, f, ψ), n > 3, be a complete subextremal electrovacuum space with fourth-order divergence free Weyl curvature and zero radial Weyl curvature such that the electric potential ψ is in the Reissner-Nordström class (i.e., satisfying Equation (1.7)). Around any regular point of f , if f is a proper function, then the manifold is locally a warped product with (n − 1)-dimensional Einstein fibers. In the three dimensional case the computations follow closely the same strategy of the above result and also we provide some interesting results reducing the order of divergence for the Cotton tensor. Let us show the most general case below (see partial results in the three dimensional compact space in Section 4.3). In this way we obtain the following result. Corollary 1.10. Let (M 3 , g, f, ψ) be a complete subextremal electrovacuum space with third-order divergence free Cotton tensor such that ψ is in the Reissner-Nordström class. Around any regular point of f , if f is a proper function, then the manifold is locally an Einstein manifold, i.e., (M 3 , g) is locally isometric to either The paper is organized as follows. Section 2 introduces terminology used throughout this paper. In Section 3, we present some structural lemmas that will be used in the proof of the main results. Finally, in Section 4 we prove the main results. Background In this section, we fix our notation and recall some basic facts and useful lemmas. In particular, we need to remember some special tensors in the study of curvature for a Riemannian manifold (M n , g), n ≥ 3. The first one is the Weyl tensor W which is defined by where R ijkl denotes the Riemann curvature tensor. The second one, is the Cotton tensor given by And finally, considering n ≥ 4, the Bach tensor is defined by We observe that the Weyl tensor has the same symmetries of the curvature tensor, that is Moreover, we note that the Bach, the Cotton and the Weyl tensors are totally trace-free in any two indices (see [4] for instance), i.e., When the dimension of M is n = 3, then the Weyl tensor W ijkl vanishes identically and the Cotton tensor C ijk = 0 if and only if (M 3 , g ij ) is locally conformally flat; this fact holds if and only if W ijkl = 0, considering dimension n ≥ 4. Thus, for n ≥ 4 we have some well known relations with these tensors and their derivatives (see [4,5,13]). Involving the Weyl and Cotton tensors a straightforward computation yields to So, if the Cotton tensor vanishes, then the Weyl tensor is harmonic. 
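For the reader's convenience we recall the standard identity behind the last remark; the overall sign depends on the curvature conventions and on the normalization chosen for the Cotton tensor in (2.2), so the display below is one common choice rather than the authors' exact normalization:

$$ C_{ijk} \;=\; \nabla_{i}R_{jk} - \nabla_{j}R_{ik} - \frac{1}{2(n-1)}\big(\nabla_{i}R\,g_{jk} - \nabla_{j}R\,g_{ik}\big), \qquad \nabla^{l}W_{ijkl} \;=\; -\,\frac{n-3}{n-2}\,C_{ijk} \quad (n \ge 4). $$

In particular the vanishing of the Cotton tensor forces div W = 0, i.e. harmonic Weyl curvature, and conversely, for n ≥ 4, harmonic Weyl curvature forces C = 0, which is the equivalence used repeatedly in the sequel.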
Now, for n ≥ 3 combining (2.3) and (2.4) we can rewritten the Bach tensor as In particular, (see [4]), in dimension n = 3, since the Weyl tensor identically zero, we can conclude that This equation leads us to the following fact: Is convenient to express the divergence for the Bach tensor, which is given by Moreover, it is easy to see that From the contracted second Bianchi identity and from commutation formulas for any Riemannian manifold we can infer that Moreover, remember that (n − 2)C ijk = ∇ i S jk − ∇ j S ik , where S stands for the Schouten tensor of g, i.e., Structural lemmas Next, motivated by ideas in [2,5,13,15] we obtain some structural lemmas, which are fundamental to proof our results. Note that in a local coordinates system, using (1.2) we can rewritten (1.1) as Lemma 1. Let (M n , g, f, ψ), n ≥ 3, be an electrovacuum system. Then, Proof. We take the derivative of (3.1) to obtain Subtracting (3.4) from (3.5) and using that the Hessian operator is symmetric, we can deduce that It is well known that in any Riemannian manifold we can relate the Riemannian curvature tensor with a smooth function by using the Ricci identity Then, replacing the Ricci identity (3.6) and the Cotton tensor (2.2), we infer that Now, using the Weyl formula (2.1), we have So, the proof is finished. In the sequel, we define the covariant 3-tensor V ijk by The tensor V ijk was defined similarly to D ijk in [4]. Note that from a straightforward computation, we observe that the tensor V has the same symmetries of the Cotton tensor C, i.e., V ijk = −V jik and V ijk + V jki + V kij = 0. This 3-tensor has a fundamental importance in what follows. From Lemma 1, we have In particular, if we suppose that ψ = ψ(f ) in the Lemma 1, we obtain the following result. On the other hand, by the right conformal change of the metric we get our next lemma. Proof. We consider the conformal change of the metric g = f 2 n−2 g. From [5,Appendix] the Cotton tensor for metric g is given by Moreover, for g (see [1, page 58]), we obtain where in the last equation we used (3.2). Considering ψ = ψ(f ), from 1.1, we get By hypothesis ψ = ψ(f ) satisfies (1.8), then Consequently, from (3.15) and (3.16), we conclude that (M n , g) is Ricci-flat. In this case, the Schouten tensor for g is given by This shows that S is Codazzi, because S = 0, i.e., ( ∇ X S)(Y ) = ( ∇ Y S)(X) for all X, Y ∈ T M . Therefore, the Cotton tensor for the metric g is identicaly zero. So, from (3.13) we have Thus, we conclude our proof. Now, our goal is to obtain an useful formula for the norm of the Cotton tensor involving the divergence of the tensor V . To prove this, we need to show several lemmas. Lemma 4. Let (M n , g, f, ψ), n ≥ 4, be an electrovacuum system. Then, Proof. In fact, from (2.5) and (3.8), we can deduce that Now, using (3.1), we obtain Since the Weyl tensor is trace-free, we have From (2.4), we get the result. Proceeding, we can use the previous lemma to obtain the following result. Lemma 5. Let (M n , g, f, ψ), n ≥ 4, be an electrovacuum system. Then, Proof. Taking the divergence of (3.17) and using (2.9), we infer that Since the Hessian is symmetric, then renaming indices and using the symmetries of the Weyl tensor, we deduce Analogously, we have the same expression for the lapse function f , i.e., Combining (3.19) and (3.20), we obtain Since the Cotton and Weyl tensor are trace-free, using the symmetries of the Weyl tensor, (2.4) and (3.1) we get Now, we need to remember some facts. 
Firstly, B ij = B ji , R ij = R ji and the Cotton tensor is skew-symmetric, then we have an analogous relation to (3.20), i.e, (3.22) C ikj R ik = 0. Secondly, using (2.8), we infer C jik = C jki + C kij , this implies that C jik R ik = C jki R ik . Thus, from (2.7) and using these observations after renamed the indices, we obtain Finally, using the above equation in (3.21) the result holds. Lemma 6. Let (M n , g, f, ψ), n ≥ 4, be an electrovacuum system. Then, Proof. Taking the divergence of (3.18), we have Note that from the symmetries of the Cotton tensor and renaming indices, we get Then, combining (3.23) and (3.24), we can infer that From (2.2) and using that the Cotton tensor is trace-free, we obtain the result. Proof of the main results In this section, we prove our main results. Classification Results. Now, we are ready to present the proofs of Theorem 1.2, Theorem 1.4 and Theorem 1.6 which are the main classification results in this present work. They will be stated again here for the sake of the reader's convenience. We start with Theorem 1.2 which shows us how related the electric potential and the lapse function can be. This result was inspired by [12] and [16]. Theorem 1.2). Let (M n , g, f, ψ), n ≥ 3, be a complete electrovacuum space such that ψ = ψ(f ). Then, the electric potential (locally) is either where σ, β ∈ R. Moreover, σ = 0 if and only if ψ(f ) is an affine function of f . From (3.2) and (4.1), we have Combining the last equations with (3.3) and (4.1), we geẗ Notice that there is no open subset Ω of M where {∇f = 0} is dense. In fact, if f is constant in Ω, since M is complete, then we have that f is analytic, which implies f is constant everywhere. By a straightforward computation, we arrive atḣ where h =ψ f . So, by solving this ODE, we getψ By integration we obtain, either Moreover, from (4.2) we have the following useful identity To finish, we observe that if σ = 0, then from the above equation,ψ(f ) is a constant, this implies that ψ(f ) is an affine function. In the next result we prove that an extremal eletrovacuum space under certain hypothesis necessarily must be in the Majumdar-Papapetrou class. Theorem 1.4). Let (M n , g, f, ψ), n ≥ 3, be an extremal electrovacuum space satisfying (1.8). Then, the Schouten tensor for the metric g is Codazzi. If (M n , g) is locally conformally flat, then any extremal solution must be in the Majumdar-Papapetrou class, i.e., (M n , f 2/(n−2) g) is locally isometric to R n . Moreover, any three or four dimensional complete extremal electrovacuum space (M, g) must be locally conformally flat. Proof. The proof follows from the previous section. In fact, remember that when ψ is an affine function of f , we have the equation (3.16). Then, from (3.9) we conclude that P = Q = U = 0, and so the tensor V ijk is identically zero. Thus, from (3.8) we obtain f C ijk = W ijkl ∇ l f. Immediately, for n = 3 the Cotton tensor is identically zero which means that (M 3 , g, f, ψ) is locally conformally flat. Now, considering n > 3, from the proof of Lemma 3 we obtain that the Ricci tensor, Ric, for the conformal change of the metric g = f 2/(n−2) is identically zero, and so the Cotton tensor C ijk . At the same time, using (3.12), we can infer that Consequently, the Schouten tensor (2.10) is Codazzi, i.e., (∇ X S)(Y ) = (∇ Y S)(X) for all X, Y ∈ T M. Furthermore, since Ric is identically zero, we conclude (M 3 , g) is isometric to R 3 . 
Using again the conformal change of the metric g = f 2/(n−2) g (see [1, page 58]), we have In the last equality we used (3.2) and (3.16). Then, from (3.14), we get Combining these two last identities we obtain Note that the tensor T coincides with the Schouten tensor S given by (2.10). If the Weyl tensor for g is identically zero, then from (2.1) we have Therefore, replacing the above formula in (4.4), we can conclude that We finish the proof considering the four dimensional case (see [4,Lemma 4.3]). First, remember that in any open set of the level set Σ = {f = c}, where c is any regular value for f , and using the local coordinates system (x 1 , x 2 , x 3 , x 4 ) = (f, θ 2 , θ 3 , θ 4 ), we can always express the metric g in the form where g ab (f, θ)dθ a dθ b is the induced metric and (θ 2 , θ 3 , θ 4 ) is any local coordinate system on Σ (see [4,Remark 3.4]). In the following, we use a, b, c to represent indices on the level sets which ranges from 2 to 4, while i, j, k are used to represent indices on M ranging from 1 to 4. Next, as it is well known that ν = −∇f |∇f | is the normal vector field to Σ. Then is easy to see that Consider the referencial {e 1 , e 2 , e 3 , e 4 }, where e 1 is normal and e a are tangent to Σ. Since the Schouten tensor is Codazzi, from (3.8) we have W ijk1 = 0. Hence, we only need to show that The Weyl tensor has all the symmetries of the curvature tensor and is trace-free in any two indices. Thus, Next, we prove Theorem 1.6 about classification's result which was inspired by [13]. Theorem 4.3 (Theorem 1.6). Let (M n , g, f, ψ), n ≥ 3, be a complete subextremal electrovacuum space with harmonic Weyl curvature and zero radial Weyl curvature such that ψ is in the Reissner-Nordström class, i.e., such that ψ is given by (1.7). Then, around any regular point of f , the manifold is locally a warped product with (n − 1)-dimensional Einstein fibers. Proof. We consider an orthonormal frame {e 1 , e 2 , . . . , e n } diagonalizing the Ricci tensor Ric at a regular point p ∈ Σ = f −1 (c), with associated eigenvalues R kk , k = 1, . . . , n, respectively. That is, R ij (p) = R ii δ ij (p). From Lemma 2, we infer where P, Q and U are given by (3.9). Without lost of generalization, consider ∇ i f = 0 and ∇ j f = 0 for all i = j. Observe that Ric(∇f ) = R ii ∇f , i.e., ∇f is an eigenvector for Ric. From (4.5), we obtain that λ = R ii and µ = R jj , j = i, have multiplicity 1 and n − 1, respectively. Moreover, if ∇ i f = 0 for at least two distinct directions, then using (4.5) we have that µ = R 11 = . . . = R nn and we also obtain that ∇f is an eigenvector for Ric. Therefore, in any case we have that ∇f is an eigenvector for Ric. From the above discussion we can take {e 1 = −∇f |∇f | , e 2 , . . . , e n } as an orthonormal frame for Σ diagonalizing the Ricci tensor Ric for the metric g. Now, we note from (3.1) that ∇ a f ; a ∈ {2, . . . , n}. Hence, equation (4.6) gives us |∇f | is a constant in Σ. Thus, we can express the metric g in the form where g ab (f, θ)dθ a dθ b is the induced metric and (θ 2 , . . . , θ n ) is any local coordinate system on Σ. We can find a good overview of the level set structure in [4,13]. Observe that there is no open subset Ω of M n where {∇f = 0} is dense. In fact, if f is constant in Ω, since M n is complete, we have that f is analytic, which implies f is constant everywhere. Thus, we consider df |∇f | such that the metric g in U I can be expressed by Let ∇r = ∂ ∂r , then |∇r| = 1 and ∇f = f ′ (r) ∂ ∂r on U I . 
Note that f ′ (r) does not change sign on U I . Moreover, we have ∇ ∂r ∂r = 0. From (3.1) and the fact that ∇f is an eigenvector of Ric, then the second fundamental formula on Σ is given by where H = H(r), since H is constant in Σ. In fact, contracting the Codazzi equation On the other hand, since R 1a = 0, we conclude that H is constant in Σ. 4.2. Fourth-order divergence free Weyl tensor. In this subsection, our aim is to prove some integral theorems in dimension n ≥ 4 with fourth-order divergence-free Weyl tensor for an electrovacuum space in the RN class. To that end, we use the lemmas in the previous section. In our results we are considering subextremal Riemannian manifolds satisfying the zero radial Weyl curvature. Indeed, the fact that electrovacuum space can not be extremal appears naturally in the first theorem of this subsection. where σ is a non-null constant. Remark 4.5. It is important to point out that the choice of φ in the above theorem should be made in such way that terms like φ(f ) f m , where m = 1, 2 or 3, will be integrable at K. From Theorem 1.2, since M is subextremal (i.e., σ < 0), we have that is, the Cotton tensor is identically zero. Therefore, from (2.4) the result holds. In the last equality we use integration by parts. Now, we take φ(f ) = f 5χ (f ) in the Theorem 4.4 and since div 4 W = 0, we obtain i.e., C = 0 in M s . Taking s → +∞, we obtain that C = 0 on M . Now, we are ready to present the proof of Corollary 1.9 that will be stated again here for the sake of the reader's convenience. Corollary 4.8 (Corollary 1.9). Let (M n , g, f, ψ), n > 3, be a complete subextremal electrovacuum space with fourth-order divergence free Weyl curvature and zero radial Weyl curvature such that the electric potential ψ is in the Reissner-Nordström class (i.e., satisfying Equation (1.7)). Around any regular point of f , if f is a proper function, then the manifold is locally a warped product with (n − 1)-dimensional Einstein fibers. 4.3. Third-order divergence free Cotton tensor. In this subsection, we will return to our results and study them in dimension n = 3. Firstly, it is important to point out that the lemmas 4, 5 and 6 are not valid in dimension n = 3 due equation (2.4), which was used in their demonstrations. However, we can prove another version of them in a convenient way. Other point is the fact that the Theorem 4.4 is not valid in dimension n = 3, but the main issue here is that the Weyl tensor vanishes in dimension three. Nonetheless, the computations are very much similar to the previous results proved in dimension more than 3. We will prove all those results for n = 3 for the sake of completeness of the text. After these considerations, we can proceed with our results. To that end, since the Weyl tensor vanishes identically in dimension n = 3, we can observe that equation (3.8) becomes (4.11) f C ijk = V ijk . Consequently, we have the following lemma. Lemma 7. Let (M 3 , g, f, ψ) be an electrovacuum space. Then, Proof. In fact, from (2.6) and (4.11), we obtain Taking the derivative over i, we have Since n = 3, from (2.7), using (2.8) and (3.22) after renamed the indices, we infer Thus, combing these two last relations the result holds. Lemma 8. Let (M 3 , g, f, ψ) be an electrovacuum space. Then, Proof. Taking the divergence in Lemma 7, we get Using (3.24), we have Now, since the Cotton tensor is trace-free, from (2.2) and renaming the indices, we obtain Therefore, the result holds. Finally, we prove Corollary 1.10, whose statement is as follows. 
Corollary 4.12 (Corollary 1.10). Let (M 3 , g, f, ψ) be a complete subextremal electrovacuum space with third-order divergence free Cotton tensor such that ψ is in the Reissner-Nordström class. Around any regular point of f , if f is a proper function, then the manifold is locally an Einstein manifold, i.e., (M 3 , g) is locally isometric to either R 3 or S 3 . Proof. This result is a consequence of Theorem 1.6 and Theorem 4.11.
2021-08-02T01:16:15.667Z
2021-07-30T00:00:00.000
{ "year": 2021, "sha1": "f7a3e9d0750d3c87540a586530ba2a441a55623f", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "f7a3e9d0750d3c87540a586530ba2a441a55623f", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
118480376
pes2o/s2orc
v3-fos-license
Quantum spin Hall effect induced by electric field in silicene We investigate the transport properties in a zigzag silicene nanoribbon in the presence of an external electric field. The staggered sublattice potential and two kinds of Rashba spin-orbit couplings can be induced by the external electric field due to the buckled structure of the silicene. A bulk gap is opened by the staggered potential and gapless edge states appear in the gap by tuning the two kinds of Rashba spin-orbit couplings properly. Furthermore, the gapless edge states are spin-filtered and are insensitive to the non-magnetic disorder. These results prove that the quantum spin Hall effect can be induced by an external electric field in silicene, which may have certain practical significance in applications for future spintronics device. Recently, the quantum spin Hall effect (QSHE) has attracted significant interests in the fields of condensed matter physics and material science as it constitutes a new phase of matter and has potential applications in spintronics. [1][2][3][4][5] The novel electronic state with a nontrivial topological property and time-reversal invariance has a bulk energy gap separating the valence and conduction bands and a pair of gapless spin-filtered edge states sat the sample boundaries. The QSHE has been first predicted by Kane and Mele in graphene in which the intrinsic spin-orbit coupling opens a band gap at the Dirac points. 1 However, the QSHE can occur in graphene only at unrealistically low temperatures since the intrinsic spin-orbit coupling in graphene is rather weak. [6][7][8] Therefore, it is crucial to search new materials with strong spin-orbit coupling for realizing the QSHE. Recent theories and experiments provide evidences of the QSHE in two-dimensional HgTe-CdTe quantum wells. 2,3 Very recently, a close relative of graphene, a slightly buckled honeycomb lattice of Si atoms called silicene has been synthesized. [9][10][11][12][13] Silicene can be well compatible with current silicon based electronic technology. Many progresses in the study of silicene have been made, both experimentally and theoretically. For example, electronic properties and the giant magnetoresistance in silicene have been reported. 14-17 Moreover, almost every striking property of graphene could be transferred to silicene. 13,18 It has been theoretically shown that the strong intrinsic spin-orbit coupling in silicene may lead to detectable QSHE. 5,[19][20][21][22][23] In this paper, we provide systematic investigations on the band structures and electron transport properties of a) Electronic mail: yanyang@semi.ac.cn silicene in the presence of an external electric field. Silicene consists of a buckled honeycomb lattice of silicon atoms with two sublattices A and B. We take a silicene sheet on the x − y plane, and apply the electric field in z direction. The electric field generates a staggered sublattice potential between silicon atoms at A sites and B sites due to the buckled structure of the silicene. On the other hand, two kinds of Rashba spin-orbit coupling, referring to the nearest and next-nearest neighbor hoppings respectively, can also be tuned by the external electric field. We find that a gap can be opened by the staggered sublattice potential and gapless edge states are induced in the gap by Rashba spin-orbit coupling. We predict that the QSHE can be observed by applying an external electric field in silicene even if the intrinsic spin-orbit coupling in the system is very weak. 
In the tight-binding representation, the silicene sample with an external electric field can be described by the the following Hamiltonian: 20 where c † iα creates an electron with spin polarization α at site i; ij and ij run over all the nearest and nextnearest neighbor hopping sites, respectively. The first term is the nearest-neighbor hopping with the transfer energy t = 1.6eV . The second term is the staggered sublattice potential term, where µ ij = ±1 for the A (B) site and ε i is the potential energy induced by the external electric field. The third and fourth terms, re-spectively, represent the first Rashba spin-orbit coupling associated with the nearest neighbor hopping and the second Rashba spin-orbit coupling associated with the nextnearest neighbor hopping. Both of them are induced by the external electric field. Here σ = (σ x , σ y , σ z ) is the Pauli matrix of spin and d 0 ij = d ij /| d ij | with the vector d ij connecting two sites i and j. The intrinsic SOC term has been ignored intentionally since the main focus of this work is the Rashba terms and the staggered potential, which can be tuned by the external electric field. We assume that the temperature is set to zero and two semi-infinite silicene ribbons are employed as left and right leads. The two-terminal conductance of the system can be calculated by the nonequilibrium Green's function method and Landauer-Büttiker formula as is the retarded Green function with the Hamiltonian in the center region H cen . 24 The self-energy Σ r p due to the semi-infinite lead-p can be calculated numerically. 25 With the help of the nonequilibrium Green's function method, the local current flowing on site i with spin σ can be expressed as where G < iσ,jσ ′ = i c † jσ ′ c iσ is the matrix element of the lesser Greens function of the scattering region and J iσ,jσ ′ is the current from site i to j. After taking Fourier transformation, the local current J iσ,jσ ′ can be calculated as Eq. (4) has been widely used in the local-current studies of tight-binding models. [26][27][28] When the sample is at zero temperature and the applied voltage is small, by applying the Keldysh equation 29 the Eq. (4) can be written as where V L and V R are the voltages at the Lead-L and R, respectively. G n (E) = G r (E)Γ L (E)G a (E) is electron correlation function. The first part of Eq. (5) can only generate the equilibrium current and does not contribute to the transport, so it can be dropped out in present work. It is the second part that gives rise to the nonequilibrium current. In the following numerical calculations, we use the hopping energy t as the energy unit. The width of the zigzag ribbon is 59a, where a is the silicon-silicon distance. In Fig. 1 we show the energy bands obtained from diagonalizing the tight-binding Hamiltonian (1) with various parameters for a zigzag nanoribbon. The nanoribbon with only the nearest-neighbor hopping shows a semimetallic behavior, as shown in Fig. 1 (a). An energy gap can be opened due to the inversion symmetry breaking induced by the staggered sublattice potential and the magnitude of the gap is 2ε i (see Fig. 1 (b)). When the first and second Rashba spin-orbit couplings induced by the external electric field are taken into account properly, which turn silicene from normal insulating to quantum spin Hall regime, gapless edge states appear within the band gap (see Figs. 1 (c) and (d)). The gapless edge states with different spins connect the conduction band and valence band. 
As usual, these gapless edge states are originated from the nontrivial topological orders in the bulk. According to different values of the first Rashba spin-orbit coupling, the QSHE induced by the external electric field can be divided into two types, QSHE1 and QSHE2. In QSHE1, when the first Rashba spin-orbit coupling is strong, the edge states traverse the bulk gap within each valley, as shown in Fig. 1 (c). However, in QSHE2 when the first Rashba spin-orbit coupling is weak, the edge states inter-connect two valleys. Moreover, they bend and give rise to "subgaps" around K and K ′ , which makes the structure of propagating channels complicated in the bulk gap. To investigate the QSHE induced by the external electric field in more details, the configurations of the spindependent local-current-flow vector are plotted in Fig. 2. We focus only on the left-injected current. The Fermi energy is set to be E = 0.05t for QSHE1 ( Fig. 2 (a)) and E = 0.005t for QSHE2 ( Fig. 2 (b)). For these Fermi energies, there are only the lowest transmission channels, i.e., the gapless edge states. For QSHE1, the spin-up local currents locate mainly on the lower edge (see Fig. 2 (a1))and the spin-down local currents locate mainly on the upper edge (see Fig. 2 (a2)). Contrary to QSHE1, for QSHE2, the spin-up local currents locate mainly on the upper edge (see Fig. 2 (b1)) and the spin-down local currents locate mainly on the lower edge (see Fig. 2 (b2)). These results show that the gapless edge states are spin-filtered and the two kinds of Rashba spin-orbit couplings can drive an ordinary insulating state of the silicene to the topological insulator. Next, for QSHE1, the Fermi energy is tuned to E = 0.1t or E = −0.1t, reaching slightly into the bulk band. The configurations of spin-dependent local-current-flow vector in such regions are plotted in Fig. 3. We find that the edge states are not fully spin-filtered when they are inside the bulk band. However, in this case, the electrons flow along the lower edge (see Fig. 3 (a)), while the holes flow forward along the upper edge (see Fig. 3 (b)). In order to have a global view on the phase transitions, the phase diagram at ε i = 0.1t is plotted in Fig. 4. When these two kinds of spin-orbit couplings are tuned properly, two kinds of QSHE, QSHE1 and QSHE2 appear. For QSHE1, the silicene nanoribbon has a large bulk gap and there are only gapless edge states in the bulk gap because the first spin-orbit coupling can widen the bulk gap. 21 On the other hand, for QSHE2, the system has a narrow bulk gap and even the gapless edge states is located in the bulk band because the second spin-orbit coupling can narrow the bulk gap. Finally, we examine the non-magnetic disorder effect on the conductance plateau 2e 2 /h of the QSHE. Random on-site potential w i is added for each site i in the central region, where w i is uniformly distributed in the range [−w/2, w/2] with the disorder strength w. Figs. 5 (a) and (b) show the conductance G versus the disorder strength at various energy for QSHE1 and QSHE2, respectively. The results show that these quantum plateaus are robust against non-magnetic disorder because of the topological origin of the edge states. Especially, the quantum plateau of QSHE1 maintains its quantized value very well even when w reaches 2.0t for E = 0.05t, as shown in Fig. 5 (a). We can also find that the gapless edge states of QSHE1 are more insensitive to the non-magnetic disorder than those of QSHE2 because the bulk gap of QSHE1 is larger than that of QSHE2. 
As the disorder strength increases further, the conductance gradually reduces to zero and the system eventually enters the insulating regime. In summary, we predict that the QSHE can be induced by applying an electric field in silicene even if the intrinsic spin-orbit coupling is very weak. The energy bands, the configurations of the spin-dependent local-current-flow vector, and the conductance of the system are numerically studied using the tight-binding Hamiltonian. The first and second Rashba spin-orbit couplings, referring to the nearest and next-nearest neighbor hoppings respectively, can be tuned by the external electric field due to the buckled structure of silicene. The staggered sublattice potential induced by the external electric field opens a bulk gap, and gapless edge states appear within the gap when the two kinds of Rashba spin-orbit couplings are included. With the help of the spin-dependent local-current-vector configurations, we find that the gapless edge states are indeed spin-filtered. We also find that when the two kinds of Rashba spin-orbit couplings are tuned properly, there are two types of QSHE, QSHE1 with a wide bulk gap and QSHE2 with a narrow bulk gap. Moreover, the gapless edge states have also been found to be robust against non-magnetic disorder.
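As a purely illustrative companion to the transport formalism used above, the following sketch computes the two-terminal Landauer transmission for a toy spinless nearest-neighbour square-lattice ribbon with semi-infinite leads. It is not the authors' code, and the silicene-specific ingredients of Eq. (1) (the staggered sublattice potential, the two Rashba couplings and the spin degree of freedom) would have to be added to the slice Hamiltonians before any comparison with the results reported here.

import numpy as np

t = 1.0                # hopping (energy unit)
W_rib, L_rib = 5, 20   # ribbon width and length in sites
eta = 1e-6             # small imaginary part

def ribbon_blocks(W, t):
    """On-site block of one transverse slice and the hopping block between slices."""
    H00 = np.zeros((W, W))
    for i in range(W - 1):
        H00[i, i + 1] = H00[i + 1, i] = -t
    H01 = -t * np.eye(W)
    return H00, H01

def surface_gf(E, H00, H01, eta=1e-6, tol=1e-12, max_iter=100):
    """Surface Green's function of a semi-infinite lead by decimation (Lopez Sancho et al.)."""
    z = (E + 1j * eta) * np.eye(H00.shape[0])
    eps_s, eps = H00.copy(), H00.copy()
    alpha, beta = H01.copy(), H01.conj().T.copy()
    for _ in range(max_iter):
        g = np.linalg.inv(z - eps)
        eps_s = eps_s + alpha @ g @ beta
        eps = eps + alpha @ g @ beta + beta @ g @ alpha
        alpha, beta = alpha @ g @ alpha, beta @ g @ beta
        if np.max(np.abs(alpha)) < tol:
            break
    return np.linalg.inv(z - eps_s)

def transmission(E):
    H00, H01 = ribbon_blocks(W_rib, t)
    N = W_rib * L_rib
    Hc = np.kron(np.eye(L_rib), H00)                 # central region: identical slices
    for s in range(L_rib - 1):
        Hc[s*W_rib:(s+1)*W_rib, (s+1)*W_rib:(s+2)*W_rib] = H01
        Hc[(s+1)*W_rib:(s+2)*W_rib, s*W_rib:(s+1)*W_rib] = H01.conj().T
    gs = surface_gf(E, H00, H01, eta)
    SigL = np.zeros((N, N), complex)
    SigR = np.zeros((N, N), complex)
    SigL[:W_rib, :W_rib] = H01.conj().T @ gs @ H01       # left lead on the first slice
    SigR[-W_rib:, -W_rib:] = H01 @ gs @ H01.conj().T     # right lead on the last slice
    GamL = 1j * (SigL - SigL.conj().T)
    GamR = 1j * (SigR - SigR.conj().T)
    Gr = np.linalg.inv((E + 1j*eta) * np.eye(N) - Hc - SigL - SigR)
    return np.real(np.trace(GamL @ Gr @ GamR @ Gr.conj().T))

for E in np.linspace(-0.9, 0.9, 5):
    print(E, transmission(E))    # conductance per spin: G = (e^2/h) * T(E)

Making the model spinful and adding the Rashba and staggered-potential terms of Eq. (1) doubles the size of every block (two spin states per site) and renders the broadening functions, the retarded Green's function and the resulting local currents spin-resolved, which is what underlies the spin-filtered edge-current maps discussed above.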
Management of Obese Type 1 Diabetes Mellitus (Double Diabetes) Through Telemedicine During COVID-19 Pandemic Lockdown: A Case Report Metabolic syndrome in Type 1 diabetes mellitus (T1DM) has been shown to be an independent risk factor for macro-vascular and micro-vascular complications. Obesity also affects many people with T1DM across their lifetime with an increasing prevalence in recent decades. Individuals with T1DM who are overweight, have a family history of type 2 diabetes, and/or have clinical features of insulin resistance, are known as "double diabetes". It is challenging for a person with double diabetes to achieve reasonable glycemic control, avoid insulin-related weight gain, and prevent hypoglycaemia. This was especially true during the coronavirus disease 2019 (COVID-19) pandemic lockdown. The aim of this report is to show that lifestyle modification through telemedicine can immensely help in managing uncontrolled T1DM with associated morbid obesity in lockdown situations, with the help of the diabetes educator. In this case, the complicated history of double diabetes was taken through telephonic and online consultations with the help of a nutritionist and diabetes educator, and the treating clinician supervised the insulin doses and frequency. Patient Health Questionnaire (PHQ)-9 questionnaire was used to assess depression. Medical nutrition therapy (MNT) was given through online consultations, where the patient was reoriented to carbohydrate counting, insulin dose adjustment, along with modifications in the diet. Regular exercise was advised along with frequent self-monitoring of blood glucose (SMBG). Moreover, the diet order was changed to eat protein and fibre first, followed by carbohydrates, later. The three-tier system of the medical expert, clinical dietitian, and diabetes educator was applied. The subject was trained for carbohydrate counting and insulin dose adjustment by teaching her about the insulin-to-carb ratio and insulin sensitivity factor (ISF). She was asked to examine her insulin injection sites by visual and palpatory methods for lipohypertrophy. Once a week, the diabetes educator and nutritionist did telephonic follow-up and counselling, while online consultation was done by the treating clinician once a month. As a result, her weight, BMI, and waist circumference were reduced drastically, and there was an improvement in haemoglobin A1C (HbA1C), lipid parameters, and blood pressure after the intervention. Thus, implementing diabetes education via telemedicine in circumstances such as the COVID-19 pandemic can help achieve the best possible compliance for strict diet adherence, regular exercise and monitoring, reducing obesity, glycosylated HbA1c, insulin doses, and risk of depression in a person with double diabetes. Introduction A person with type 1 diabetes mellitus (T1DM) is traditionally described as lean and insulin-sensitive, where insulin deficiency rather than insulin resistance is the primary pathophysiological mechanism. The global increase in overweight and obesity, the so-called obesity epidemic [1], is associated with metabolic disturbances like insulin resistance, hyperinsulinemia, dyslipidemia, and subclinical inflammation, which results in the development of micro and macro-vascular diseases [2]. Obesity also affects a large number of people with T1DM across their lifetime, with an increasing prevalence in recent decades and with rates ranging from 2.8% to 37.1% [3], which is termed "double diabetes" [4]. 
Insulin resistance and tight glycemic control also increase weight, insulin demand, and the risk of hypoglycemia [5]. A sedentary lifestyle, a high-calorie diet rich in fats and simple sugars, and a low-fibre diet in T1DM also lead to poor metabolic control, weight gain, and affective disorders like depression that further aggravate the condition [6]. Metabolic syndrome in T1DM has been shown to be an independent risk factor for macro-vascular and micro-vascular complications [7]. Managing uncontrolled T1DM on a high insulin dose with associated morbid obesity is challenging and may also be accompanied by behavioural changes like eating disorders or associated depression, especially during situations such as the coronavirus disease 2019 (COVID-19) pandemic lockdown, which may further complicate the diabetes management. Studies have used Diabetes Eating Problem Survey (DEPS-R) for the diagnosis of eating disorders and the Patient Health Questionnaire (PHQ)-9 scale for the assessment of depression [8,9]. Lifestyle modification through telemedicine can play a vital role in such cases, which, if effectively implemented, can give rewarding results. Patient information A 33-year-old female homemaker, vegetarian, with a diagnosis of T1DM for 24 years, currently staying at her hometown, presented with uncontrolled diabetes, despite high doses of insulin, increasing weight, limitation of movements, tiredness, exhaustion, and emotional instability with negative thoughts. She was on subcutaneous insulin basal-bolus therapy and had a sedentary lifestyle due to the COVID-19 lockdown. She has polycystic ovarian syndrome (PCOS), had pre-gestational diabetes mellitus nine years ago, and delivered a normal baby through a lower (uterine) segment Caesarean section, without significant perinatal complications. The patient was also hypertensive for four years, which was well controlled on telmisartan 20 mg once a day. She had non-proliferative diabetic retinopathy, and there was no other significant past, family, or personal history. She was on basal-bolus insulin therapy in divided doses, 140 units/day. She was also on metformin for polycystic ovarian syndrome (PCOS) and voglibose for postprandial hyperglycaemia. She was leading a sedentary lifestyle with minimal activity since the lockdown in March 2020, that is for the past six months till the presentation. Clinical picture She consulted us via teleconsultation from a distant city in September 2020. For the past seven years, she had been managing her blood glucose levels by herself and had not consulted anyone else during this period. She was performing the SMBG testing method using the glucometer and was not using the continuous glucose monitoring (CGM) device. Her height, weight, BMI, waist circumference, and haemoglobin A1C (HbA1C) were 150 cm, 67.8 kg, 30.1kg/m 2 , 101 cms, and 8.7%, respectively. The blood pressure measurement was taken at home by herself using an electronic blood pressure apparatus, of a standard cuff size of 16 cm by 36 cm. Her mid-arm circumference was 35 cm. Also, there was a gradual increase in weight in the past three years. Other biochemical parameters were within physiological limits. Diagnostic assessment Her history was taken through telephonic and online consultations. The important parameters of history like easy fatiguability, nocturnal micturition, pre-gestational diabetes mellitus, and PCOS were recorded. 
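The PHQ-9 totals used in this assessment are interpreted against the standard published severity bands. As a small illustration (conventional cut-offs only, nothing specific to this patient), the mapping can be written as follows:

```python
def phq9_severity(total_score: int) -> str:
    """Map a PHQ-9 total (0-27) to the standard severity band."""
    if not 0 <= total_score <= 27:
        raise ValueError("PHQ-9 total must be between 0 and 27")
    if total_score <= 4:
        return "minimal"
    if total_score <= 9:
        return "mild"
    if total_score <= 14:
        return "moderate"
    if total_score <= 19:
        return "moderately severe"
    return "severe"

# phq9_severity(10) -> "moderate"; phq9_severity(5) -> "mild"
```

With these bands, the baseline score of 10 reported below falls in the moderate range and the follow-up score of 5 in the mild range.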
The clinician, nutritionist, and diabetes educator engaged both in the conference calls with the patient and her husband and also without the patient to formulate further treatment protocol, so as to improve the condition of the patient. The PHQ9 questionnaire was used to assess depression. The initial PHQ9 score was 10, which depicts that she was suffering from moderate depression. The DEPS-R scale showed that she did not have any eating disorder. The treating clinician supervised the insulin doses and frequency. Details like the presenting complaints, history of presenting illness, treatment history, 72 hours of dietary recall with food frequency, physical activity profile, and history of associated complications were reviewed online by the nutritionist and diabetes educator. Therapeutic interventions Medical nutrition therapy (MNT) was carried out through online consultations. Many T1DMs may underestimate their calorie consumption. However, the patient was literate and was using the kitchen weighing scale. She was given training in food exchange and the hand & plate method. Past energy intake was 1700-1800 kcals/day (carbohydrate 259 g, protein 46 g, fat 54 g), calculated by the 24-hour dietary-recall method by the nutritionist. The food frequency questionnaire revealed that fast/fried food and bakery food consumption were twice and three to four times a week, respectively, while nuts, fruits, and green leafy vegetables were consumed once a week. Her total calorie intake was reduced to 1200 kcals/day (carbohydrate 195 g, protein 72 g, fat 36 g), including a moderate carbohydrate, low fat, high fibre diet, and introducing free foods (containing less than 20 calories or 5 g carbohydrate per serving). The patient was aware of carbohydrate counting, insulin dose adjustment, insulin-to-carb ratio (ICR) (ICR=450/total daily dose (TDD)), and insulin sensitivity factor (ISF) 1700/ TDD. However, she was not compliant and wasn't following these methods. Hence, she was counselled about their importance. Regular exercise of 45-60 mins (walking, jogging) in two to three sessions was advised, along with a more frequent SMBG. Postprandial blood glucose was high, and hence, protein snacks replaced carbs. The food order was changed to eat protein and fibre first, followed by carbs later. The diabetes educator played an essential role in getting optimum diet, lifestyle changes, blood glucose monitoring compliance, etc., through multiple telephonic calls, WhatsApp (Meta Platforms, Inc.Menlo Park, California, United States), and frequent online meetings. The three-tier system of the medical expert, clinical dietitian, and diabetes educator was applied. The timeline for the intervention was around 16 weeks or 120 days. She was asked to examine her insulin injection sites by visual and palpatory methods for lipohypertrophy. Once a week, the diabetes educator and nutritionist did telephonic follow-up and counselling, while online consultation was done by the treating clinician once a month. Follow-up and outcome of the intervention Five months after the intervention, improvements in lipid parameters and blood pressure were seen. Except for insulin doses, other medications remained unchanged. Her PHQ9 score dropped down to 5, indicative of decreased severity of depression. The basal-bolus dose of insulin after the intervention was: Inj. human actrapid (regular) insulin 06IU-18IU-14IU (BBF, BL, and BD, respectively); Inj. human insulatard (NPH) insulin 06IU at 9 am and 15IU at 9 pm: TDD of insulin 59 units/day. 
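The carbohydrate-counting parameters described above (ICR = 450/TDD and ISF = 1700/TDD) can be combined into a simple calculator. The sketch below is illustrative only: the bolus formula (carbohydrate cover plus a correction term) follows the usual convention for these two parameters and is not a prescription taken from this report.

```python
def insulin_dose_parameters(tdd_units: float) -> dict:
    """Rules of thumb quoted in the report: ICR = 450/TDD, ISF = 1700/TDD."""
    return {
        "icr_g_per_unit": 450.0 / tdd_units,       # grams of carbohydrate covered by 1 U
        "isf_mg_dl_per_unit": 1700.0 / tdd_units,  # expected mg/dL drop per 1 U
    }

def meal_bolus(carbs_g, current_bg, target_bg, tdd_units):
    """Illustrative bolus = carbohydrate cover + correction (standard convention)."""
    p = insulin_dose_parameters(tdd_units)
    carb_units = carbs_g / p["icr_g_per_unit"]
    correction_units = max(0.0, (current_bg - target_bg) / p["isf_mg_dl_per_unit"])
    return round(carb_units + correction_units, 1)

# Baseline TDD of 140 U/day gives ICR ~ 3.2 g/U and ISF ~ 12 mg/dL per U;
# the post-intervention TDD of 59 U/day gives ICR ~ 7.6 g/U and ISF ~ 29 mg/dL per U.
```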
Her weight, BMI, and waist circumference were reduced by 10%, 10%, and 14.85%, respectively. The TDD of insulin was reduced by 56%, while her HbA1C, fasting, and post-meal blood glucose were reduced by 11.5%, 20%, and 29%, respectively, from baseline (Table 1). [Table 1 compares the baseline (September 2020) and post-intervention (January 2021) values together with the percentage change.]

Discussion
The prevalence of obesity is increasing globally, which not only increases the risk of type 2 diabetes mellitus but also affects people with T1DM, primarily due to changing dietary habits and poor exercise compliance. Management of such cases of double diabetes is challenging. On the evening of March 24, 2020, the Government of India ordered a nationwide lockdown to limit the movement of India's entire 1.38 billion (138 crore) population as a preventive measure against the COVID-19 pandemic in India [10]. People with uncontrolled diabetes are believed to have a higher risk of complications, severity, and death. Studies have shown the potential benefits of remote telemedicine in diabetes care, and its use is rapidly increasing due to the pandemic [11-13]. In a systematic review of 29 studies in paediatric diabetes care, it was concluded that telemedicine has the potential to facilitate patient monitoring and improve short-term glycemic control in some contexts [14]. While discussions about remote telemedicine are rapidly increasing due to the pandemic, a number of studies had previously investigated the potential benefits of telemedicine in diabetes care [15]. A recently published perspective article highlighted how paediatric patients with T1DM have historically led the way in the adoption of diabetes technology [16]. In their article, Danne and Limbert identify how young patients have been particularly receptive to new technologies such as insulin pumps and glucose sensors, suggesting the trend would continue for telemedicine [16].

This telemedicine strategy is distinctive because, usually, only the clinician and dietitian are involved in the treatment protocol. That can reduce the efficiency of treatment, as patients are often not compliant enough to follow all prescribed medications and techniques regularly, and regular follow-up by healthcare professionals may not be feasible. Hence, diabetes educators can become proactively involved, act as the connecting link, and collectively help to increase the effectiveness of treatment, achieving the desired goals at a faster pace. Weight management is a continuous process, and the patient has been counselled to keep consistent, regular follow-ups with the team, targeting an HbA1c between 6.5% and 7% and a BMI below 23 kg/m², the cut-off for Asians. The urine spot microalbumin-to-creatinine ratio must also be checked, as it is an independent risk factor for cardiovascular disease. Once there is a further decrease in weight, the insulin doses will be reduced accordingly, and the oral drug voglibose will be withdrawn at subsequent follow-ups. We shall also perform annual screening for target organ damage to assess any structural or functional impairment of major body organs. Thus, our case report has shown that telemedicine, through structured virtual/telephonic connections, can help patients improve their condition even when they cannot access facilities in person from faraway places, especially in a challenging case like double diabetes (obese T1DM).
Conclusions
Through a holistic approach, diabetes education can be implemented via telemedicine for lifestyle modification by a diabetes educator and nutritionist under a clinician's guidance. This can help achieve the best possible compliance with strict diet adherence, regular exercise, and monitoring. Moreover, our case report shows that people with T1DM can be managed very efficiently if they are well acquainted with the technology, and that they can reach achievable glycemic targets even when they live in remote areas. Telemedicine can help with weight control, reducing glycosylated HbA1c, lowering blood pressure, achieving better glycemic control on lower insulin doses, and improving lipid parameters. It also offers a better quality of life by reducing the risk of depression in a person with double diabetes (obese T1DM) during times like the COVID-19 pandemic, when in-person consultation is not possible.

Additional Information
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Dressing wear time after breast reconstruction: study protocol for a randomized controlled trial

Background: One of the major risk variables for surgical site infection is wound management. Understanding infection risk factors for breast operations is essential in order to develop infection-prevention strategies and improve surgical outcomes. The aim of this trial is to assess the influence of dressing wear time on surgical site infection rates and skin colonization. Patients' perceptions at self-assessment will also be analyzed.
Methods/Design: This is a two-arm randomized controlled trial. Two hundred breast cancer patients undergoing immediate or delayed breast reconstruction will be prospectively enrolled. Patients will be randomly allocated to group I (dressing removed on postoperative day one) or group II (dressing removed on postoperative day six). Surgical site infections will be defined by standard criteria from the Centers for Disease Control and Prevention (CDC). Skin colonization will be assessed by culture of samples collected at predefined time points. Patients will score dressing wear time with regard to safety, comfort and convenience.
Discussion: The evidence to support dressing standards for breast surgery wounds is empiric and scarce. The CDC recommends protecting a primarily closed incision with a sterile dressing for 24 to 48 hours postoperatively, but there is no recommendation on covering this kind of incision beyond 48 hours, or on the appropriate time to shower or bathe with an uncovered incision. The results of the ongoing trial may support standard recommendations regarding dressing wear time after breast reconstruction.
Trial registration: ClinicalTrials.gov identifier: NCT01148823.

Background
Surgical site infection (SSI) is a relevant problem in surgical practice, with surrounding issues that still need to be clarified [1-4]. Interventions to reduce the incidence of SSI are essential to reduce not only morbidity, but also costs to the individual and to society [2,5-7]. Identifying SSI risk factors for breast surgery is essential in order to develop infection-prevention strategies and improve surgical outcomes [7]. Risk factors for SSI are usually classified into preoperative (patient-related, for example, age, obesity, tobacco use, comorbidities, use of immunosuppressive medications), perioperative (procedure-related factors, such as type and duration of the operation, hypoxia, operating room traffic and operating room parameters) and postoperative categories [2,19]. One of the major risk factors in the postoperative period is wound management [2,19-21]. Despite the importance of surgical wound management in preventing infection, literature on incisional wound management is sparse [21,22]. Postoperative wound care is an ancient practice, with recorded evidence dating back 4,000 years [23]. Purposes of wound dressing include protection of the wound from trauma and contamination, absorption of wound exudates, and compression to minimize edema and obliterate dead space [23-27]. The search for an ideal postsurgical breast dressing has led to the development of several different materials and application techniques [28,29]. Despite the abundance of wound dressing products available nowadays, there is little empiric evidence to guide product choice for site-specific incisional wounds [26,30], and traditional low-technology gauze-based dressings are commonly used [25,27,29,31,32].
Due to lower costs, this kind of dressing is largely used in the public health system in Brazil. In the literature, the ideal dressing wear time is also controversial. Some authors recommend early exposure of the surgical wound, to allow easy wound inspection without inconvenience to the patient, to release the patient for his or her routine personal care, and to decrease costs [21,22,24]. Other authors recommend the use of dressings for a longer time, frequently until sutures are removed, without an increase in SSI rates [6,31,33,34]. The Centers for Disease Control and Prevention (CDC) provide recommendations concerning prevention of SSIs. No recommendation is offered for some practices, either because there is a lack of consensus regarding their efficacy or because the available scientific evidence is insufficient to support their adoption [19]. Available evidence to support dressing standards for incisional wounds, including breast surgery wounds, is empiric and scarce [20,23,24,26,27,30,35]. CDC's guidelines instruct that wounds that are closed primarily should be covered with a sterile dressing for 24 to 48 hours [19]. There is no recommendation on either the type of dressing or the ideal dressing wear time following breast surgery. These remain unsolved issues, and decisions regarding routine dressing are made on the basis of surgeons' personal experience [19]. This randomized controlled trial was designed to assess the influence of dressing wear time after breast reconstruction on SSI rates, skin colonization and patients' perceptions of safety, comfort and convenience.

Study aims
To assess whether dressing wear time after breast reconstruction influences SSI rates. Secondary aims are to assess the influence of dressing wear time on skin colonization and on patients' perceptions with regard to safety, comfort and convenience.

Ethical issues
The Universidade do Vale do Sapucaí Ethical Committee approved the study protocol (786/07 and 1623/11). Only participants who have agreed to provide written informed consent will be included in the study.

Study design and setting
This is a two-arm parallel-group randomized controlled trial, to be conducted in a university-affiliated hospital. Patients will be recruited from the Breast Unit of the Plastic Surgery Division of the Hospital das Clínicas Samuel Libânio - Universidade do Vale do Sapucaí. This trial was registered in ClinicalTrials.gov (NCT01148823).

Eligibility criteria
Inclusion criteria: Breast cancer patients over 18 years of age undergoing immediate or delayed breast reconstruction at Hospital das Clínicas Samuel Libânio will be considered eligible for participation.
Exclusion criteria: Patients with a body mass index (BMI) above 35 kg/m², and those with the usual contraindications for breast reconstruction procedures, such as diabetes or heavy smoking, will be excluded. Patients whose dressings get wet in the first 24 hours after the operation, thus requiring their change, will also be excluded.

Groups' assignment, randomization and allocation concealment
Two hundred patients will be prospectively enrolled after giving informed consent. Patients will be randomly assigned to group I (n = 100), which will have dressings removed on the first postoperative day, or to group II (n = 100), whose dressings will be removed on the sixth postoperative day.
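As an illustration of how a 1:1 computer-generated allocation sequence of this kind can be produced (the protocol itself uses Bioestat 5.0 and sealed opaque envelopes, as described below; the permuted-block scheme, block size and seed here are purely assumptions for the sketch):

```python
import random

def allocation_sequence(n_per_group=100, block_size=4, seed=2010):
    """Generate a 1:1 permuted-block allocation list for groups I and II.

    Illustrative only: the trial's sequence is generated with Bioestat 5.0
    and concealed in numbered, sealed opaque envelopes.
    """
    rng = random.Random(seed)
    assert (2 * n_per_group) % block_size == 0 and block_size % 2 == 0
    sequence = []
    for _ in range((2 * n_per_group) // block_size):
        block = ["I"] * (block_size // 2) + ["II"] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence  # one entry per numbered sealed envelope

# allocation_sequence()[:8] -> e.g. ['II', 'I', 'I', 'II', 'I', 'II', 'II', 'I']
```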
The rationale for comparing 24 hours versus 144 hours of dressing include CDC recommendation of protecting an incision with a sterile dressing for 24 to 48 hours [19] (group I) and the time of removing sutures (group II). The allocation will be determined by a computer-generated sequence (Bioestat 5.0, Instituto de Desenvolvimento Sustentável Mamirauá, Belém, PA, Brazil). A numbered and sealed opaque envelope will be opened on the first postoperative day to reveal the group to which the patient is allocated. Baseline procedures and interventions Patients will take a shower with liquid detergent-based chlorhexidine 4% prior to the operation [36], and an alcoholic solution of chlorhexidine 0.5% will be used for the antisepsis of the surgical site in the operating room [37]. Patients will undergo immediate or delayed breast reconstruction under general anesthesia, by the use of flaps and/or implants and, performed by the same surgical team. All patients will receive prophylactic antibiotics (cephazolin). At the end of the operation, the surgical site will be cleansed with sterile physiological saline, a sample for quantitative skin culture will be obtained and a conventional gauze and tape dressing, which use is the established practice in our hospital, will be applied. Sutured wounds will be completely covered with four layers of dry sterile cotton gauze and fixed in place by a microporous adhesive tape. The surgical team will not be aware of the group to which the patient will be allocated. Patients allocated to group I will be instructed to keep their wounds uncovered and to follow their usual personal hygiene routine, and patients in group II will be instructed not to wet the dressing. Quantitative skin cultures will be obtained in the operating room immediately before applying the dressing, as well as immediately after the removal of dressing. In group I, an additional sample will be collected at the sixth postoperative day. A standard 5cm by 10 cm area (determined by a sterile pattern) over the surgical wound will be swabbed with a sterile cotton swab premoistened with sterile saline. This swab will be placed in a sterile container with 1.0 ml of saline and immediately conducted to the laboratory. Patients will be discharged from the hospital on the first postoperative day after allocation, and they will return weekly for follow-up, for four weeks. Microbiological methods Standard microbiological methods and criteria will be used to identify microorganisms [38]. Aliquots of 0.2 ml of the sample will be plated on hypertonic manitol (HM) agar, selective for staphylococci; on blood agar, to identify hemolytic colonies; on Sabouraud agar with chloramphenicol (0.05mg/ml), selective for fungi and yeasts; and on eosin-methylene blue (EMB) agar, selective for enterobacteria. The plates will be incubated aerobically for 48 hours at 37°C. The same laboratory technician will process all samples and, after 48 hours, the plates will be examined by a microbiologist. Bacterial count results will be reported as colony forming units (CFU) per plate. Whenever CFU count in a plate exceeds 300, it will be scored as over 300. Staphylococci will be identified as coagulase-negative Staphylococcus sp. or S. aureus on the basis of Gram stain, the presence of hemolysis and on coagulase testing. The same microbiologist will assess all the plates. Both the laboratory technician and the microbiologist will be blinded. 
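As a small bookkeeping illustration of the counting rule just described (plate counts above 300 colonies are scored as "over 300"), the per-medium results of one sample could be recorded as follows; the media labels and example numbers are placeholders, not study data.

```python
def report_plate_counts(counts_per_medium: dict, cap: int = 300):
    """Apply the 'counts above 300 are scored as over 300' rule per plate.

    counts_per_medium maps a culture medium name to the raw colony count for
    the 0.2 ml aliquot plated from one swab sample.
    """
    return {medium: (n if n <= cap else f"over {cap}")
            for medium, n in counts_per_medium.items()}

# report_plate_counts({"blood agar": 412, "EMB": 17})
# -> {'blood agar': 'over 300', 'EMB': 17}
```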
Assessment of postoperative infection
The Centers for Disease Control and Prevention (CDC) considers an SSI to be an infection that occurs within 30 days after the operative procedure if no implant is left in place, or within one year if an implant is in place and the infection appears to be related to the operative procedure [39]. Thus, patients will be systematically followed up once a week for 30 days regarding postoperative infection, by a single surgeon. Patients who receive an implant will have an additional assessment by the same surgeon one year after the operation. The CDC definitions and classifications of SSI will be considered (Table 1) [39]. As with all CDC definitions of nosocomial infections, a surgeon's diagnosis of infection will be considered an acceptable criterion for an SSI [39].

Patients' assessments of dressing wear time
On their return in the second week after the operation, patients will be asked to rate their dressing wear time with regard to safety, comfort and convenience, using a 5-point rating scale (Table 2) [35].

Outcome measures
The primary outcome is the incidence of SSI, which will be defined on the basis of the CDC definitions [39]. Skin colonization rates and patients' preferences regarding dressing wear time are secondary outcomes.

End points
Participation is considered complete after the 30th-postoperative-day assessment if no implant has been used, or after the 12th postoperative month if an implant was used. Another exit point is if the dressing of a group II patient gets wet before the sixth postoperative day, thus requiring withdrawal from the study.

Statistical analysis
The rejection level for the null hypothesis will be fixed at 5% (α ≤ 0.05). The Mann-Whitney test will be used to compare groups I and II with regard to age, BMI and duration of operation. The Fisher test will be used to compare groups I and II regarding SSI occurrence. Logistic regressions will be applied to detect significant associations between variables such as age, BMI and duration of operation and the incidence of SSI. For the inter-group assessment of skin colonization, the Mann-Whitney test will be used to compare groups I and II pre-dressing and at the sixth postoperative day. These tests will be applied independently for each medium used. Because skin colonization could play a role in SSI rates, an intra-group assessment of skin colonization will also be performed. For this assessment, in group I, Friedman's two-way analysis of variance will be used to assess the differences in number of CFU among the three time points (pre-dressing, first and sixth postoperative day). Whenever the difference is significant, the Friedman analysis will be complemented by the multiple comparisons test to determine which time point differed significantly from the others. In group II, the Wilcoxon test will be used to compare the number of CFU pre-dressing and at the sixth postoperative day. The Kolmogorov-Smirnov test will be applied to compare the two groups regarding patients' assessments of safety, comfort and convenience. The Fisher test will be used to compare groups I and II regarding patients' choice (one day or six days). (An illustrative sketch of two of these comparisons is given below.)

Discussion
SSI is a major source of adverse events in patients undergoing breast cancer surgical procedures, generating psychological issues, increased duration of hospitalization and costs, as well as delay in commencing postoperative adjuvant therapies [2,5,11]. Consequently, surgeons should make a determined effort to prevent SSI.
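Returning to the statistical analysis plan above, the two simplest of the planned comparisons could be run along the following lines. This is a sketch that assumes scipy is available; the inputs are placeholders, not trial data.

```python
from scipy import stats

def compare_groups(ssi_group1, ssi_group2, var_group1, var_group2, alpha=0.05):
    """Illustrate two of the planned between-group tests.

    ssi_group1/2: (with SSI, without SSI) counts, e.g. (7, 93).
    var_group1/2: per-patient values of a continuous baseline variable
                  (age, BMI, or duration of operation).
    """
    # Fisher exact test on the 2x2 table of SSI occurrence by group.
    _, p_fisher = stats.fisher_exact([list(ssi_group1), list(ssi_group2)])
    # Mann-Whitney U test for the continuous variable.
    _, p_mw = stats.mannwhitneyu(var_group1, var_group2, alternative="two-sided")
    return {
        "fisher_p": p_fisher,
        "mannwhitney_p": p_mw,
        "significant_fisher": p_fisher <= alpha,
        "significant_mannwhitney": p_mw <= alpha,
    }
```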
As protocols and guidelines should be founded on evidence-based medicine principles, well-designed studies are essential to support or to adjust clinical practice [20]. Wound management is a key area in preventing SSI [21]. However, few studies have focused on the management of the wound after closure as a method of reducing infection [34]. This randomized trial focuses on the influence of dressing wear time on SSI rates. Since the greater the degree of surgical wound contamination, the higher the risk for infection [2,19], skin colonization assessment is a secondary aim of this study. The management of surgical wounds should involve the principle of minimizing harm, and patient preference and tolerance must also be considered [30]. Thus, assessment of patients' perceptions of dressing wear time is another secondary aim of this trial.

Table 1. CDC criteria for classification of surgical site infection [39]

Superficial incisional SSI. Meets at least one of the following:
• Purulent drainage from the superficial incision;
• Organisms isolated from an aseptically obtained culture of fluid or tissue from the superficial incision;
• At least one of the following signs or symptoms of infection: pain or tenderness, localized swelling, redness or heat, and the superficial incision is deliberately opened by the surgeon, unless the incision is culture-negative;
• Diagnosis of superficial incisional SSI by the surgeon or attending physician.

Deep incisional SSI. Involves deep soft tissues (fascial and muscle layers) and meets at least one of the following:
• Purulent drainage from the deep incision but not from the organ/space component of the surgical site;
• A deep incision that spontaneously dehisces or is deliberately opened by a surgeon when the patient has at least one of the following signs or symptoms: fever (>38°C), localized pain or tenderness, unless the incision is culture-negative;
• An abscess or other evidence of infection involving the deep incision found on direct examination, during reoperation, or by histopathologic or radiologic examination;
• Diagnosis of deep incisional SSI by the surgeon or attending physician.

Organ/space SSI. Involves any part of the anatomy (organs or spaces) and meets at least one of the following:
• Purulent drainage from a drain that is placed through a stab wound into the organ/space;
• Organisms isolated from an aseptically obtained culture of fluid or tissue in the organ/space;
• An abscess or other evidence of infection involving the organ/space found on direct examination, during reoperation, or by histopathologic or radiologic examination;
• Diagnosis of an organ/space SSI by the surgeon or attending physician.

Some authors pointed out that omitting a dressing after the first 24 postoperative hours could be convenient for patients, allowing them to carry out their personal hygiene more easily [24]. However, other authors observed that dressings are comforting to patients by masking their scars, without increasing SSI rates [23,26,35]. Considering the mutilating characteristics of breast cancer treatment, we hypothesize that patients, particularly those who undergo immediate breast reconstruction, might prefer to keep their dressings in place for a longer time, thus delaying the moment of seeing their reconstructed breasts. CDC's guidelines for managing surgical wounds that are closed primarily instruct patients to keep their wounds dry and covered for 24 to 48 hours [19], but the ideal dressing wear time following breast surgery remains an unsolved issue. The results of this trial may support standard recommendations regarding dressing wear time after breast reconstruction.
Trial status Recruitment is on-going (134 patients had been operated on by the end of December 2012).
Dissolution Behaviours of Acetaminophen and Ibuprofen Tablets Influenced by L-HPC 21, L-HPC 22, and Sodium Starch Glycolate as Disintegrants

The dissolution of tablets is one of the determinants of drug absorption, and the disintegrant plays an important role in determining tablet dissolution. In this experiment, the dissolution behaviours of Acetaminophen and Ibuprofen tablets were studied using various disintegrants: Low-substituted Hydroxypropyl Cellulose (L-HPC) 21, L-HPC 22, and Sodium Starch Glycolate (SSG) as a comparator. Each disintegrant was used at three concentrations (6%, 7% and 8%) in every tablet formula. Tablets were made by the wet granulation method and compressed using a 13 mm flat single-punch E. Korsch machine. Each batch was evaluated for uniformity of weight and size (diameter and thickness), hardness, friability, disintegration time and dissolution. The physical quality of the tablets was good: the standards for weight and thickness uniformity, hardness and friability were all met. The dissolution profiles showed that, among the Acetaminophen tablets, only the formula with 6% L-HPC 21 failed to meet the FI V requirement (Q = 80% at 30 minutes), whereas among the Ibuprofen tablets only the formulas with 8% L-HPC 21, 7% SSG and 8% SSG met the FI V requirement (Q = 80% at 60 minutes). The study concluded that L-HPC shows a stronger disintegrant character with hydrophilic active ingredients.
Key words: Acetaminophen Tablet, Ibuprofen Tablet, SSG, L-HPC 21 and 22, Dissolution Profile

Introduction
Tablet dosage forms are widely used in the global market because of their many advantages and benefits. The composition of all compressed tablets should, in fact, be designed to guarantee that they will readily undergo both disintegration and dissolution in the upper gastrointestinal (GI) tract. Dissolution tests are used nowadays in a wide variety of applications: to help identify which formulations will produce the best results in the clinic, to release products to the market, to verify batch-to-batch reproducibility, and to help identify whether changes made to formulations or their manufacturing procedure after marketing approval are likely to affect performance in the clinic. Further, dissolution tests can sometimes be implemented to help determine whether a generic version of the medicine can be approved or rejected [3]. Dissolution tests can be used to predict the in vivo performance of the dosage form when release of the drug is the limiting factor in the absorption process. The drug release profile is influenced by a complex set of factors, one of which is excipient selection in the tablet formulation [3]. Standard in vitro dissolution testing models include two processes: the release of the drug substance from the solid dosage form and drug dissolution. Drug release is determined by formulation factors such as disintegration/dissolution of formulation excipients or drug diffusion through the formulation. Drug dissolution is affected by the physicochemical properties of the substance (e.g., solubility, diffusivity), its solid-state properties (e.g., particle surface area, polymorphism), and formulation properties (e.g., wetting, solubilization). In vitro dissolution testing should thus provide predictions of both the drug release and the dissolution processes in vivo. To reach this goal, the choice of dissolution apparatus and test medium should be carefully considered [6].

Materials
Acetaminophen

Methods
1. Tablet Formulation
The tablets were made in the formula variations shown in Tables 1 and 2. The preparations used wet granulation, with a 10-mesh sieve for the wet granulate and a 16-mesh sieve for the dried granulate. Then, after manual mixing with the disintegrants and lubricants for just 3 minutes, all granules were evaluated to confirm that they were ready for compression.
2. Granule Evaluation
Granule quality was evaluated for loss on drying, bulk density, tapped density, compressibility, flowability and angle of repose. Granules meeting the criteria, such as good compressibility and smooth flow, could then be compressed into tablets.
3. Tablet Evaluation
After the granules passed the evaluation, they were compressed on the 13 mm flat single-punch E. Korsch tablet machine. Tablet quality was then checked for hardness, friability, thickness, weight uniformity, disintegration time and dissolution.
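As a small illustration of the acceptance criterion applied in the evaluation below (FI V: Q = 80% of label claim dissolved at 30 minutes for the acetaminophen tablets and at 60 minutes for the ibuprofen tablets), a simplified check could look as follows. The numbers in the example are placeholders, and the full staged pharmacopoeial acceptance rules (S1/S2/S3) are not reproduced here.

```python
def meets_dissolution_requirement(percent_dissolved, q=80.0):
    """Simplified check of a dissolution result against a Q-type criterion.

    percent_dissolved: per-tablet % of label claim dissolved at the specified
    time point. Only the mean is compared with Q in this sketch; the staged
    pharmacopoeial acceptance criteria are deliberately omitted.
    """
    mean_release = sum(percent_dissolved) / len(percent_dissolved)
    return mean_release >= q

# meets_dissolution_requirement([84.2, 88.9, 86.1, 90.3, 83.5, 87.7])  # -> True
```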
Disintegration and dissolution are affected by formulation factors, including the excipients added as disintegrants, such as low-substituted hydroxypropyl cellulose (L-HPC) and sodium starch glycolate (SSG). L-HPC is widely used in oral solid dosage forms, primarily as a disintegrant and as a binder for tablets and granules in wet or dry granulation. SSG is commonly used in oral pharmaceuticals as a disintegrant in capsule and tablet formulations; disintegration usually occurs by rapid uptake of water followed by rapid and extensive swelling [7]. In this experiment, the effects of L-HPC 21, L-HPC 22 and SSG as disintegrants were compared between a hydrophilic substance (acetaminophen tablets) and a hydrophobic substance (ibuprofen tablets) in terms of their dissolution profiles. The results of this experiment can serve as a recommendation for formulation pharmacists when designing solid dosage forms.

Result and Discussion
The granule evaluation showed that all granule batches met the requirement standards for tablet compression. All granules were therefore compressed into tablets on the 13 mm flat single-punch E. Korsch tablet machine, and the tablets were then evaluated for quality as described below. The physical performance of the tablets was good: the standards for weight and thickness uniformity, hardness and friability were met. Closer inspection of the disintegration time and the percentage dissolved, however, showed that some formulas fell outside the standards. The disintegration-time and dissolution results showed that only the Acetaminophen tablets of formula F A1 failed to meet the requirements for disintegration time (15 minutes) and dissolution (80% in 30 minutes). The Ibuprofen tablets behaved differently: none of them passed the disintegration-time test, and in the dissolution test only formulas P A3, P C2 and P C3 passed (80% in 60 minutes). From these results it was concluded that L-HPC 21 has a stronger disintegrant character than L-HPC 22, especially with a hydrophobic API.

Conclusion
The results showed that only the Acetaminophen tablets with 6% L-HPC 21 did not meet the FI V requirement (Q = 80% at 30 minutes), whereas among the Ibuprofen tablets only those with 8% L-HPC 21, 7% SSG and 8% SSG met the FI V requirement (Q = 80% at 60 minutes). Based on these data, it was concluded that L-HPC has a stronger disintegrant character with hydrophilic active ingredients.
Between the two L-HPC grades, L-HPC 21 has a stronger disintegrant character than L-HPC 22.
Prenatal Anhydramnios in a Girl: An Exceptional Case of Obstruction by Bilateral Ectopic Ureter

Abstract
Introduction: The diagnosis of oligoamnios or anhydramnios is usually associated with male fetuses, the most frequent urological causes being posterior urethral valves and, in second place, ureterocele. We present an exceptional case of anhydramnios in a female fetus secondary to obstruction by a bilateral ectopic ureter, with severe worsening of renal function.
Material and methods: We present the case of a preterm newborn (33+6 weeks) with a prenatal diagnosis of bilateral ureterohydronephrosis with a duplicated left system. Labor was induced at 33 weeks due to anhydramnios. Progressive clinical, analytical and radiological worsening led to the need for cystoscopy at nine days of life. The cystoscopic study showed a right ectopic ureter that imprinted on the vagina but whose meatus was not found, a ureter of the left upper kidney leading to the urethra, and the imprint of the left lower kidney ureter in the bladder. With electrocoagulation, a bilateral transurethral neo-orifice (TUNO) was created as a drainage method for both dilated systems.
Results: In the postoperative period diuresis improved and creatinine progressively decreased (reaching the normal range within a month of surgery), and the patient was discharged one week after surgery. Postoperative ultrasound showed a significant decrease in the bilateral ureterohydronephrosis. A year later, the patient remains asymptomatic without UTI.
Conclusion: The intravesicalization of obstructed ectopic ureters by a minimally invasive approach allows the resolution of acute urinary obstruction in a case of bilateral complex uropathy. The endoscopic approach is shown to be an effective option in the diagnostic and therapeutic management of these patients.

Introduction
The presence of posterior urethral valves as a cause of oligoamnios is well known. Differential diagnosis should be established with other pathologies such as prune-belly syndrome, high-grade vesicoureteral reflux, bilateral obstructive megaureter, urethral atresia, obstruction of the anterior urethra, ureteroceles and, less frequently, the presence of an obstructed bilateral ectopic system [1]. The ectopic ureter is more frequent in girls, is associated with a duplicated renoureteral system, and is rarely associated with bilateral obstruction of the urinary tract. We present an exceptional case of anhydramnios in a female fetus secondary to obstruction by a bilateral ectopic ureter and a duplicated left system, with severe worsening of renal function.

Material and Methods
We present the case of a preterm newborn (33+6 weeks) with adequate weight and a third-trimester diagnosis of bilateral ureterohydronephrosis with a duplicated left system. Labor was induced at week 33 due to anhydramnios. At birth, a bladder catheter was placed and a renal ultrasound was performed. It showed pyelocaliceal dilation (grade III on the right side, grade IV on the left side) and bilateral megaureter. A duplicated left system was suspected, with an ectopic upper-pole ureter and the lower-pole ureter draining into a bladder diverticulum. However, during her admission to the neonatology unit, she presented progressive worsening of creatinine up to 3.3 mg/dL despite bladder catheterization, as well as an episode of urinary tract infection. An MRI was also performed to better define the anatomy. During cystoscopy, the right ectopic meatus could be identified in the upper area of the vagina, together with the ureter of the left upper kidney leading to the urethra and the imprint of the left lower kidney ureter in the bladder. By electrocoagulation, a bilateral internal shunt (neo-orifice) was created as a drainage method for both dilated systems (Figure 1).

Results
After surgery, the patient showed clear clinical improvement, with spontaneous diuresis and a progressive decrease in creatinine (normal range one month after surgery) (Figure 2). Likewise, a notable decrease in the bilateral pyelocaliceal dilation was observed (Figure 3). After one year of follow-up, the patient is asymptomatic, has not presented urinary infections, and remains on antibiotic prophylaxis because of vesicoureteral reflux to the lower kidney.

Discussion
The ectopic ureter occurs in approximately 1 in 2000 newborns. It occurs more frequently in women and, in 85% of cases, it is associated with pyeloureteral duplicity, the ectopic ureter usually arising from the upper kidney [2]. In women, the most common sites of the ectopic meatus include the bladder neck, the proximal or distal urethra, the vestibule and the vagina. There have also been cases of ectopic ureteral orifices in the uterus and cervix [3]. Oligoamnios secondary to urethral pathology or bilateral ureteroceles has been described in children and can lead to severe worsening of renal function and, in some cases, the need for renal transplantation [4]. However, there are no reports in the literature of bilateral ectopic ureteral obstruction in a female patient with progressive worsening of renal function and an urgent need for bilateral drainage within a few days of life. The treatment of the obstructed ectopic ureter is controversial. Typically, an external shunt (nephrostomy or ureterostomy) is performed to relieve hydronephrosis and resolve the ureteral obstruction acutely. In our practice, however, we use minimally invasive techniques to solve this problem in such cases, such as intravesicalization of the ureter or the creation of an endoscopic transurethral neo-orifice, resolving the obstruction of the ectopic ureter with results similar to the traditional treatment [5]. The endoscopic treatment used in this case allowed improvement of the immediate obstruction and of renal function, avoiding surgery with greater morbidity in a newborn. It is a minimally invasive procedure that allows adequate ureteral drainage in cases of severe obstruction with a high risk of uncontrollable urinary tract infection or uretero-pyonephrosis.
The objective of this technique is to create temporary internal drainage to control urinary infections, preserve the initial renal function, and allow the infant to mature until definitive surgery is proposed months later. It is reproducible and safe, and it does not invalidate other surgical options in case of failure, nor future definitive treatments, whether by a classical open approach, laparoscopy, laparoscopically assisted surgery, or robotic surgery. In conclusion, endoscopic treatment by intravesicalization of the obstructed ectopic ureters provides an internal shunt that resolves the acute obstruction through a minimally invasive approach in a case of bilateral complex uropathy, reducing the morbidity and mortality associated with open surgery or more invasive procedures.
Follistatin is a crucial chemoattractant for mouse decidualized endometrial stromal cell migration by JNK signalling Abstract Follistatin (FST) and activin A as gonadal proteins exhibit opposite effects on follicle‐stimulating hormone (FSH) release from pituitary gland, and activin A‐FST system is involved in regulation of decidualization in reproductive biology. However, the roles of FST and activin A in migration of decidualized endometrial stromal cells are not well characterized. In this study, transwell chambers and microfluidic devices were used to assess the effects of FST and activin A on migration of decidualized mouse endometrial stromal cells (d‐MESCs). We found that compared with activin A, FST exerted more significant effects on adhesion, wound healing and migration of d‐MESCs. Similar results were also seen in the primary cultured decidual stromal cells (DSCs) from uterus of pregnant mouse. Simultaneously, the results revealed that FST increased calcium influx and upregulated the expression levels of the migration‐related proteins MMP9 and Ezrin in d‐MESCs. In addition, FST increased the level of phosphorylation of JNK in d‐MESCs, and JNK inhibitor AS601245 significantly attenuated FST action on inducing migration of d‐MESCs. These data suggest that FST, not activin A in activin A‐FST system, is a crucial chemoattractant for migration of d‐MESCs by JNK signalling to facilitate the successful uterine decidualization and tissue remodelling during pregnancy. | INTRODUC TI ON Pregnancy is a complex physiological process, including the interaction between foetus and mother to maintain foetal development. Decidualization is a tissue remodelling process involving a variety of cell types in maternal uterus during pregnancy. The transformation of endometrial stromal cells (ESCs) into specialized secretory decidual cells is a key step in embryo implantation and survival. During decidualization, spindle-shaped fibroblast-like ESCs undergo dramatic morphological changes and differentiate into cobblestoneshaped decidual cells. 1 The blastocyst contacts the uterine epithelial cells, accompanied by the proliferation and differentiation of stromal cells to prepare for embryo implantation. 2 The decidualization process of mouse uterus occurs on Day 4.5 postcoitum (Day 0.5-vaginal plug), after that, a receptive endometrium is created for embryo implantation. The unique decidual environment can provide nutrition and growth factors for implanted embryos, regulate the invasion of trophoblast cells to decidua, control the activity of immune cells at the maternal foetal interface and establish immune tolerance. 3,4 The migration of decidualized ESCs is a critical event during decidualization, accompanying with the invasion of extravillous trophoblast cells into the decidual tissue, 5 but the systematic research on the mechanism of migration of the decidualized ESCs is still limited. Activin is the member of transforming growth factor-beta superfamily, and a double-chain glycoprotein connected by disulfide β subunit composition. It is isolated and identified from follicular fluid of ovary, and is so named because it can promote the secretion of follicle-stimulating hormone (FSH) by pituitary. According to the type of β subunit, activins are divided into activin A (βAβA), activin AB (βAβB) and activin B (βBβB). 6 Among them, activin A (Act A) is widely distributed in various tissues and well-studied. 
It serves as a sexual hormone regulatory protein that participates in a variety of physiological and pathological processes, including regulation of inflammation, fibrosis, tumorigenesis, neurotransmission, angiogenesis and embryogenesis. [7-9] Activin A signals through two pathways: the canonical SMAD-dependent pathway and non-canonical pathways. In the canonical signalling pathway, activin A binds to the activin type II receptor (ActRII) to recruit and activate the activin type I receptor (ActRI). The serine/threonine kinase residues of ActRI are phosphorylated, which then induces phosphorylation of SMAD2 and SMAD3; the SMAD2/3/4 complex is then formed and promotes gene expression. [10-12] In reproductive biology, activin A plays an important role in the regulation of hormone secretion, the menstrual cycle and decidualization. 13 Previous studies have reported that the expression of the activin A receptor in ESCs increases in early pregnancy, 14,15 and activin A can promote decidualization of human ESCs, 16 while activin A-deficient mice develop to term but die within 24 h of birth. 17 The physiological role of endometrium-derived activin A is unclear, but in a paracrine manner activin A can regulate cell differentiation, promote cell proliferation and participate in tissue remodelling and inflammatory responses. These physiological processes are consistent with decidualization events. [18-20]

As an activin-binding protein, follistatin (FST) is a single-chain glycoprotein that was also isolated and identified from ovarian follicular fluid. Contrary to activin A, it can inhibit the secretion of FSH by the pituitary. FST has a high affinity for activin A and can prevent activin A from binding to its receptor, thereby neutralizing its biological effect. 21 FST also plays an important role in regulating activin bioavailability in the circulation and within tissues. 22 In reproductive biology, studies have reported that both endometrial epithelial cells and decidual stromal cells can secrete FST. 14 Conditional knockout of FST impairs the receptivity and decidualization of the mouse endometrium, resulting in serious reproductive defects. 23 These studies suggest that FST and activin A play an important role in decidualization of the uterus during pregnancy. Activin A and FST are essential hormone regulatory proteins during decidualization, but their effects on decidual cell migration remain to be clarified. Therefore, this study analysed the effects of activin A and FST on the migration of decidualized mouse endometrial stromal cells (d-MESCs).

| Primary ESCs/DSCs isolation and culture
The endometrial stromal cells (ESCs) were isolated as previously described with slight modification. 24 Briefly, the mouse endometrium was separated under a dissecting microscope and cut into small pieces, then digested in medium containing Liberase (0.125 mg/ml), DNase (2 mg/ml) and 0.25% trypsin for 1 h on ice followed by 1 h at room temperature, and 10 min at 37°C in a shaking water bath. The supernatant was then discarded and the remaining tissue was digested in DMEM/F12 containing Liberase (0.125 mg/ml) and DNase (2 mg/ml) at 37°C for 40 min. Finally, a 70 μm filter was used to remove the undigested tissue.

| RT-PCR
Total RNA was extracted from d-MESCs using TRIzol reagent (Takara), and 1 μg of total RNA was used for cDNA synthesis using the PrimeScript Reverse Transcription Kit (Takara). PCR was carried out using a PCR Kit (Takara) under the following conditions: 95°C for 90 s, followed by 35 cycles of (94°C for 30 s, 56°C for 30 s, 72°C for 1 min), with a final extension at 72°C for 10 min.
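For reference, the cycling program just described can be written down as a small configuration, which also makes it easy to estimate the block time. This is only illustrative bookkeeping of the stated conditions; ramping between temperatures and any hold step are ignored.

```python
# Temperatures in °C, durations in seconds, taken from the conditions above.
PCR_PROGRAM = {
    "initial_denaturation": (95, 90),
    "cycles": 35,
    "per_cycle": [("denaturation", 94, 30), ("annealing", 56, 30), ("extension", 72, 60)],
    "final_extension": (72, 600),
}

def total_run_time_minutes(program=PCR_PROGRAM):
    """Approximate block time for the cycling program, ignoring ramp rates."""
    per_cycle = sum(step[2] for step in program["per_cycle"])
    total_s = (program["initial_denaturation"][1]
               + program["cycles"] * per_cycle
               + program["final_extension"][1])
    return round(total_s / 60.0, 1)

# total_run_time_minutes() -> 81.5 minutes
```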
PCR products were separated by 2% agarose gel electrophoresis and stained with Super GelRed (US EVERBRIGHT). The cDNA bands were analysed with ImageJ software (16.0.1), and gene expression levels were normalized against GAPDH. Primer sequences are shown in Table S1.

| Real-time cell analysis
Cell adhesion was monitored in the xCELLigence Real-Time Cell Analysis (RTCA; ACEA Biosciences Inc.) system using E-Plate 16 (ACEA Biosciences Inc.). Briefly, a total of 50 μl of 2% FBS-DMEM/F12 was added into the plates, and baseline measurements were taken. d-MESCs (1 × 10⁴ cells) were then seeded into the wells in 150 μl of 2% FBS-DMEM/F12 with or without activin A and/or FST. Cells were monitored every 15 min for 6 h.

| Wound healing assay
d-MESCs/DSCs were seeded into 12-well plates at a density of 1 × 10⁵ cells per well and incubated at 37°C in 5% CO₂ to form a subconfluent monolayer. Then, a scratch-wound was produced in the confluent monolayers using a sterile 200 μl pipette tip, and the degree of wound healing was recorded after treatment with activin A and/or FST for 24 h.

| Microfluidic cell migration assay
Microfluidic devices were fabricated using the standard photolithography and soft-lithography technique as described previously. 28

| Statistical analysis
All data were shown as means ± SD. Statistical evaluation was conducted using a Student's t-test or one-way ANOVA followed by Tukey's multiple comparisons test. A significant difference was defined as p < 0.05.

| Effects of activin A and FST on viability of d-MESCs
The cultured stromal cells from mouse endometrium were assessed by immunocytochemistry using the antibody against Vimentin (a marker of stromal cells). 26 As shown in Figure 1A, the cultured cells were positive for Vimentin, confirming their stromal identity; based on cell viability assays, 5 ng/ml activin A and 10 ng/ml FST were selected as the working concentrations for subsequent experiments.

| Effects of activin A and FST on adhesion and wound healing of d-MESCs
To investigate the effects of activin A and FST on the biological behaviour of d-MESCs, real-time cell analysis (RTCA) was performed to examine cell adhesion. 29 As shown in Figure 2A, activin A slightly decreased the adhesion of d-MESCs, while FST inhibited the adhesion of d-MESCs significantly, although the FST action was attenuated by activin A. Scratch assays revealed that both activin A and FST promoted wound healing of d-MESCs, with FST exerting the more significant effect, although such promoting effects of FST were also attenuated by activin A (Figure 2B).

| Effects of activin A and FST on migration of d-MESCs
In vivo, reduced cell adhesion and increased motility are both related to cell migration. Therefore, we tested the directional migration of d-MESCs towards activin A and/or FST using a transwell assay. Among the four groups, the total number of migratory cells was largest in the FST group, followed by the FST combined with activin A group, while activin A alone did not increase the number of migratory cells (Figure 3A). To better evaluate cell migratory ability in terms of parameters such as migration distance and directionality, a microfluidic device was used to track d-MESCs migration in defined activin A and/or FST gradients; FST induced the strongest response in the number, distance and chemotactic index of migrated cells (Figure 3B-D).

| Effects of activin A and FST on migration-related protein expression and calcium flux in d-MESCs
MMP2, MMP9 and ezrin are known to be involved in cell migration, and vimentin is an important marker of the mesenchymal/motile phenotype. 30,31 Therefore, to determine the mechanism of action of activin A and FST on d-MESCs migration, MMP2, MMP9, Vimentin and Ezrin expression was analysed by Western blotting. The results showed that the expression of MMP9 and Ezrin was upregulated significantly after FST stimulation, and only slightly increased after activin A treatment (Figure 4A). Neither activin A nor FST had a significant effect on MMP2 and Vimentin expression.
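The group comparisons reported in these results follow the statistical analysis described above: two-group comparisons by Student's t-test, and multi-group comparisons by one-way ANOVA with Tukey's multiple comparisons test at p < 0.05. The short sketch below illustrates such an analysis; the software stack (SciPy and statsmodels) is an assumption of this sketch rather than the tools named in the study, and the group names and densitometry values are purely illustrative, not data from the experiments.

```python
# Illustrative only: one-way ANOVA followed by Tukey's multiple comparisons,
# as described in the Statistical analysis section. All numbers are made up.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical normalized densitometry values (e.g., MMP9/GAPDH) per group
groups = {
    "control":       [1.00, 0.95, 1.05],
    "activin_A":     [1.10, 1.20, 1.15],
    "FST":           [1.80, 1.95, 1.70],
    "activin_A_FST": [1.40, 1.35, 1.50],
}

# One-way ANOVA across the four groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Tukey's HSD post-hoc test for the pairwise group comparisons
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))

# A simple two-group comparison (e.g., FST vs. control) uses Student's t-test
t_stat, p_t = stats.ttest_ind(groups["FST"], groups["control"])
print(f"t-test FST vs control: t = {t_stat:.2f}, p = {p_t:.4f}")
```

In practice, the normalized band intensities or cell counts from the three independent experiments would take the place of the made-up values above.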
| Effects of activin A and FST on expression of signalling proteins in d-MESCs
The above results showed that FST was a more effective chemoattractant regulating d-MESCs migration; however, the signalling pathway involved was unclear. In this section, the protein expression of the canonical and non-canonical activin A signalling pathways was examined by Western blotting (Figure 5A). The results revealed that activin A and FST had no significant effect on the levels of p-SMAD3/SMAD3, indicating that activin A and FST might affect d-MESCs activities through non-canonical pathways. Furthermore, it was found that activin A and FST activated JNK, but not ERK1/2 or p38, resulting in significantly elevated levels of JNK phosphorylation. To determine whether FST-induced JNK activation was responsible for d-MESCs migration, we used the JNK inhibitor AS601245 to repeat the migration assay. d-MESCs were pretreated with 1% DMSO or 1 μM AS601245 diluted with 1% DMSO for 1 h. We found that AS601245 treatment decreased the level of p-JNK protein and reduced the migratory ability of d-MESCs towards FST (Figure 5B,C). Taken together, the results indicated that FST might induce d-MESCs migration through JNK signalling.

| Effects of activin A and FST on viability in DSCs
To verify the above results, decidual stromal cells (DSCs) were isolated from the uterus of pregnant mice. As shown in Figure 6A, immunocytochemical staining for anti-Vimentin revealed that the isolated cells belonged to stromal cells and could be used for subsequent experiments. Moreover, cell viability was measured using the CCK-8 assay after 24 h of incubation, and we found that 5 ng/ml activin A and 10 ng/ml FST promoted the viability of primary cultured DSCs of pregnant mice (Figure 6B).

| Effects of activin A and FST on DSCs migration
The wound healing assay revealed that FST promoted wound healing of DSCs, while activin A exerted an antagonistic effect on FST (Figure 7A). Next, the migration of DSCs was determined using microfluidic devices. We found that the migratory number and distance of DSCs induced by FST increased significantly compared with the control group. Although there was no significant difference in the number of migrated DSCs induced by activin A alone (p > 0.05), there was a considerable difference in the migration distance of DSCs compared with the control group (p < 0.05) (Figure 7B). These data further confirmed that FST was a more effective chemoattractant for inducing the migration of d-MESCs.

FIGURE 2 Effects of activin A and FST on adhesion and wound healing of d-MESCs. (A) Adhesion was assessed by real-time cell analysis (RTCA) in d-MESCs subjected to activin A 5 ng/ml and/or FST 10 ng/ml for 6 h. The graph shows the Cell Index from three separate experiments. **p < 0.01, compared with d-MESCs control group. #p < 0.05, compared with FST group. (B) A scratch-wound was generated in monolayer d-MESCs, and then cells were treated with activin A 5 ng/ml and/or FST 10 ng/ml for 24 h. The graph shows the degree of wound healing from three separate experiments. Scale bar = 250 μm. *p < 0.05, **p < 0.01, compared with d-MESCs control group.

| DISCUSSION
Decidualization begins around the spiral arteries and spreads to the whole endometrium once pregnancy occurs. 14,33 Vimentin, as a marker of stromal cells, is highly expressed in stromal cells whether or not decidualization happens, 34 and prolactin is widely used as a decidualization marker. 35,36 To investigate whether FST or activin A act on d-MESCs, cell viability was determined using a CCK-8 kit.
We found that activin A and FST both promoted the viability of these cells, indicating that d-MESCs are target cells responsive to FST and activin A. In this study, 5 ng/ml activin A and 10 ng/ml FST were selected as the working concentrations. Real-time cell analysis (RTCA) is a technology based on the principle of the microelectronic biosensor, which allows label-free, real-time analysis of cells during an experiment. 37 In the present study, we found that FST inhibited the adhesion of d-MESCs significantly, while activin A did not alter d-MESCs adhesion but neutralized the FST action. Moreover, the scratch wound experiments also showed that FST promoted wound healing of d-MESCs, while activin A had an antagonistic effect on FST. Cell migration involves the degradation of the extracellular matrix, decreased cell adhesion and enhanced cell chemotaxis. 5,38 In this study, the migration of d-MESCs was first tested by the transwell migration assay. However, transwell assays only measure the number of migrated cells and lack the ability to characterize quantitative cell motility and chemotaxis parameters at the single-cell level, such as cell migration speed, distance and directionality. In this regard, the microfluidic device offered quantitative insights into the migratory responses of d-MESCs under well-controlled chemoattractant gradient conditions. 39 We found that although activin A did not increase the number of migrated cells, FST markedly promoted the migration of d-MESCs.

FIGURE 3 Effects of activin A and FST on migration of d-MESCs. (A) The migration of d-MESCs induced by activin A 5 ng/ml and/or FST 10 ng/ml was analysed by transwell migration assay. Cells that passed through the porous membrane were stained with Giemsa. Scale bar = 100 μm. The graph shows the average number of migrated cells in three separate experiments. **p < 0.01, compared with d-MESCs control group. #p < 0.05, compared with FST group. (B) Images of mouse d-MESCs migration towards activin A 5 ng/ml and/or FST 10 ng/ml were taken in the microfluidic device at 0 h and 24 h, respectively. Scale bar = 100 μm. (C) The tracked cell trajectories in the activin A and/or FST gradient were analysed with the Chemotaxis and Migration Tool software. The images represent the directions of migrated cells treated with activin A 5 ng/ml and/or FST 10 ng/ml. (D) The graph shows the average number, distance and chemotactic index (C.I.) of migrated cells in same-size fields of the microfluidic device in three separate experiments. *p < 0.05, **p < 0.01 compared with d-MESCs control group. #p < 0.05, ##p < 0.01 compared with FST group.

MMP2 and MMP9 are the main markers of cell migration and invasion, 43,44 while MMP9 plays a more effective role in regulating cell invasion than MMP2, and MMP9 is the only one whose gene deletion leads to a decline in fertility, suggesting that MMP9 plays an important role in the reproductive system. 45 Worthy of note, ezrin is one of the members of the ERM family and was first isolated from chicken intestinal brush borders. It is the connector between cortical actin filaments and the cell membrane and is involved in physiological processes such as microvilli formation, changes in cell membrane structure and cell adhesion. In cells over-expressing ezrin, migration ability is enhanced. 46 Our data revealed that FST, but not activin A, promoted the expression of MMP9 and Ezrin in d-MESCs, suggesting that FST might induce d-MESCs migration by up-regulating Ezrin and MMP9. In addition, cell migration is also related to the flux of many intracellular ions.
Ca²⁺, an important second messenger in cells, regulates a variety of cellular activities, including cell migration, angiogenesis and the inflammatory response. 47 An increase in intracellular calcium flux leads to the activation of a variety of signalling pathways and, furthermore, to the disintegration of intercellular adhesion and rearrangement of the cytoskeleton. 48 In this study, we found that FST significantly enhanced the calcium signal of d-MESCs, while activin A neutralized the FST action on calcium flux. Previous studies have also shown that activin A can promote the migration of L929 cells and breast cancer cells through the calcium pathway. 28,29 Our data support the conjecture that FST might induce migration of d-MESCs by increasing calcium influx. Activin A binds to ActRII and activates the shared canonical SMAD-dependent signalling pathway. In addition, MAPK, PI3K/AKT, WNT and Notch are activated by activin A and can transduce signalling independently of the SMAD proteins; this cascade constitutes the non-canonical pathways. 49 As an activin-binding protein, FST shows high affinity for activin and prevents activin from binding to its signalling receptor. 21 However, the FST receptor and its specific signalling pathway are still unclear. In the present study, we found no difference in the levels of p-SMAD3/SMAD3, p-ERK1/2/ERK1/2 and p-p38/p38 in d-MESCs, but an obvious increase in the levels of p-JNK/JNK. Moreover, the JNK inhibitor AS601245 significantly attenuated the action of FST in inducing migration of d-MESCs, suggesting that FST in the activin A-FST system, not activin A, is a crucial chemoattractant for inducing migration of d-MESCs by JNK signalling. A previous study indicated that a conditional knockout of FST (FST-cKO) results in poor decidualization. 23 Our data suggest that FST-cKO may result in disordered mDSCs migration and failure of uterine remodelling, causing defective decidualization. That activin A and FST are essential regulators of decidualization is undisputed; however, the decidualization process is extremely complicated and is also influenced by many other cells and molecules. Our findings provide a basis for preliminary experiments and a reference for later research.

FIGURE 6 Effects of activin A and FST on viability of primary cultured decidual stromal cells (DSCs). (A) Immunocytochemical staining was performed to detect Vimentin expression in isolated mouse DSCs. Scale bar = 100 μm. The arrows indicate the positive cells. (B) The CCK-8 assay was performed to examine the viability of primary cultured DSCs treated with activin A or FST for 24 h. *p < 0.05, compared with control group.

In summary, this study indicates that the important chemoattractant inducing the migration of d-MESCs through the JNK signalling pathway is FST rather than activin A. Maintaining the balance of the FST-activin A system is very important for regulating remodelling of the uterus during pregnancy. Our findings suggest that FST may be used as a treatment target and a potential indicator for predicting d-MESCs migration and uterine decidualization, and that administration of exogenous FST may improve decidua remodelling during pregnancy by inducing d-MESCs migration and subsequently increase the success rate of assisted reproductive technology.

FIGURE 7 Effects of activin A and FST on migration of primary cultured DSCs from pregnant mice. (A) A scratch-wound was created in monolayer DSCs, and then cells were treated with 5 ng/ml activin A and/or 10 ng/ml FST for 24 h. Scale bar = 250 μm. The graph shows the degree of wound healing from three separate experiments. *p < 0.05, compared with control group.
(B) Images of mouse DSCs migration towards 5 ng/ml activin A and/or 10 ng/ml FST were taken in the microfluidic device at 0 h and 12 h, respectively. Scale bar = 100 μm. The graph shows the average number and distance of migrated cells in same-size fields of the microfluidic device in three separate experiments. *p < 0.05, **p < 0.01, compared with control group. #p < 0.05, compared with FST group.

| Limitations
There are some limitations in our study. Firstly, these data reveal that FST is an effective chemoattractant inducing mDSCs migration, but the specific receptor for FST is still unknown. Secondly, mouse decidualization in vitro was studied, but its pattern and biological characteristics still differ from those in humans. Human decidualization is a spontaneous process, whereas decidualization in this study was induced in vitro by estradiol and progesterone. More studies should be carried out to explore the mechanism of the migration of decidualized ESCs in humans.

CONFLICT OF INTEREST
The authors declare no conflict of interest.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
2022-12-20T16:03:00.162Z
2022-12-18T00:00:00.000
{ "year": 2022, "sha1": "dcbf6db5da2d320bacc81eebdff268610125697d", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Wiley", "pdf_hash": "ea551dce0eac9b0712e504856e376dfa90609e7d", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
189335377
pes2o/s2orc
v3-fos-license
Micro Super Logistics System

Taking the development background of the ultra micro (micro supermarket) as a starting point, this paper analyses the construction environment of the ultra micro and the positioning of the service functions and layout of the ultra micro pilot. Using SLP and the mutual-relationship analysis of internal operation units, the location, layout and internal environment of the ultra micro pilot are analysed and designed. A distribution center is established to solve the distribution and route optimization problems of the micro super in its expansion stage; the center of gravity method is combined with practice for distribution center location, the distribution center management process is optimized, and the information system is designed. Through this series of methods, the integrated design of the micro super logistics system is finally completed.

Construction Scheme Design
The super micro, or mini supermarket, is a new business platform. As a platform, it provides micro super brand promotion and product distribution services for suppliers; community shopping and free delivery services for consumers; and business expansion and marketing support for the napa stores. With the development of society and the quickening pace of people's lives, the micro supermarket, a new type of commercial service platform, is growing stronger and stronger, and the micro super mode is more readily accepted by community convenience stores. In recent years, China's government departments have issued a series of policies and measures to encourage the rapid development of the logistics and express delivery industries, and the growth of the service sector within the economy is quite impressive.

(2) Train of thought and way of construction. At the beginning of the project, the existing micro super model is relied upon and existing community convenience stores serve as the pilot; adhering to the business philosophy of "service, preferential and convenient", this lays the foundation for the "super micro integration" project. In the medium term, the existing model and its problems are analysed and solutions are found, the model is copied and expanded according to the different characteristics of various locations, and social resources are integrated to find more franchisees. In the later stage, the main task is to improve the online service functions. There are many ways to build a super micro: small communities, high-end communities and large communities can rely on property offices, guard rooms, auxiliary rooms, and community squares or community shops, respectively, to build the micro super.

The ultra micro pilot has three major functions: daily shopping, distribution services and value-added services. Daily shopping is the most basic function of the ultra micro, requiring a wide range of goods with no lack of novelty. The distribution service mainly includes the distribution of the ultra micro's own commodities and express delivery; the former refers to door-to-door delivery once a certain shopping quota is met, and the latter refers to installing automatic pick-up cabinets within the ultra micro to provide last-mile express delivery services. Value-added services refer to vigorously integrating social resources once the micro super is established, building the community ultra micro into an integrated service platform combining shopping, leisure, entertainment, catering and information consulting.
Topological Design about Goods Allocation of Ultra Micro Pilot

①The internal topological design of goods allocation in the ultra micro pilot based on "SLP". The method considers both the logistics relationships and the non-logistics relationships between operation units. According to the degree of closeness between the operating units given in the relationship table, the distances between the operating units are decided and the relative positions of the units are arranged; a related-activity position map is drawn that combines the actual sizes and activity positions of the various operating units, forming layout plans of the related operating units. Through amendment and adjustment, a number of feasible solutions are obtained, and finally the best scheme is chosen (see the illustrative sketch below). Based on "SLP", the storage layout process of the ultra micro is carried out as shown in the corresponding figure.

Starting from the physiological and psychological needs of customers, a people-oriented supermarket layout will greatly enhance consumers' purchase intention, improve the operating efficiency of the supermarket and improve service quality, so as to improve the competitiveness and profitability of the supermarket. The supermarket layout design should provide a comfortable shopping environment, a reasonable distribution of shopping space and thoughtful service; relevant regulations should also be formulated, such as air quality and shelf space constraints; trade associations should be established to achieve self-discipline; and customers' awareness of their rights should be cultivated to strengthen public supervision.

Design of Pilot Line in Ultra Micro Pilot
The design and internal circulation of the micro super are closely related to the plane shape, size, layout and design of the supermarket, the functional positioning of the regional brand structure, and the architectural style; therefore, the supermarket layout must be designed according to the actual situation of the project. The aisles of the supermarket are divided into main channels and auxiliary channels. The main channel is the main route that guides customer movement, and the auxiliary channel is a branch along which customers move within the shop. The design of the passageways in the supermarket should ensure that the passageways are wide enough, straight and flat, reduce corners, and meet the requirements of lighting and barrier-free access.

Extension of Micro Hyper Mode Replication
①The wholly-owned replication mode. Based on the developer's experience in creating the earlier super micro pilot, wholly-owned micro super shops are founded by the developer at new locations. ②The peer-collaboration expansion mode. Suitable convenience stores in operation are selected, and these convenience stores are rectified through cooperation in terms of operation mode, storage layout and decoration.

Micro Dispatching Problem
At the beginning of the establishment of the ultra micro, there are many delivery problems: the variety of commodities is large and the requirements are high, the distribution results are unsatisfactory, the unified distribution rate is low, the operation difficulty is high, traditional logistics resources are difficult to adapt, the express delivery time span is long, and the express automatic pick-up cabinets accumulate parcels seriously, which slows down the distribution turnover rate. The problem of micro delivery can be solved by establishing a unified distribution center. The basis for adopting a unified delivery model is the establishment of several specialized small distribution centers.
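The SLP-based layout step described at the beginning of this section reduces, in practice, to combining the logistics and non-logistics closeness ratings of every pair of operation units and placing the most closely related pairs nearest to each other. The sketch below illustrates that combination; the unit names, the A/E/I/O/U ratings and the weighting factor are hypothetical assumptions, not values taken from the pilot.

```python
# Illustrative only: combining logistics and non-logistics relationship ratings
# between operation units, as in the SLP procedure described above.
# Units, ratings and the weighting factor are hypothetical assumptions.

# Closeness ratings on the usual A/E/I/O/U scale, converted to numbers
RATING = {"A": 4, "E": 3, "I": 2, "O": 1, "U": 0}

# Pairwise ratings: (logistics rating, non-logistics rating)
pairs = {
    ("receiving", "storage"):  ("A", "I"),
    ("storage",   "shelves"):  ("A", "E"),
    ("shelves",   "checkout"): ("E", "A"),
    ("receiving", "checkout"): ("U", "O"),
    ("storage",   "checkout"): ("O", "I"),
}

weight_logistics = 2.0  # assumed weight of logistics vs. non-logistics relations
weight_other = 1.0

combined = {
    pair: weight_logistics * RATING[log] + weight_other * RATING[non]
    for pair, (log, non) in pairs.items()
}

# Pairs with the highest combined closeness should be placed adjacent to each other
for pair, score in sorted(combined.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{pair[0]:>10} - {pair[1]:<10} combined closeness = {score}")
```

Pairs at the top of the resulting ranking would be drawn adjacent to each other on the related-activity position map before the feasible layouts are adjusted and compared.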
The construction of the distribution center is essential to the promotion of the whole project, and its fundamental function is to reduce the delivery cost through highly centralized purchasing and distribution.

Distribution Center Construction
The selection of micro super sites should be based on theoretical knowledge, the business philosophy and characteristics, development plans and land use planning, combined with the traffic environment and local policies.

Construction of the Route Optimization of Traffic Distribution of the Logistics Distribution Center
The distribution process mainly includes purchasing goods from the factories, consolidating them in the distribution center according to the different needs of the various micro super stores, and sorting the goods through the order picking process. The distribution process is shown in the corresponding figure.

Route optimization is based on the saving mileage method. The basic idea of the algorithm is as follows: assume that P is the distribution center and A and B are distribution points, where the distance from P to A is a, the distance from P to B is b, and the distance between A and B is c. If deliveries are made from P to A and to B separately, the total mileage of the vehicle is 2a + 2b; if A and B are served on one circuit from P, the total mileage is a + b + c. The mileage difference between the two cases is (2a + 2b) - (a + b + c) = a + b - c, so if a + b - c > 0, the second distribution method saves total mileage. When a number of users are served continuously, connecting the distribution points in order of decreasing savings yields the best distribution route, and the more users served on a route, the greater the mileage saved. The adjustment process is shown in Fig 4 (Mileage Adjustment Process).

Optimization process of the saving mileage method. Step 1, the known conditions: the set of demand points = {1, 2, ..., N}, the quantity demanded at each point, and the shortest distance between points. Step 2, calculation of the savings: calculate the savings for all pairs of points and sort the results in descending order. Step 3, loop merging: beginning with the highest value in the sorted savings sequence and continuing until the savings queue is empty, repeat the following steps: take the savings queue from large to small and analyse whether customers i and j can be merged (is the loading limit satisfied, are i and j not already within the same path, and has neither point been connected more than twice?); if so, connect i to j; if not, remove the current mileage saving from the queue. (A short illustrative sketch of this procedure is given after the conclusion.)

Development of the Online APP
Developing an online APP can facilitate the expansion of basic functions and expand market share. The online APP functions mainly include a payment function, a purchase function, express business inquiries, pick-up and dispatch services, and Huimin value-added services.

Construction of the Micro Distribution Center Information System
Two main systems are constructed: the RFID warehouse management information system and the electronic label picking system. ①RFID warehouse management information system. The warehouse management information system based on RFID mainly consists of three parts, namely the RFID hardware, the control network and the data center. ②Electronic label picking system. The basic picking methods of the electronic label picking system are the picking type and the seeding type. In the picking type, operators pick goods in units of "pieces" or "boxes" according to the numbers displayed on the electronic labels, in a timely, correct and easy manner. The seeding type is only suitable for a small selection of products.
For this scheme, the picking type of the electronic label picking system is chosen.

Conclusion
In the integrated design of the micro super logistics system, the focus of construction is on the problems of location, layout, distribution and the information system. Through the staged construction of the micro super, this paper solves the problem of partitioning the micro super construction and delimits the functions and scope of work. In order to reduce the cost of distribution, small distribution centers with unified storage and distribution are proposed to ensure zero inventory and efficient restocking in the micro super operation process. The research in this paper plays a positive role in promoting the integration of the micro super logistics system and provides theoretical support for it.
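As an illustration of the saving mileage method used above for route optimization, a minimal sketch follows. The depot distances, pairwise distances, demands and vehicle loading limit are hypothetical assumptions; the sketch only demonstrates the savings formula a + b - c and the merging rules (loading limit satisfied, not already on the same path, merging only at route ends) described in the route optimization section.

```python
# Illustrative only: saving mileage (Clarke-Wright) route construction for a
# single depot P and a set of demand points, as outlined in the route
# optimization section. All distances, demands and the capacity are assumptions.
from itertools import combinations

depot_dist = {1: 4.0, 2: 5.0, 3: 6.0, 4: 3.0}        # d(P, i)
pair_dist = {(1, 2): 3.0, (1, 3): 7.0, (1, 4): 5.0,
             (2, 3): 4.0, (2, 4): 6.0, (3, 4): 8.0}   # d(i, j)
demand = {1: 2, 2: 3, 3: 2, 4: 1}
capacity = 6                                          # vehicle loading limit

def dist(i, j):
    return pair_dist[(min(i, j), max(i, j))]

# Step 2: savings s(i, j) = d(P, i) + d(P, j) - d(i, j), sorted in decreasing order
savings = sorted(
    ((depot_dist[i] + depot_dist[j] - dist(i, j), i, j)
     for i, j in combinations(demand, 2)),
    reverse=True,
)

# Step 3: start with one route per point, then merge at route ends while
# the loading limit holds and the two points are not already on the same route
routes = {i: [i] for i in demand}          # point -> its current route (shared list)
for s, i, j in savings:
    ri, rj = routes[i], routes[j]
    if s <= 0 or ri is rj:
        continue
    # merging is only allowed at route ends (each point connected at most twice)
    if not ((ri[0] == i or ri[-1] == i) and (rj[0] == j or rj[-1] == j)):
        continue
    if sum(demand[p] for p in ri) + sum(demand[p] for p in rj) > capacity:
        continue
    # orient the two routes so that i and j become adjacent, then join them
    if ri[-1] != i:
        ri.reverse()
    if rj[0] != j:
        rj.reverse()
    merged = ri + rj
    for p in merged:
        routes[p] = merged

unique_routes = {id(r): r for r in routes.values()}.values()
for r in unique_routes:
    print("P ->", " -> ".join(map(str, r)), "-> P")
```

With the assumed numbers, the sketch yields two circuits, P -> 2 -> 3 -> P and P -> 1 -> 4 -> P; in practice the demands and the distance matrix would come from the stores and road network around the chosen distribution center.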
2019-06-13T13:18:07.411Z
2018-08-03T00:00:00.000
{ "year": 2018, "sha1": "003511acd22e71a9a93292d4a31e4c7125ce59a0", "oa_license": null, "oa_url": "https://doi.org/10.1088/1757-899x/392/6/062142", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "365357c936b4270c679fcc981db4b244bb0b4a99", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Computer Science", "Physics" ] }
246059852
pes2o/s2orc
v3-fos-license
Racial Passing off the Record: A Journey in Reconnection and Navigating Shifting Identities

Anyone of African descent or with African ancestry who engages in a genealogy project soon learns that the U.S. Census is a helpful yet frustrating tool. In 2016, equipped with my history degree and an online ancestry search engine, I searched for my great-grandfather Leroy in census records after I saw a picture of him as a young man at work in Philadelphia. This image would have been unremarkable had it not been for the fact that my African American ancestor was so light skinned that he seemed to blend in with his co-workers at Kramer's Fruit and Vegetables. I thought there had to be a story behind this. Classified as "Mu", for mulatto, in most of his records, Leroy became "Black" on the census in 1930. My first thought was to question whether this categorization changed for other folks like him. My research led me to my master's thesis "From 'Mulatto' to 'Negro': How Fears of 'Passing' Changed the 1930 United States Census". Through this research, I also became closer to my father's family. This piece will take you through this journey of discovery and my frustrations along the way.

Introduction
Race is one of the determinants of how far back people can follow their ancestors through the United States census. Many Black Americans struggle to obtain information on their ancestors before 1865 due to enslavement (Hyland 2020). However, the census has added and removed racial categories since 1790, which makes this research harder still (Bennett 2000). Racial categories have been a topic of interest in works such as Shades of Citizenship by Melissa Nobles, What is your Race? by Kenneth Prewitt, and "Racial Reorganization and the United States Census 1850-1930: Mulattoes, Half-Breeds, Mixed Parentage, Hindoos, and the Mexican Race" by Jennifer L. Hochschild and Brenda M. Powell (Hochschild and Powell 2008). Regarding racial passing, each of these sources connects the U.S. Census Bureau's removal of the "mulatto" category to the lack of information gleaned from that data point, segregation, and the growing "anxieties" of white people losing power. My thesis argued that it was full-blown fear of racial passing that public officials, eugenicists, and white society held. Further, these works do not comment on the effects of this erasure of mixed-race identities on the descendants of "near whites". This piece explores the difficulties of my genealogical research and how I learned to piece together my ancestor's story using the historical context of his time.

In the Beginning
The first time I saw the photograph of my great-grandfather Leroy, it was on my paternal aunt's social media. This was the first time I had heard about him or seen a photo of him. Although I knew that many of my paternal family members were light-skinned Black folks, I did not realize that any of them could blend in with white people the way that Leroy seemed to in this image. He stood outside of Kramer's Fruit and Vegetables with his white co-workers in 1926 Philadelphia. I searched for Leroy in census records online and, as I skimmed the results, I noticed that his race had changed from "Mu" to "Negro" in 1930. A year or two after I saw that photo, I visited my paternal grandmother and we got to talking about our family tree, and she showed me some more photos of her father when he was a young man.
She explained that her father didn't claim to be white, he just did not correct white folks if they made that assumption in order to get and keep jobs. Leroy was passing for his own survival and that of his family. We continued discussing her father's family through correspondence, and I connected with my great aunt for her insight, as well. My grandmother later shared emails she received from another family member about our ancestors' surnames being in a pamphlet, (Surnames, by Counties and Cities, of Mixed Negroid Virginia Families Striving to Pass as 'Indian' or White by Walter A. Plecker ca. 1943 n.d.). This intrigued me, and I began searching for the document. Sure enough, I found our ancestors' surnames and decided to do some digging on Walter A. Plecker. He was tied to much of my historical research on race in the United States in the early 20th century. Plecker was an advocate for the Racial Integrity Act of 1924, which prohibited interracial marriage, created strict lines of racial identification, and made it a crime to falsify one's racial category on government documents (Nobles 2000, pp. 31-42). This information tied in with my research on why the United States Census Bureau changed the racial classification of mixed-race people with African ancestry from "mulatto" to "negro" in the 1930 census (Womack 2017). My professor Dr. Laura Prieto pointed me in the direction of A Chosen Exile: A History of Racial Passing in American Life by Allyson Hobbs which helped me learn about the lives of people who passed for white, what they lost, and what their families lost due to their decisions (Hobbs 2014). Hobbs emphasizes that passers often felt lonely when they removed themselves from their communities and chose to keep their identities secret. Their families were left with gaps in their daily lives and their descendants, with holes in their family trees. I placed my own family's story among these narratives and gained perspective on the experiences of Black folks throughout the 20th century. I was aware that racial passing existed long before due to stories such as that of Ellen and William Craft (Craft and Craft 1999). Ellen passed as a white slaveholder with her husband William Craft as her body servant so that they could escape enslavement. The more research I did, I began to add information on the history of Black to white racial passing. My research included people in New England, such as Lemuel Haynes, a clergyman who fought in the Revolutionary War (Saillant 2003), Patrick F. Healy, a formerly enslaved man that became a priest and president of Georgetown University (O'Toole 2003), and Anita Hemmings, the first unofficial Black graduate of Vassar College (Sim 1999). According to her descendent Jillian Sim, Hemmings was possibly directly related to the Hemings of Monticello through Peter Hemings, Sally Hemings's brother. Diving Deeper Originally, I began this research hoping to find out why my great-grandfather's racial classification changed from "mulatto" to "negro", but searching for this reasoning, I found academic articles that attributed this transition to growing anxieties around racial identity and race mixing. The more I investigated the historical context of the early twentieth century, however, it seemed that the census changed its racial categories due more to a fear of losing power that many white people attributed to white superiority (Womack 2017). With the eugenics movement on the rise, other influential figures shared Plecker's white supremacist views. 
Madison Grant (1921) and Lothrop Stoddard wrote books describing the inferiority of non-whites and how race-mixing would create "mongrels", and warning of the increasing populations of people of color (Stoddard 1921). I could see how these men and their works had influenced public opinion and political discourse as laws against interracial marriage spread across the country. As I read page after page of white supremacist rhetoric, I wondered why the mixed-race category was created in the first place. It seemed odd that the United States had this classification despite society's belief in white superiority and aversion to race-mixing. So, I began searching further back in history for the origins of the "mulatto" census category and found it in 1850 (Nobles 2000, p. 35). Josiah Nott believed that since Black people were inferior, those born from racial mixing would likely be infertile like a mule. He convinced a US Congressman to suggest that "mulatto" be added to the race category on the census so that the results could be studied. Nott developed his theory amid arguments about whether Black people and white people developed from the same origin and the implications that would have for slavery (Nott and Gliddon 1854). Despite this attention to racial categorization, African Americans could be classified differently depending on their location within the United States. These inconsistencies were already confusing and became even more difficult for census takers to apply when the census added "quadroon", one-fourth Black ancestry, and "octoroon", one-eighth Black ancestry, in 1890 (Prewitt 2013, pp. 56-59; The New York Times 1910). Between 1910 and 1930, some states adopted "one-drop" statutes which claimed that a person with at least one drop of "Black" blood would be considered Black (Nobles 2000; Racial Integrity Laws (1924-1930)). I saw that the discrepancies could be an issue for researchers such as myself. Although scientific racism and the eugenics movement gave me an understanding of the views of people within academia and government agencies, I believed that the erasure of this category would have also required the support of the public. I reasoned that one way to find out what American society thought of racial passing was to see how often the topic was covered in the media (specifically in newspapers, films, and books). I used historical newspaper databases to look through articles in big newspapers from 1910 to 1929 and found many headlines implying that racial passing was a phenomenon fueled by what would later be known as the "Great Migration". There were also many news stories about court cases where white men claimed to have been "duped" by their wives (The New York Times 1924). One such case was Rhinelander v. Rhinelander. Leonard Kip Rhinelander was a white New Yorker from a wealthy family who married Alice Jones in secret since she was mixed-race (Smith-Pryor 2009). When his family found out, his father pushed Rhinelander to end the marriage. Rhinelander pursued an annulment on the grounds that he didn't know that his wife was a "mulatto". The case proved that he did. I followed this case through more than one hundred New York Times articles from 1924 to 1925. This coverage, as well as the other articles, made me confident that racial passing was seen as scandalous, and that white people were afraid of being tricked into tainting their whiteness in such a way. To confirm my theory, I decided to investigate how many books discussed or featured racial passing from 1900 to 1930.
My hope was that I would see how popular this topic was based on how many books were published before the census change. According to my research, "at least eight novels were published with stories about or including characters passing for white" between those years. This seemed like a small amount, until I realized that six of the books were published between 1920 and 1930. Many of the books encouraged mixed-race people to claim their blackness and stick with Black people or their lives would fall into ruin or worse (Larsen 2007). Nella Larsen's Passing (1929) depicts the emotional loss that passers felt after separating themselves from their families, friends, and Black culture. Films about racial passing stayed close to the same themes of the literature of the 1920s. I analyzed four films between 1930 and 1949, to see if racial passing was still a topic of concern after the census change and found that it was. I struggled to get a hold of the first film, "Veiled Aristocrats" by the Black director Oscar Micheaux since the full version was nowhere to be found (Micheaux 1932). In this film, the main character Rena's mother insists that it is best to pass for white to benefit from whiteness. Rena tries to do so, but ultimately feels that she must be true to herself. The version I purchased only ran about 48 minutes long and had harsh edits. Micheaux was the only filmmaker in my list who was Black and employed an "all colored cast" for his passing film. One of the films, "Imitation of Life", was so popular that it was remade in 1959 (Stahl 1934). In this classic story, a white widowed mother takes in a Black single mother and her light-skinned daughter Peola. Peola is ashamed of her blackness and passes for white. Despite the popularity of some of these films, they still faced censorship, as the census was not the only measure taken in preventing race mixing during segregation (McGehee 2006). There were codes that prohibited any interracial romances or insinuation of interracial relationships in films. Some of the films faced more barriers than the others. "Lost Boundaries", a film based on a real family in Keene, New Hampshire that passed for white for twenty years, was not allowed to be shown in Atlanta since it appeared to promote integration (Werker 1949; Lost Boundaries Becomes a Censorship Test Case n.d.). Each step of the way, the documents, books, articles, and films on race and racial passing were predominantly written and/or created by white Americans and therefore, may have projected their fears. Fears that the population of people of color in the United States was increasing rapidly and a loss of control over positions of power. Both of which, are echoed in white supremacist rhetoric today. Challenges Through my research process, I came up against many challenges using census records due to name changes or misspellings, inconsistencies of racial categorizations, and date restrictions. It is common for names to differ in government documents due to misunderstandings and census takers' interpretation of any person's identity. Census takers were also expected to record each person's race based on their own observations of how much "black blood" they had. They used categories such as "mulatto", "quadroon", and "octoroon" to describe persons with varying degrees of blackness (Prewitt 2013, pp. 56-59). 
Along with keeping track of the inconsistencies in names and race, I also had to maneuver around the lack of records from 1890 and inability to access those taken after 1940 due to the "72-year-rule" (History: The '72-Year Rule' n.d.). "Most of the 1890 census' population schedules were badly damaged by a fire in the Commerce Department Building in January 1921" (History: Why Can't I Find 1890 Census Records? n.d.). Alongside his race, my great grandfather's name changed from Leroy to Roy in his census records. Despite the name change, I could see that he had not chosen to officially pass for white as the rest of his information stayed the same and his racial classification became "negro". However, Anita F. Hemmings' records were a bit harder to track down as she changed her name to Anita H. Love, moved to New York, and began being classified as "white" on the census. Her descendent, Jillian Sim only found out that she had African American ancestry after her grandmother Ellen Love, Hemmings's daughter, passed away. Until that point, Sim was under the impression that her family was just like any other white family in the United States (Sim 1999). Hemmings's choice to pass for white meant that her descendants might never truly understand their family history or their ancestors as people. Both Leroy and Anita may have felt that they had to live a double life for their families to survive and succeed. This meant that they could not fully share themselves with their children and grandchildren. My great-grandfather likely faced the pain of this alone. Conclusions Through this research, not only did I gain even more understanding of why people passed for white and what that meant for their lives and the lives of their descendants, but I also realized that the U.S. Census change created a hurdle that many people cannot clear. One could be attempting to recreate their family tree only to see that one branch is impossible to find since their ancestor was classified as only white or only Black, thus creating an illusion of purity for white folks and disconnect for Black people. Many African Americans are faced with incomplete family trees and histories due to this erasure of mixed-race identity, as well as the lack of detailed record keeping during our ancestors' enslavement (Parker et al. 2015). Thus, people who search for their African American ancestors may hit this roadblock on top of many others. Had it not been for my paternal aunt, grandmother, and great-aunt, I would not have been able to track down Leroy and make the connections between my family, government documents, and where we stand in the legacy of this country. Not only does this erasure result in an incomplete understanding of African American ancestry, but it also contributes to an inaccurate history of the United States. Removing the "mulatto" category allowed White people to simply pretend that whiteness was never threatened by mixed-race people and racial passing, leaving present-day society to uncover the truth.
2022-01-20T16:23:12.007Z
2022-01-18T00:00:00.000
{ "year": 2022, "sha1": "51150d16b99a77e5ae21089be73bc481533ac1aa", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2313-5778/6/1/8/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "88515be6ce2bbea86ec1daf7cf00e4953f7fe528", "s2fieldsofstudy": [ "History" ], "extfieldsofstudy": [] }