Effects of a silicon probe on gold nanoparticles on glass under evanescent illumination

We have numerically investigated the influence of a nanoscale silicon tip in proximity to an illuminated gold nanoparticle. We describe how the position of the high-permittivity tip and the size of the nanoparticle impact the absorption, peak electric field and surface plasmon resonance wavelength under different illumination conditions. We detail the finite element method (FEM) approach we have used, whereby we specify a volume excitation field analytically and calculate the difference between this source field and the total field (i.e., scattered-field formulation). We show that a nanoscale tip can locally enhance the absorption of the particle as well as the peak electric field at length scales far smaller than the wavelength of the incident light.

©2011 Optical Society of America

OCIS codes: (350.4990) Particles; (300.1030) Absorption; (290.4020) Mie theory; (260.6970) Total internal reflection; (250.5403) Plasmonics; (050.1755) Computational electromagnetic methods.

References and links
1. R. Zia, J. A. Schuller, A. Chandran, and M. L. Brongersma, "Plasmonics: the next chip-scale technology," Mater. Today 9(7-8), 20–27 (2006).
2. K. A. Willets and R. P. Van Duyne, "Localized surface plasmon resonance spectroscopy and sensing," Annu. Rev. Phys. Chem. 58(1), 267–297 (2007).
3. E. A. Hawes, J. T. Hastings, C. Crofcheck, and M. P. Mengüç, "Spatially selective melting and evaporation of nanosized gold particles," Opt. Lett. 33(12), 1383–1385 (2008).
4. P. G. Venkata, M. M. Aslan, M. P. Mengüç, and G. Videen, "Surface plasmon scattering by gold nanoparticles and two-dimensional agglomerates," J. Heat Transfer 129(1), 60–70 (2007).
5. V. L. Y. Loke and M. P. Mengüç, "Surface waves and atomic force microscope probe-particle near-field coupling: discrete dipole approximation with surface interaction," J. Opt. Soc. Am. A 27(10), 2293–2303 (2010).
6. D. M. Schaadt, B. Feng, and E. T. Yu, "Enhanced semiconductor optical absorption via surface plasmon excitation in metal nanoparticles," Appl. Phys. Lett. 86(6), 063106 (2005).
7. M. W. Knight, Y. Wu, J. B. Lassiter, P. Nordlander, and N. J. Halas, "Substrates matter: influence of an adjacent dielectric on an individual plasmonic nanoparticle," Nano Lett. 9(5), 2188–2192 (2009).
8. C. E. Talley, J. B. Jackson, C. Oubre, N. K. Grady, C. W. Hollars, S. M. Lane, T. R. Huser, P. Nordlander, and N. J. Halas, "Surface-enhanced Raman scattering from individual Au nanoparticles and nanoparticle dimer substrates," Nano Lett. 5(8), 1569–1574 (2005).
9. R. M. Roth, N. C. Panoiu, M. M. Adams, R. M. Osgood, C. C. Neacsu, and M. B. Raschke, "Resonant-plasmon field enhancement from asymmetrically illuminated conical metallic-probe tips," Opt. Express 14(7), 2921–2931 (2006).
10. X. Chen and X. Wang, "Near-field thermal transport in a nanotip under laser irradiation," Nanotechnology 22(7), 075204 (2011).
11. R. Hillenbrand, F. Keilmann, P. Hanarp, D. S. Sutherland, and J. Aizpurua, "Coherent imaging of nanoscale plasmon patterns with a carbon nanotube optical probe," Appl. Phys. Lett. 83(2), 368–370 (2003).
12. P. L. Stiles, J. A. Dieringer, N. C. Shah, and R. P. Van Duyne, "Surface-enhanced Raman spectroscopy," Annu. Rev. Anal. Chem. 1(1), 601–626 (2008).
13. A. Rasmussen and V. Deckert, "Surface- and tip-enhanced Raman scattering of DNA components," J. Raman Spectrosc. 37(1-3), 311–317 (2006).
14. R. Fikri, T. Grosges, and D. Barchiesi, "Apertureless scanning near-field optical microscopy: numerical modeling of the lock-in detection," Opt. Commun. 232(1-6), 15–23 (2004).
15. W. Chen, A. Kimel, A. Kirilyuk, and T. Rasing, "Apertureless SNOM study on gold nanoparticles: experiments and simulations," Phys. Status Solidi B 247(8), 2047–2050 (2010).
16. R. Esteban, R. Vogelgesang, and K. Kern, "Full simulations of the apertureless scanning near field optical microscopy signal: achievable resolution and contrast," Opt. Express 17(4), 2518–2529 (2009).
17. R. L. Stiles, K. A. Willets, L. J. Sherry, J. M. Roden, and R. P. Van Duyne, "Investigating tip-nanoparticle interactions in spatially correlated total internal reflection plasmon spectroscopy and atomic force microscopy," J. Phys. Chem. C 112(31), 11696–11701 (2008).
18. D. Sadiq, J. Shirdel, J. S. Lee, E. Selishcheva, N. Park, and C. Lienau, "Adiabatic nanofocusing scattering-type optical nanoscopy of individual gold nanoparticles," Nano Lett. 11(4), 1609–1613 (2011).
19. COMSOL Multiphysics, finite element analysis simulation software, http://www.comsol.com.
20. SCHOTT Advanced Optics, N-BK7 datasheet (2011), http://edit.schott.com/advanced_optics/us/abbe_datasheets/schott_datasheet_nbk7.pdf.
21. P. B. Johnson and R. W. Christy, "Optical constants of the noble metals," Phys. Rev. B 6(12), 4370–4379 (1972).
22. Budget Sensors, "AFM probe model Tap300Al-G," http://www.budgetsensors.com/tapping_mode_afm_aluminium.html.
23. E. D. Palik, ed., Handbook of Optical Constants of Solids (Elsevier).
24. G. Mie, "Contributions on the optics of turbid media, particularly colloidal metal solutions," Ann. Phys. IV, 25 (1908).
25. C. F. Bohren and D. R. Huffman, Absorption and Scattering of Light by Small Particles (John Wiley and Sons, 1983).
26. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, 2008).
27. J. Zuloaga, E. Prodan, and P. Nordlander, "Quantum description of the plasmon resonances of a nanoparticle dimer," Nano Lett. 9(2), 887–891 (2009).

#142569 $15.00 USD Received 28 Apr 2011; revised 8 Jun 2011; accepted 9 Jun 2011; published 16 Jun 2011. (C) 2011 OSA, 20 June 2011 / Vol. 19, No. 13 / OPTICS EXPRESS 12679
Introduction

There have been many studies of the optical behavior of metal nanoparticles under different conditions. Motivating applications range from communications, computing, and data storage to medical diagnostics and therapies. The enhanced absorption, scattering, and electric fields associated with the localized surface-plasmon resonances (LSPRs) of metal nanoparticles find application in information processing [1], sensing [2], microscopy [3,4,10] and lithography [5,6], materials processing [3–5] and photovoltaics [6]. In many of these applications, nanoparticles are immobilized on substrates that strongly influence their optical behavior [7,8]. In addition, several studies have been conducted of the optical properties of nanoscale probes or tips [9–11]. These have largely addressed applications in near-field microscopy and enhanced Raman scattering [12,13], and have focused on metal or metal-coated tips. Our recent work using a nanoscale tip to locally modify nanoparticles [3] prompted us to study the behavior of a nanoscale tip in proximity to a metal nanoparticle on a substrate.

Although this geometry is frequently encountered, there have been only a few reports describing the optical phenomena associated with a tip near a particle. Fikri et al. conducted 2D finite element simulations of scanning probe/particle geometries to study the effect of probe vibration and lock-in detection on scanning near-field optical microscopy [14]. Chen et al. studied the operation of apertureless scanning near-field optical microscopy (aSNOM) both experimentally and numerically for gold nanostructures on a silicon surface [15]. Esteban et al. simulated the achievable resolution and contrast of aSNOM by a phase- and amplitude-imaging process using a gold particle embedded in a glass substrate and illuminated from above at a fixed wavelength [16]. In other work, Stiles et al.
experimentally studied the effect of a standard AFM tip on optical scattering from nanoparticles [17]. Most recently, Sadiq et al. investigated a system in which nanoparticles were probed with a metallic, grating-coupled near-field optical probe [18]. Although highly informative, these papers do not address the absorption and field enhancement associated with the geometry of interest here, i.e., a simple high-dielectric-constant tip near a nanoparticle resting on a substrate and illuminated using total internal reflection over a range of wavelengths. In addition to the intrinsic interest of this geometry, this configuration also required addressing several simulation challenges, which should prove useful in addressing a broad range of nanoscale optical problems via the finite element method (FEM).

In this paper, we explore how a nanoscale probe made of a high-permittivity material such as silicon (Si) affects the absorption cross-section (Cabs) and electric-field enhancement of spherical gold nanoparticles (AuNPs) of different sizes resting on a BK7 glass substrate. The simulated geometry is shown in Fig.
1. Illumination is from below under total internal reflection (TIR) conditions. We considered both transverse electric (TE) and transverse magnetic (TM) excitations, and we varied the position of the Si tip both laterally and vertically with respect to the AuNP. The effect of the tip can be viewed from two perspectives. First, even in the absence of a particle, light will evanescently couple, or optically tunnel, from the substrate to the high-permittivity tip. Second, the tip strongly perturbs the local dielectric environment of the NP. In both cases, one expects the presence of the tip to enhance absorption and scattering for the NP and to enhance the local electric field in the tip-NP gap. Moreover, one would expect the spatial localization of these near-field effects to be governed primarily by the geometry and not to be limited by the wavelength of illumination. The dielectric tip allows us to study these effects without introducing additional complexity associated with the LSPRs of a metal or metal-coated tip. Enhanced understanding of these effects will lead to better control of selective absorption and field enhancement, with possible applications in deterministic patterning, sensing and imaging.
Simulation

We used COMSOL Multiphysics 3.5a [19] with the RF module to implement the finite element method. COMSOL's 3D scattered harmonic propagation mode calculates the difference between a volume source field, defined in the absence of a scatterer, and the total field in the presence of the scatterer. This difference is referred to as the scattered field; it still provides access to the details of the near field and should not be confused with techniques for calculating the scattered far field. We defined the source field as a plane wave of wavelength between 450 and 650 nm. The wave is incident from within the substrate at a supercritical angle (50° from the normal to the substrate surface). The source field was defined analytically, using the Fresnel equations, over the entire 3D simulation domain, excluding the perfectly matched layers (PML), as if the NP and tip were absent. For TE simulations the source field was specified in terms of Ez, while for TM simulations the source field was specified by Ex and Ey.

Others have used a similar approach in which a volume source field is defined by launching a plane wave from a boundary in a simulation without the scatterer and substituting the resulting total field as the source field in a second simulation, which is otherwise identical to the first but for the explicit presence of the scatterer [7]. Our approach has two key advantages: (1) by defining the volume source field, we can eliminate unphysical diffractive effects associated with launching a plane wave from a truncated boundary or from using unrealistic periodic boundary conditions; and (2) by defining the source field analytically, we can use a single finite-element simulation to calculate the final scattered field.

Our 600-nm-radius spherical simulation domain is divided into two hemispherical half-spaces (Fig.
1(b)). The lower half-space is the substrate, BK7-type glass, and the upper is air (vacuum). The domain is surrounded by a 100-nm-thick perfectly matched layer (PML) backed by scattering boundaries to prevent spurious reflections. The perfectly matched layers are chosen to match the index of refraction in the adjacent domain, either air or glass. A 50-nm-diameter AuNP is placed in contact with the substrate at a single point. In reality, NPs typically contact substrates along a crystal facet; however, simulation of all the varieties of such an interface was impractical for this study. Nevertheless, we did take the precaution of limiting our search for the maximum electric-field enhancement to a rectangular region surrounding the upper hemisphere of the NP. This eliminates any spurious field peaks at the point contact. The refractive index values for BK7 glass were taken from the Schott catalog [20], while the values for gold were taken from Johnson and Christy [21].

The Si tip radius was 10 nm and the cone angle was 10° at the apex, based on the nominal dimensions of a typical probe used in atomic force microscopy (AFM) [22]. The refractive index of Si was taken from measurements compiled by J. A. Woollam, Inc.
and the University of Nebraska, which are very similar to the data given by Palik [23]. In the simulation, the tip length was truncated to 370 nm. While extending the tip through the PML would have approximated a more realistic structure (several microns in length), the scattered-field formulation in COMSOL assumes that the scattering objects are entirely confined within the physical domain. Illuminating at normal incidence produces a strongly guided wave in the Si tip, so the choice of tip length can significantly affect the simulation results. Loke and Mengüç have also explored the effect of tip truncation using a newly developed discrete dipole approximation with surface interaction analysis [5]. However, in our case, the wave-guiding effect is far weaker because we illuminate at oblique incidence beyond the critical angle. Because the evanescent wave decays exponentially, by the upper end of the tip the norm of the electric field (|E|) is reduced to around 3% of the source field amplitude. Under TIR illumination, changing the tip length from 370 nm to 670 nm causes no more than a 3.4% difference in Cabs. The gap between the tip and the PML was 100 nm, or approximately one fifth of a wavelength. This choice of gap allows the mesh elements to remain approximately the same size as in the surrounding air region, and thus has minimal impact on computation time.
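The analytic TE source field and the evanescent-decay estimate quoted above can be sketched numerically. The following is our illustrative Python sketch, not the authors' COMSOL implementation; the values n1 = 1.52 (BK7), n2 = 1 (air), 532-nm wavelength, and 50° incidence are assumptions based on the parameters stated in the text.

```python
import numpy as np

# TE (s-polarized) source field for total internal reflection at a glass/air
# interface, defined analytically from the Fresnel equations as in a
# scattered-field formulation. Assumed values: n1 (BK7) = 1.52, n2 (air) = 1,
# free-space wavelength 532 nm, incidence 50 deg (critical angle ~41.1 deg).
n1, n2 = 1.52, 1.0
wavelength = 532.0                    # nm
theta = np.deg2rad(50.0)
k0 = 2 * np.pi / wavelength

kx = n1 * k0 * np.sin(theta)                # transverse wavevector, conserved
kz1 = n1 * k0 * np.cos(theta)               # normal component in the glass
kz2 = np.emath.sqrt((n2 * k0)**2 - kx**2)   # purely imaginary beyond the critical angle

r = (kz1 - kz2) / (kz1 + kz2)               # |r| = 1 under TIR
t = 2 * kz1 / (kz1 + kz2)                   # field continuity: 1 + r = t

def source_Ez(x, z):
    """Source E_z: incident + reflected wave in the glass (z < 0),
    evanescent transmitted wave in the air (z >= 0); x, z in nm."""
    if z < 0:
        return np.exp(1j * (kx * x + kz1 * z)) + r * np.exp(1j * (kx * x - kz1 * z))
    return t * np.exp(1j * kx * x) * np.exp(1j * kz2 * z)

# Amplitude decay with height: the 1/e length is ~140 nm for these parameters,
# so at ~425 nm above the interface (50-nm NP + gap + 370-nm tip) only a few
# percent of the interface amplitude remains, consistent with the text's claim.
kappa = kz2.imag                            # amplitude decay rate, 1/nm
fraction = abs(source_Ez(0.0, 425.0)) / abs(source_Ez(0.0, 0.0))
```

This makes concrete why tip truncation is benign under TIR: the field driving the upper portion of the tip is already a small fraction of the source amplitude.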
The vertical and lateral positions of the tip were varied over ranges of 1 to 100 nm and ±320 nm, respectively. For all the simulations described here we used a single plane of symmetry to reduce computation time. The same geometry was used for all simulations in order to ensure consistent meshing. Therefore, to model a suspended particle in free space, we defined every subdomain with a refractive index of unity except for the AuNP. Likewise, for simulations without the tip, the tip subdomain was defined with refractive index 1. The AuNP had 359 mesh elements, and the maximum element sizes in the tip, PML and remaining subdomains were 50 nm, 150 nm and 75 nm, respectively. These parameters were optimized to run a single-wavelength simulation on dual Intel Xeon quad-core processors (2.27 GHz, 24 GB RAM) in 128 s. Although we had only one element between the tip and particle, using 1000 elements instead of 1 did not change the Cabs of the AuNP by more than 1%. Reducing the maximum element size from 75 nm to 50 nm in the substrate and air domains reduced the error compared to Mie theory from 5% to only 2% but required 27 times longer simulations.

Validation

We first compared our FEM results with the Lorenz-Mie theory [24] for a 50-nm-diameter AuNP surrounded by air (n = 1). The absorption cross-section was calculated for both TE and TM waves (defined with respect to the symmetry plane but physically indistinguishable). Over the wavelength range from 450 to 650 nm, the largest deviation between the analytical and FEM absorption cross-sections was 5%, as shown in Fig.
2(a). Secondly, we ran the simulation with the lower half-space as BK7 but without any scatterer in the simulation domain. In this case, any non-zero scattered field results from numerical errors or unphysical reflections from the boundaries. The maximum norm of the scattered field was found to be three orders of magnitude lower than the source field, while the average norm was four orders of magnitude lower.

The geometry, with or without a tip, is azimuthally symmetric, provided the tip is not laterally offset with respect to the NP. For normally incident illumination, there is no physical difference between TE and TM waves; however, the electric field is oriented differently with respect to the symmetry plane. Thus, it is important to confirm that the polarization of the normally incident wave does not significantly affect the results. In fact, with only a particle on the substrate (tip absent), Cabs of the AuNP at normal incidence differed by no more than 0.2% between TE and TM simulations in the 450-650 nm wavelength range. Convergence of the calculations for the entire geometry (tip, particle, and substrate) was examined by enlarging the domain from 600 nm to 1400 nm radius in steps of 200 nm, for each polarization at a 60° angle of incidence. Cabs in the AuNP changed by less than 0.6% for any given step in the domain size.

Effects of Si tip on AuNP absorption and field enhancement

It is well established that Cabs near the LSPR of nanoparticles increases as the permittivity of the surrounding medium increases. The peak absorption wavelength also redshifts as the energy associated with the plasmon resonance decreases in the more strongly polarizable environment [25]. As a result, the presence of the substrate alone causes a small increase and baseline redshift in the absorption resonance with respect to the particle in free space [26]. Comparing Fig. 2(a), the free-space result, with the "No tip" cases in Fig.
2(c) and (d) shows this clearly. More importantly, the high dielectric constant of the Si tip can strongly modulate the Cabs of the AuNP.

Figure 2(b) plots the absorption cross-section as a function of tip-NP separation at a fixed wavelength of 532 nm. Simulations were conducted down to 1-nm separation; however, it has been established that classical electromagnetic analysis is insufficient to accurately quantify interactions at these length scales [27]. The data point for a 1-nm gap is thus included as a reference for future comparison with coupled electromagnetic and quantum-mechanical simulations.

For the TE case, the electric field is polarized transverse to the tip-NP axis and the tip has little effect on particle absorption, even at separations approaching 1 nm. Likewise, Fig. 2(c) indicates that the absorption spectrum does not change significantly with tip-NP separation. This is expected, since the surface charge distribution, and hence the electric field, is concentrated at the sides of the NP, away from the tip. As a result, the tip is only a weak perturbation and does not dramatically affect Cabs. In contrast, for TM illumination, the electric field is partially polarized along the tip-NP axis. In this case, charge is concentrated at the top and bottom of the AuNP, as well as at the apex of the Si tip, and the tip strongly perturbs the electric field around the AuNP. As the vertical distance between tip and NP decreases, the increasing polarization of the high-permittivity tip reduces the resonant frequency of the configuration [26] and leads to a longer resonance wavelength. As can be seen in Fig. 2(b) and (d), the absorption increases and the resonance wavelength redshifts as the tip approaches the particle.

We also investigated the dependence of absorption on lateral separation between the tip and the particle. In this case the separation is parallel to the substrate-air interface, as depicted in Fig.
1(a). In order to simplify the simulation setup, the AuNP, rather than the Si tip, was moved in the simulation domain. A positive number indicates that the tip is to the left of the particle according to the view in Fig. 1(a). The vertical separation is kept constant at 5 nm. As the tip is brought closer to the NP laterally, Cabs is enhanced (Fig. 3(a)) under TM illumination. The maximum absorption occurs when the tip is located 5 nm to the left of the particle. Asymmetry in the relationship between Cabs and lateral position is not surprising given the asymmetric illumination. We attribute the lower, broader peak at separations near 150 nm, which occurs when the tip is to the right of the particle, to interference effects resulting from reflection and scattering from the tip when the tip-particle separation is approximately λ/4. Once again, the tip has little effect for TE illumination. Importantly, the absorption enhancement is spatially localized at the scale of the tip-apex-NP geometry, not the wavelength of the incident light. As a result, NPs can be selectively targeted for modification, sensing, or processing at a scale far below the diffraction limit.

In addition to considering the integrated absorption of the AuNP, we also investigated the maximum field enhancement. These data were sampled at 115,351 points in the upper half of the AuNP to find the maximum field, since the maximum field occurs at various positions around the particle surface depending on the lateral displacement of the Si tip with respect to the AuNP. Figure 3(b) plots the field enhancement observed at the top surface of the particle. In this case, the localization of the field enhancement is even more pronounced than the corresponding effect for Cabs (Fig. 3(a)). As the tip moves across the top of the AuNP, both the vertical and horizontal gaps between the tip surface and the metal surface are reduced. Thus, the enhancement rapidly increases and is localized with a full-width at half-maximum of 40 nm.
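The max-field search described above can be sketched as follows. This is only an illustration: the field function here is a hypothetical placeholder standing in for interpolation of the FEM solution, and the tip-apex position and decay scale are our assumptions, not values from the paper.

```python
import numpy as np

# Sketch of searching sampled surface points for the peak field enhancement.
# Restricting samples to the particle's upper hemisphere avoids the spurious
# field peak at the point contact with the substrate, as noted in the text.
rng = np.random.default_rng(1)
r_np = 25e-9                                   # 50-nm-diameter AuNP, radius in m

# Uniformly sample directions, folded onto the upper hemisphere (z >= 0).
v = rng.normal(size=(115351, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
v[:, 2] = np.abs(v[:, 2])
points = r_np * v                              # sample points on the NP surface

def field_norm(p):
    # Hypothetical |E| model that peaks under an assumed on-axis tip apex;
    # in the actual study this would be the interpolated FEM field norm.
    tip_apex = np.array([0.0, 0.0, r_np + 5e-9])   # assumed 5-nm gap
    d = np.linalg.norm(p - tip_apex, axis=-1)
    return 1.0 + 10.0 * np.exp(-d / 10e-9)

enhancement = field_norm(points)
best = points[np.argmax(enhancement)]          # location of the peak field
```

With a field that decays monotonically away from the apex, the recovered peak sits at the top of the particle, directly beneath the tip, mirroring the localization seen in Fig. 3(b).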
Figure 4 plots the maximum field enhancement as a function of vertical separation between the tip and the particle. Exponential decay of the field enhancement as a function of increasing distance between a tungsten tip and a Si substrate was recently shown by Chen and Wang [10]. Our tip dimensions were similar to their optimum tip geometry, though we used a Si tip and glass substrate. One would not expect a purely exponential dependence for the substrate-NP-tip case, but Fig. 4(a) shows a rapid decrease in field similar to Ref. [10], with an even larger field enhancement. As mentioned earlier, we did not take quantum-mechanical effects into account in our simulations. With tip-NP gaps of 1 nm or less, the purely classical approach leads to a monotonic increase in the field enhancement, yet tunneling effects may very well modify the optical response and reduce the electric-field enhancement at such small separations [27]. Again, for TE polarization the tip has little effect.

Fig. 4. Field enhancement as a function of vertical separation between tip and particle. Inset: cross-sectional plot through the plane of symmetry for the norm of the electric field with a 5-nm tip-particle separation. TM illumination is from below under TIR conditions at a wavelength of 532 nm and a 50° angle of incidence. Note the localization of the field between tip and particle.

Other geometrical parameters affect absorption and field enhancement as well, in particular the relationship between tip radius and particle size. Thus, we simulated the absorption enhancement for particles with diameters ranging from 10 nm to 50 nm located 5 nm below the same silicon tip at 532 nm incident wavelength. The absorption efficiency, Qabs = Cabs/(πr²), is plotted in Fig. 5. For a particle much smaller than the incident wavelength, Cabs is proportional to the radius cubed [25]; thus, an approximately linear increase in the absorption efficiency (Qabs) as a function of diameter is expected. This can be seen in Fig.
5 for both the TE and TM cases in the absence of the tip. In the TE case, the presence of the tip does not dramatically alter the absorption of the particle. For TM illumination, the tip-induced enhancement in Qabs is significant. The greatest enhancement in absorption efficiency is observed when the radius of the particle is smaller than the radius of the tip. This is not surprising because in this case the tip provides a strong perturbation on the local dielectric environment of the particle. As the particle radius becomes larger than the tip radius, the absorption enhancement lessens, as can be seen by the convergence of the "TM with tip" and "TM without tip" curves in Fig. 5. Thus, to selectively target particles for modification using a nanoscale tip, one must balance absorption enhancement, which requires a large tip-radius to particle-radius ratio, with spatial selectivity, which requires a smaller tip radius. Future multi-particle simulations are required to quantify this tradeoff.
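The size scaling invoked above can be illustrated in the small-particle quasi-static (dipole) limit. This is a sketch, not the full Lorenz-Mie calculation used in the paper, and the gold permittivity at 532 nm (eps ≈ -4.7 + 2.4j, an approximate Johnson & Christy value) is an assumption.

```python
import numpy as np

# Quasi-static absorption: C_abs = k * Im(alpha), with polarizability
# alpha = 4*pi*r^3 * (eps - eps_m) / (eps + 2*eps_m). Since C_abs ~ r^3,
# Q_abs = C_abs / (pi r^2) grows linearly with particle radius, matching the
# approximately linear no-tip curves discussed in the text.
wavelength = 532e-9                    # m
eps, eps_m = -4.7 + 2.4j, 1.0          # assumed Au permittivity at 532 nm; air
k = 2 * np.pi / wavelength

def Q_abs(radius_m):
    alpha = 4 * np.pi * radius_m**3 * (eps - eps_m) / (eps + 2 * eps_m)
    return k * alpha.imag / (np.pi * radius_m**2)

radii = np.array([5e-9, 12.5e-9, 25e-9])   # 10- to 50-nm diameters
q = Q_abs(radii)
# q / radii is constant: Q_abs is exactly linear in radius in this limit.
```

The dipole limit ignores retardation and the substrate, so the numbers are only indicative; the linearity of Qabs in radius is the point of the exercise.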
Conclusions

In this study, we have explored the effect of a nanoscale tip on the absorption and field enhancement for a metallic nanoparticle illuminated under TIR conditions. For an electric field polarized orthogonal to the tip axis, little effect is observed. If the electric field is partially polarized along the tip axis, then both the absorption and the electric fields are strongly enhanced. The enhancement is accompanied by a redshift in the surface-plasmon resonance wavelength of the particle. These effects are observed when the vertical and horizontal separations of the tip and nanoparticle are significantly less than the wavelength of the illuminating light. Thus, the technique can be used to selectively excite nanoparticles, and thus drive modification processes, far below the diffraction limit. The changes in absorption and field enhancement with particle size indicate that there is likely to be a tradeoff between the spatial localization of these effects and their maximum enhancement. Further studies are required to better understand this tradeoff; however, the 3D simulations presented here already improve the understanding of tip-particle interactions and should influence applications including tip-based nanomanufacturing, imaging, and sensing.

Fig. 1. (a) Cross-sectional schematic of the geometry of interest. A Si tip (length = 370 nm, radius = 10 nm, cone angle = 10°) is illuminated at an angle θ of 50°. For TE/TM polarization, the electric/magnetic field is transverse to the plane of incidence. (b) Cross-section of the 3D simulation geometry with a truncated Si tip suspended above a gold nanoparticle (AuNP) on a glass substrate. 100-nm-thick perfectly matched layers (PML) enclose the simulation domain.

Fig. 2.
(a) Validation against Lorenz-Mie theory at a 60° angle of incidence for both TM and TE polarization (physically indistinguishable but numerically implemented using distinct equations and symmetry conditions). The worst-case error in Cabs is below 5%. (b) Effect of tip proximity on the Cabs of the AuNP at 532 nm. For TM illumination, with a relatively large electric-field component along the tip axis, absorption increases rapidly as the tip approaches the NP. For TE illumination, the tip has little effect. (c) Under TE illumination, the increase and redshift of Cabs are due to the substrate only, and varying the tip-NP vertical separation has little effect; as a result, all the curves overlap. (d) Under TM illumination, Cabs increases and the resonance wavelength redshifts by 5 nm as the tip approaches the AuNP. In all cases the y-axis scales were kept the same to allow direct comparison. The black vertical lines in (a), (c) and (d) indicate the resonant wavelength of a 50-nm-diameter AuNP in free space.

Fig. 3. (a) Change of the Cabs of the AuNP as a function of lateral tip-NP separation. Positive values indicate the tip is to the left of the NP from the perspective of Fig. 1(a). (b) Maximum electric-field enhancement as a function of lateral tip-NP separation. In both cases, the enhancement is localized to scales far below the diffraction limit of the incident 532-nm light.

Fig. 5. Absorption efficiencies of AuNPs of different sizes with a constant 5-nm vertical separation between tip and particle. Illumination is at 50° (TIR) at 532 nm wavelength. The tip has little effect under TE illumination. For TM illumination there is significant enhancement of the absorption efficiency. This effect becomes less pronounced as the particle radius grows with respect to the tip radius.
Determinants of women entrepreneurs' firm performance in a hostile environment

This study examines the determinants of firm performance for women entrepreneurs in the context of an emerging economy affected by a turbulent political and socio-cultural environment. The study draws from the resource-based and institutional-based views embedded in the gender-aware 5M (money, management, market, macro/meso environments, and motherhood) model. A generalized structural equation model is used to analyze data from Egypt, the setting for this study. The study finds a positive relationship between women entrepreneurs' human capital and firm performance. However, no detectable relationship emerges between social capital and firm performance or between women's gender-related personal problems and firm performance. The findings suggest new boundary conditions in the domain of female entrepreneurship in a hostile environment, with important implications for practice and research. The results show that in a hostile institutional environment, only human capital matters. Social capital is not relevant. This finding has theoretical and practical implications. New theoretical approaches to studies of entrepreneurial processes, including gender-related studies, in hostile environments should be developed. Our findings also suggest that country

Introduction

Most research on women entrepreneurs focuses on developed countries, while limited knowledge exists on women entrepreneurs in emerging economies with inadequate regulations and inefficient systems (Kimosop, Korir, & White, 2016; Mas-Tur, Pinazo, Tur-Porcar, & Sánchez-Masferrer, 2015). Even less research exists on developing countries that have recently undergone dramatic political and socio-cultural unrest, leading to hostile environments for business activity. This study shows that the current state of female entrepreneurship requires better definitions of new boundary conditions in cases of volatile and hostile dynamic environments.
The extant approaches to female entrepreneurship typically invoke the family embeddedness perspective (Aldrich & Cliff, 2003), though this perspective is usually applied in mature and developed economic and socio-cultural settings, such as the United States or Canada. Furthermore, the family embeddedness perspective better explains new business start-ups and their access to resources during the launch phase of the venture rather than the entrepreneurial processes taking place throughout all stages of firm development. Finally, the perspective mainly applies to nuclear families, not to the idiosyncrasies of female entrepreneurship. In particular, the family embeddedness perspective applies, by definition and purpose, to family venture start-ups headed by either male or female entrepreneurs operating in a stable institutional environment. In response to this lack of gender focus within the family embeddedness perspective, Brush, de Bruin, and Welter (2009) propose a context-dependent 5M (money, management, market, macro/meso environments, and motherhood) model to better account for the real nature and intricacies of the dynamics inherent in female entrepreneurship. However, the 5M model draws from the institutional-based view (IBV), which assumes that institutions are reliable and remain stable over time. As such, the IBV may not be appropriate for less stable contexts found in developing and/or emerging markets. Therefore, Welter and Smallbone (2011) extend the institutional approach and tailor it to the dynamics of emerging economies. In their extension, they focus on the impediments entrepreneurs encountered in the former Soviet Republics, though they do not exclude its usefulness in other challenging environments.
Absent in the extant literature is a systemic approach to understanding entrepreneurial processes in hostile environments experiencing different forms of social unrest, the effects and aftermath of war, and other revolutionary movements prevalent in some countries. That is, no specific or sufficient approach or theory exists to address business operations in these hostile environments. Thus, Welter and Smallbone's (2011) theory for understanding female entrepreneurship must be expanded still further to embrace hostile dynamics. The current study shows that the relationships between variables of interest typically found in the extant models, which are based on data from mature, stable, or even challenging environments, are not valid in hostile settings. We show that what matters when operating in a hostile environment is human capital, not social capital. To examine women entrepreneurs in a hostile environment, we conduct our study in Egypt, where the Arab Spring, a revolutionary wave of demonstrations, took place in 2011-2012. The economic, political, and cultural environment that Egyptian women entrepreneurs face is unpredictable and constraining for launching and growing a business. In such volatile socio-political environments, the performance and sustainability of women-owned businesses face unique challenges that can negatively affect the business. In line with prevailing views that firm performance should be measured along multiple dimensions (Zhao, Seibert, & Lumpkin, 2010), we use a construct based on four performance-related dimensions: business income, geographic sales expansion, years in business, and firm size. Because women-owned businesses operate in multi-dimensional, multi-layered, and gendered environments, this study adopts a multi-theoretical approach (Meyer, Estrin, Bhaumik, & Peng, 2009) by integrating the resource-based view (RBV) and the IBV within the aforementioned 5M framework of female entrepreneurship.
Specifically, the RBV incorporates the first three 5M concepts (money, management, and market), while the IBV taps into the institutional aspects of female entrepreneurship, namely motherhood and the macro/meso environments. To the best of our knowledge, this study is the first to include the RBV and IBV within the framework of the 5M model. The structure of the study is as follows: We first present the theoretical framework and hypotheses. Then, we discuss the methodology and results. Finally, we review the limitations of the study, suggest opportunities for further research, and outline our conclusions.

Theoretical framework

The 5M framework seems most appropriate for the study of female entrepreneurs in Egypt, where both resources at the individual and firm levels and the country's institutions simultaneously exercise a major impact on women's entrepreneurial performance. Resources encompass the original, fundamental 3Ms (money, management, and markets) originating from the mainstream economics and management-driven view of entrepreneurship (Bates, Jackson, & Johnson, 2007). This study considers several such resources at the firm and individual levels. These resources are not easily imitated, are firm-specific, and are non-transferable (Eddleston, Kellermanns, & Sarathy, 2008). Therefore, the RBV is a relevant theoretical framework. Women entrepreneurs in Egypt are also affected by the country's institutions. The economic growth in developing countries is often marked by turbulence, as is the case of Egypt (Hampel-Milagrosa, Loewe, & Reeg, 2015). Given the volatile socio-political nature of this region, the survival and long-term sustainability of women-owned enterprises are unpredictable. The national-level policies, culture, laws, and economy define a macro environment, while regional-level organizations reflect the meso setting.
Finally, macro/meso surroundings intermesh with a woman's family and domestic milieu, which is strongly gender-related and constitutes the last M of the 5M framework, motherhood. Because these institutions are important environmental factors that condition female entrepreneurship, the IBV of the firm is relevant in the discussion of factors affecting firm performance. According to the institutional approach (North, 1990), institutions constitute the rules of the game in a society and comprise formal and informal frameworks that, ideally, are stable and operate efficiently. The formal dimension encompasses constitutional, legal, and organizational rules, while informal institutions include codes of conduct, values, and norms in a society. Stability and efficiency of institutions apply to developed and mature systems rather than to emerging and transition economies, which are characterized by uncertain, ambiguous, and turbulent institutional frameworks (Welter & Smallbone, 2011). As mentioned, the current study embraces Welter and Smallbone's (2011) extension of the institutional approach, which is tailored specifically to emerging economies. This modification assumes a two-way relationship between institutions and entrepreneurial actions: not only do institutions influence entrepreneurs, but entrepreneurs, through their actions, spur institutional changes. Furthermore, these entrepreneurial reactions to challenging institutional conditions are heterogeneous, depending on the environmental conditions, the firm's characteristics (e.g., firm age, size), and the entrepreneur's background (e.g., managerial skills, education level, networks, other forms of social capital). Welter and Smallbone (2011) suggest that their extension of institutionalist theory is appropriate for a wider range of contexts, including not only the former Soviet Republics but also other emerging market economies.
In adopting this perspective, we extend it even further by including the context of a developing country, Egypt, that not only is undergoing challenging transformations but also is experiencing extraordinarily hostile political and socio-cultural unrest. Brush et al. (2009) ground their 5M model exclusively in institutional theory in expanding the original 3M to a 5M model. The current study is the first to suggest integrating the 5M model with the RBV and IBV. Numerous international business scholars repeatedly call for more integration between the RBV and the IBV (e.g., Gaur, Kumar, & Singh, 2014; Meyer et al., 2009), and such integration finds support in research on entrepreneurship (e.g., Yamakawa, Peng, & Deeds, 2008). As Yamakawa et al. (2008, p. 64) succinctly note, "insightful as each of the perspective is, none of them is likely to be strong enough to sustain on its own; rather, it is the combination of their insights that lead to a better and more insightful understanding of the complex phenomenon." Thus, positioning the 5M model within the two integrated views provides a useful theoretical framework for analyzing women's entrepreneurial processes.

Firm performance

The performance of entrepreneurial firms is an important area of theoretical and practical debate, particularly for women-owned businesses. Eddleston et al. (2008) argue that multiple performance measures are warranted because of the underlying multidimensionality of the performance construct. Financial performance, market performance, and organizational performance are typical outcomes. This study uses four measures: business revenue, geographic sales expansion, years in business, and firm size. Business revenue is among the most frequent and valid indicators of firm financial performance (Mari, Poggesi, & De Vita, 2016).
Geographic sales expansion serves as a proxy for market performance, depicting the entrepreneur's ability to move the business across market boundaries and seize opportunities. Several studies show that the first few years following the start of an enterprise are the most challenging period for its survival (Staniewski, Janowski, & Awruk, 2016). Therefore, this study uses the number of years the firm has been in operation as a proxy for business longevity, which is a reasonable indicator of firm performance because longevity generally indicates that a firm has been successful long enough to avoid liquidation and, as such, is related to firm survival (Zhao et al., 2010). Finally, firm size is another frequently used measure of performance (Jennings & Brush, 2013). We use the number of employees to reflect size (Zhao et al., 2010). The literature summarizes the importance of various factors for women's entrepreneurial success. These predictors include entrepreneurial resources (e.g., human capital), institutions (e.g., social capital), and socio-cultural factors (e.g., gender-related personal problems, the work-family interface) (Hsu, Wiklund, Anderson, & Coffey, 2016; Jennings & McDougald, 2007; Loscocco & Bird, 2012). The current study considers the entrepreneur's formal education, her management skills, and her age as human capital. These dimensions are intertwined with social networks and family support, the components of social capital, as well as socio-cultural dynamics that shape women entrepreneurs' unique set of personal problems. The study's theoretical model (see Fig. 1) asserts that women entrepreneurs' human capital, social capital, and gender-related personal problems are all associated with firm performance.
Human capital and firm performance

Cressy (1999) refers to the entrepreneur's education, professional experience, and management skills, elements of the 5M model, as specific human capital and defines general human capital as socio-demographic characteristics, such as age or marital status (Madsen, Neergaard, & Ulhoi, 2003). This section develops hypotheses linking human capital, exemplified here by education, management skills, and age, with firm performance. With regard to the relationship between the entrepreneur's age and firm performance, some studies find no link between the two variables (Akehurst, Simarro, & Mas-Tur, 2012; Lafuente & Rabetino, 2011; Lerner & Almor, 2002; Mas-Tur et al., 2015), while other studies find a positive link between age and performance (Pinazo-Dallenbach, Mas-Tur, & Lloria, 2016). Younger women entrepreneurs encounter greater difficulty in securing financing because creditors may question their creditworthiness, which translates into lower firm performance (Pinazo-Dallenbach et al., 2016). Furthermore, because women tend to have greater responsibility for childcare activities than men (Sullivan & Meek, 2012), mature women entrepreneurs may find it easier to balance work-family conflicts as their children are likely older and require less attention. In addition, the overall family situation is more settled compared with that of younger women. This may also contribute to better firm performance. In summary, we posit that in challenging/hostile environments, human capital will have a positive effect on firm performance, in line with the RBV. A woman entrepreneur's unique human capital elements transfer from one environment to another, and she must use higher levels of human capital skills to run a business successfully in hostile environments.

H1 Egyptian women entrepreneurs' level of education is positively related to their firms' performance.
H2 Egyptian women entrepreneurs' management skills are positively related to their firms' performance.

H3 Egyptian women entrepreneurs' age is positively related to their firms' performance.

Social networks' support and firm performance

With regard to entrepreneurs' social networks, an element of the meso environment in the 5M model, studies suggest that they are critical for firm performance (Davidsson & Honig, 2003; Hanson & Blake, 2009; Haynes et al., 2015). For example, a key way for entrepreneurs to compensate for limited resources when starting a new business is to use their social networks (Jones & Jayawarna, 2010; Urbano, Ferri, & Noguera, 2014). Social networks play an especially important role in the success and survival of women-owned businesses (Apergis & Pekka-Economou, 2010; Berrou & Combarnous, 2012; Estrin & Mickiewicz, 2011; Gray & Finley-Hervey, 2005; Kwong, Jones-Evans, & Thompson, 2012; Lans, Blok, & Gulikers, 2015; Noguera, Álvarez, Merigo, & Urbano, 2015), particularly in developing countries. When women have access to networks, they are more likely to overcome the difficulties of obtaining funding for their ventures (Carter, Brush, Greene, Gatewood, & Hart, 2003; Hodges et al., 2015; Kuada, 2009), which may result in better performance. According to Hodges et al. (2015), Manolova, Manev, Carter, and Gyoshev (2006), Manolova, Manev, and Gyoshev (2014), and Xheneti and Bartlett (2012), in transition economies, access to networks is especially important because of resource scarcity and the unpredictable institutional environment. Similarly, other studies suggest that access to networks is beneficial for the performance of women entrepreneurs in Arab countries, mainly due to their highly contextual nature. In these countries, informal and social networks determine most economic outcomes (Cunningham & Sarayrah, 1993; El-Said & Harrigan, 2009).
A lack of social and professional networks among women entrepreneurs in the Gulf countries is an obstacle to their firms' growth. Thus, networking is more relevant and plays an instrumental role in environments in which institutions are weak and trust in institutions is low, both of which are characteristic of developing economies (Danis, Chiaburu, & Lyles, 2010; De Clercq, Danis, & Dakhli, 2010; Prasad et al., 2013).

H4 The use of social networks by Egyptian women entrepreneurs is positively related to their firms' performance.

Family organizational support and firm performance

Social capital includes the capital embedded in family relationships (Cetindamar, Gupta, Karadeniz, & Egrican, 2012; Chang, Memili, Chrisman, Kellermanns, & Chua, 2009). These relationships are part of the "motherhood" dimension of the 5M model. Research shows that family is an important source of support to entrepreneurs (Anderson, Jack, & Dodd, 2005; Chang et al., 2009). Family support for the business owner is part of so-called familiness (Chrisman, Chua, & Litz, 2003; Zaefarian, Eng, & Tasavori, 2016) and constitutes a fundamental element for business success. Family support in providing emotional sustenance to entrepreneurs is also important (Hoang & Antoncic, 2003; Liao & Welsch, 2005; Prasad et al., 2013). Women entrepreneurs benefit from family-to-business affective support to a greater extent than their male counterparts (Powell & Eddleston, 2013). Family members can provide support in the form of emotional encouragement, understanding, attention, and an overall positive attitude, which transfers from the family to the business domain (Eddleston & Powell, 2012; Powell & Eddleston, 2013) and contributes to family cohesiveness (Edelman, Manolova, Shirokova, & Tsukanova, 2016). This support, in turn, heightens an entrepreneur's creativity when responding to highly dynamic environments, which leads to improved business performance.
Women entrepreneurs, when supported by their families, show greater entrepreneurial persistence and risk taking, which may be positively related to venture success (Bruderl & Preisendorfer, 1998; Prasad et al., 2013). Increased entrepreneurial self-efficacy, bolstered by family support, may raise expectations of venture performance and further contribute to growth potential. Positive emotions enhance general emotional well-being (Frederickson & Joiner, 2002), which also contributes to better firm performance. Therefore, family support is fundamental for business success (Singh, Reynolds, & Muhammad, 2001). This study focuses on the organizational help that family members may provide during the venture preparation stage and later during the business creation stage. The family and the family's intermixing of resources with the business account for a substantial proportion of variance in business outcomes (Stafford & Tews, 2009). When family members help launch a business, they are expected and more likely to exert some control over strategic decisions. This can affect performance (Gómez-Mejía, Haynes, Núñez-Nickel, Jacobson, & Moyano-Fuentes, 2007). However, research on the effects of family involvement in management is inconclusive (Kim & Gao, 2013). According to research, this relationship is positive (Anderson & Reeb, 2003; Mari et al., 2016; Powell & Eddleston, 2013; Prasad et al., 2013), negative (Filatotchev, Lien, & Piesse, 2005; Hatak, Kautonen, Fink, & Kansikas, 2016; Kellermanns, Eddleston, Sarathy, & Murphy, 2012; Koenig, Kammerlander, & Enders, 2013; Westhead & Howorth, 2006), or inconclusive (Villalonga & Amit, 2006). However, in Middle Eastern countries, the lack of family support for women entrepreneurs is a significant barrier. This lack of family support is strongly rooted in uncertainty avoidance, the masculine nature of the society, and collectivism, three elements of Hofstede's taxonomy of the cultural values in Arab countries.
Thus, it stands to reason that the presence of any family organizational support will improve firm performance.

H5 Family organizational support for Egyptian women entrepreneurs is positively related to their firms' performance.

H6 Gender-related personal problems of Egyptian women entrepreneurs are negatively related to their firms' performance.

Data collection

The study uses a self-administered questionnaire developed by Hisrich, Bowser, and Smarsh (2006), which we then translated and back-translated according to the process described in Earley (1987). Data collection took place in 2014-2015. In total, 150 questionnaires were distributed via mail and through field visits to companies over this one-year period, and 117 completed questionnaires were returned, for a 78% response rate. The majority of respondents are under 40 years of age (74%) and have at least a college degree (62%). Only 14% are married. Their businesses are relatively mature (73% are at least three years old, and 39% have been in business at least five years). Women have a leadership role in the business (86%) and the majority ownership (54%). The firms are unevenly split between family businesses (15%) and non-family businesses (55%), with 30% of responses missing in this area. The businesses were started either with family members (51%), alone (21%), or with non-relatives (28%) and mostly with internal funds, that is, either the entrepreneur's own savings or money borrowed from the family (89%). Table 1 presents selected characteristics of the sample in greater detail.

Measures

Business income measures the entrepreneur's current business annual income, which is coded as 1 when business income exceeds the Egyptian national average income per person and 0 otherwise. Previous research also uses a categorical measure of firm performance, though with more than two categories (Diaz-Garcia & Brush, 2012; Mari et al., 2016).
Geographical sales expansion measures the entrepreneur's ability to expand her current market scope, either from local to national or from national to international. It is coded as 1 when the scope of the business has expanded outside the current market boundaries and 0 otherwise. Years in business is a proxy for business longevity. It is coded as 1 when the entrepreneur has been in business for at least five years and 0 otherwise (Staniewski et al., 2016). Business size is coded as 1 for a firm with at least 10 employees and 0 otherwise. We used the cutoff level of 10 employees according to a classification of small and medium-sized enterprises into micro enterprises (< 10 employees), small enterprises (10-50 employees), and medium-sized enterprises (50-250 employees). Level of education indicates whether the respondent had an education level higher than high school (1) or otherwise (0). Management skills differentiates whether the respondent rated her management skills as good to excellent (1) or poor to fair (0) (Lerner & Haber, 2001; Nissan et al., 2012; Rey-Martí et al., 2015). Age indicates whether the entrepreneur was 40 years of age or older (1) or otherwise (0). We used the benchmark of 40 years to separate mature women from younger entrepreneurs, similar to Mas-Tur et al. (2015). Social networks' support is coded as 1 when such support was acknowledged and as 0 when it was not acknowledged, based on the types of networks mentioned (women's professional groups, community organizations, social groups, and/or close friends) (Greve & Salaff, 2003; Jones & Jayawarna, 2010; Jumaa & Sequeira, 2017). Family organizational support is coded as 1 if the business was started with family member(s) or 0 if it was started either alone or with non-relatives (Cooper & Saral, 2013).
Gender-related personal problems is coded as 1 when a woman entrepreneur indicated the presence of any combination of emotional stress, family stress, loneliness, influence of business on family relationships, influence of business on personal relationships, poor or lack of institutional support, time management issues, and having to deal with male-centric discrimination and 0 when none of these problems existed. Financial business start-up is coded as 1 if the woman entrepreneur started the business with her own and/or family savings and as 0 if she financed the start-up with funds borrowed from non-relatives and/or institutions. The choice of the cutoff levels for the variables' categories is based on theoretical considerations and their frequency distributions.

Data analysis

Descriptive statistics (including means and correlations) for the study variables appear in Table 2. We used a generalized structural equation model (GSEM) to analyze the data. The GSEM approach generalizes standard structural equation modeling (SEM) by allowing for binary responses in the estimation process (Rabe-Hesketh, Skrondal, & Pickles, 2004). The research model has one latent variable (firm performance), four observed indicators (business income, geographical sales expansion, years in business, and business size), and seven observed predictors (level of education, management skills, age, social networks' support, family organizational support, gender-related personal problems, and financial business start-up). Such a model belongs to the family of multiple indicators multiple causes (MIMIC) models, a special case of SEM. The MIMIC approach is attractive for our purposes because it allows for a representation of the output as a latent variable, which cannot itself be directly measured but has causes and effects that are observable.
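The MIMIC structure described above can be made concrete with a small simulation: a latent performance score is generated from the seven binary predictors (the structural part), and the four binary indicators are then drawn from logits proportional to that score (the measurement part). This is an illustrative numpy sketch, not the study's Stata estimation; the coefficient values are borrowed from the study's reported results purely to generate plausible data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 117  # sample size matching the study

# Seven binary predictors (education, management skills, age, networks,
# family support, gender-related problems, start-up financing).
# Illustrative random data, not the survey responses.
X = rng.integers(0, 2, size=(n, 7)).astype(float)

# Structural part: latent firm performance driven by the predictors.
# Coefficients mirror the reported path estimates (b1..b7), for flavor only.
beta = np.array([0.652, 0.522, 0.741, -0.097, 0.227, -0.111, -0.086])
eta = X @ beta + rng.normal(size=n)

# Measurement part: four binary indicators (income, sales expansion,
# years in business, firm size) load on the latent variable.
lam = np.array([1.939, 1.560, 0.644, 0.820])  # reported loadings, illustrative
logits = np.outer(eta, lam)
indicators = (rng.random((n, 4)) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

print(indicators.shape)  # one row per respondent, one column per indicator
```

In the actual estimation, the slopes and loadings run the other way: they are unknowns recovered jointly by GSEM from the observed binary data, not inputs.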
A MIMIC model comprises two parts: the measurement model, which depicts the links between the observed indicators and their underlying latent variable(s), and the structural part, which involves the connections between the predictors and the latent variable(s). First, the measurement model is tested, and then the structural part is added, with the resulting MIMIC model being estimated.

Common method bias

Collecting behavioral and attitudinal data from self-reported questionnaires at one point in time can lead to common method bias (Podsakoff, MacKenzie, Lee, & Podsakoff, 2003). Therefore, we applied Harman's one-factor test on all observed variables. The exploratory factor analysis produced the (unrotated) factor solution with five factors, accounting for 63.13% of the total variance explained. If common method bias is present, a single factor is extracted and accounts for most of the variance. Because a single-factor solution did not emerge, common method bias is not likely to be a concern. Additional tests comparing the measurement model with the full model further confirm that common method bias is not a problem.

Results

The results appear in Fig. 2 in two phases: (1) estimation and evaluation of the measurement model and (2) estimation and evaluation of the MIMIC model. Note: The ⁎⁎ and ⁎ indicate p-values < 0.05 and 0.10, respectively. The figure presents the heteroskedasticity-corrected standard errors in the parentheses next to each GSEM path coefficient.

Measurement model

We assessed the fit of the measurement model by running a confirmatory factor analysis. The study used the Huber-White sandwich estimator, which is robust to heteroskedasticity of the errors (i.e., when the error variances are not constant for all observations). The ordinary least squares standard errors are no longer valid in the presence of heteroskedasticity.
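Harman's one-factor test asks whether a single unrotated factor absorbs the majority of the variance across all observed variables. A common PCA-based variant of the check can be sketched in a few lines of numpy; the data below are synthetic placeholders standing in for the study's eleven observed variables (four indicators plus seven predictors).

```python
import numpy as np

def first_factor_share(data: np.ndarray) -> float:
    """Proportion of total variance carried by the first principal
    component of the correlation matrix (PCA variant of Harman's test)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
    return float(eigvals[0] / eigvals.sum())

rng = np.random.default_rng(1)
data = rng.normal(size=(117, 11))  # placeholder for 11 observed variables
share = first_factor_share(data)

# A single dominant factor (share well above 0.5) would flag common method
# bias; the study instead found five factors jointly explaining 63.13%.
print(round(share, 3))
```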
When heteroskedasticity occurs, the ordinary least squares coefficient estimates remain unbiased but are inefficient, and the conventional standard errors are biased, which invalidates the usual significance tests. Therefore, the data must be tested for its presence, and if detected, a remedy must be applied. The most widely used procedure is the Huber-White estimation (Wooldridge, 2003, p. 258). To aid interpretation, the variance of the latent variable is constrained to be 1, and all the loadings are left unconstrained (StataCorp, 2015, p. 314). The factor loadings (the slopes) reveal how discriminating each indicator is with regard to the respective latent construct. All four indicators loaded significantly onto the hypothesized latent construct and in the anticipated direction, suggesting the appropriate structure for this latent construct measure. Business income is the most discriminating with regard to firm performance (b = 1.939, p = 0.097), followed by geographic sales expansion (b = 1.560, p = 0.055), firm size (b = 0.820, p = 0.026), and years in business (b = 0.644, p = 0.085). The only goodness-of-fit measures offered by the GSEM procedure are the Akaike information criterion (AIC) and Bayesian information criterion (BIC) indicators, as well as the log pseudo-likelihood. These measures are 577.56, 599.66, and − 280.78, respectively.

MIMIC model

After testing the measurement model, we used the GSEM procedure to estimate the MIMIC model. The AIC, BIC, and log pseudo-likelihood are now equal to 526.92, 566.88, and − 248.46, respectively. Compared with the results for the measurement model, we observe a desired drop in the AIC and BIC measures, as well as an increase in the log pseudo-likelihood, which indicates that adding the structural part to the measurement model improved the MIMIC model's fit. We computed variance inflation factors (VIFs) to test for any unwanted presence of multicollinearity among the predictors. The VIFs ranged from 1.04 to 1.109, indicating a lack of potential multicollinearity (Cenfetelli & Bassellier, 2009). Fig. 2 shows the results of the GSEM analysis.
Each of the three components of human capital is positively related to firm performance. The regression coefficient for level of education, b1, is equal to 0.652 and is statistically significantly different from zero (p = 0.075), in support of H1. The management skills component is also significant (p = 0.090) and positively (b2 = 0.522) related to firm performance, in support of H2. The results also support H3 because the regression coefficient for age is positive (b3 = 0.741) and significant (p = 0.024). Neither the social capital elements nor gender-related personal problems are related to firm performance. Specifically, the regression coefficient for social networks' support (b4 = − 0.097) is not significant (p = 0.833). Thus, H4 is not supported. In addition, family organizational support is not related to firm performance (b5 = 0.227, p = 0.434), contrary to our expectation (H5), nor are gender-related personal problems (H6) (b6 = − 0.111, p = 0.981). The control variable (financial business start-up) (b7 = − 0.086, p = 0.498) is also not related to firm performance.

Discussion

Two major findings emerge from the study. First, the study shows a positive relationship between human capital and firm performance in the context of women entrepreneurs in Egypt. Second, the study finds no relationship between women's social capital and gender-related personal problems, on the one hand, and firm performance, on the other. These two findings suggest that new boundary conditions should be developed to better explain female entrepreneurial processes in hostile environments. Most extant literature suggests that the two links should be positive, regardless of the environment. This study finds evidence that only one of these links is relevant: human capital. The components of social capital are not related to the performance of firms led by women entrepreneurs in a hostile environment. Furthermore, gender-related personal problems are not linked to firm performance.
As indicated, the three elements of human capital (level of education, management skills, and age) are positively related to firm performance, as hypothesized (H1-H3). The first two results have been reported in several studies, including those on developing economies (i.e., education and management skills [Prasad et al., 2013]). However, the literature reports conflicting results with regard to the relationship between entrepreneurs' age and firm performance. This study finds a positive link between the two variables. Mature (at least 40 years of age) Egyptian women entrepreneurs seem better equipped to handle the country's hostile environment than their younger counterparts. In developing countries with highly challenging environments, younger women entrepreneurs encounter greater difficulty in securing financing because creditors often question their creditworthiness, which translates into lower firm performance. Mature women entrepreneurs may find it easier to balance work-family conflicts, as their children are likely older and require less attention, and the overall family situation is more settled. Finally, more mature women entrepreneurs may have developed more resilience, which allows them to better cope with the highly challenging environment of Egypt. Resilient entrepreneurs adapt quickly to change to take advantage of new situations and are able to learn from their mistakes (Bullough & Renko, 2013). Resilience allows entrepreneurs to cope with challenging and hostile conditions and destabilizing events and helps them bounce back from hardships and become stronger as a result (Ayala & Manzano, 2014). Contrary to expectations, the first of the social capital components, support from social networks, is unrelated to firm performance. This may be due to one of two explanations that support this study's argument that new boundary conditions should be defined for the female entrepreneurship domain in hostile environments.
First, developing a social network is a common challenge that female entrepreneurs face in an emerging economy. A lack of social and professional networks among women entrepreneurs in Middle Eastern countries is an obstacle to firm growth. The existing networks are frequently weakened or annihilated by massive displacements of communities as a result of social unrest and sectarian violence. Second, a noticeable lack of trust (El-Said & Harrigan, 2009) occurs between companies and individuals in countries undergoing turbulent political and socio-cultural changes. Personal trust is a substitute for inefficient formal institutions. Personal trust comes from group characteristics such as kinship or ethnicity (Welter & Smallbone, 2011) but can also result from long-term business relationships. While personal trust can evolve with or without formal institutions, institutional trust can emerge only when there is stability and predictability (Welter & Smallbone, 2011). Personal trust can act as a substitute in situations in which little or no institutional trust exists. Entrepreneurs in Egypt (and women in particular) are more reluctant to rely on their social networks because they have had or heard about unpleasant experiences with the provision of mutual trust. This is partly due to deficiencies in the country's institutions and in the rule of law, in which legal procedures are lengthy and unreliable. Trust plays a major role in challenging environments as a substitute for or complement to the formal institutional framework (Welter & Smallbone, 2011). The other social capital element, organizational support from the family, is also unrelated to firm performance. The study's findings do not support the common notion of a positive link between family involvement in business and its success. One study reports that the lack of support from the families of women entrepreneurs in the Arab Emirates was the first barrier women encountered.
Women entrepreneurs in such turbulent environments as Egypt after the Arab Spring have succeeded despite the lack of family support because of their maturity and resilience. However, in emerging economies such as India, research finds a positive relationship between family social support and firm performance. The difference may be attributed to Egypt's recent dramatic social unrest, whereas India has been relatively peaceful throughout this same period. In challenging/hostile environments, the woman entrepreneur's human capital (e.g., education, management skills) matters most, an attribute that women can take with them wherever they conduct their business. In other words, it is the quality of the entrepreneur herself that makes the largest difference. Social capital elements (e.g., networks, family support) are not guaranteed in such environments, as they are elusive or damaged and are not "movable." Successful women entrepreneurs must be able to survive and flourish in a challenging and hostile environment without external aid. Finally, gender-related personal problems are not related to the performance of firms owned and managed by women entrepreneurs in Egypt. This result can be explained along several dimensions. Hampel-Milagrosa et al. (2015) also report that most female entrepreneurs interviewed in Egypt did not indicate that being a woman was a constraint for their business or hampered its performance. De Vita, Mari, and Poggesi (2014) report that women in emerging economies tend to be more self-confident in their managerial capabilities and less fearful of failure than women in developed countries. Thus, their performance may not be as affected by personal problems. Oftentimes, these women are necessity entrepreneurs (De Vita et al., 2014). Finally, women in countries such as Egypt could simply be more resilient than their counterparts in developed economies, though this is only a conjecture.
This might also be a result of the recent Arab Spring and the timing of the surveys. Regarding the method of financing the business start-up (the control variable), the study finds it to be unrelated to firm performance. This is not surprising, as mixed results have been found regarding financing and firm performance (Kim & Gao, 2013). Some studies suggest that when a family is highly involved in the business, this can create trust and a strong familial bond in the firm (Hsu & Chang, 2011; Zahra, Hayton, Neubaum, Dibrell, & Craig, 2008). However, other researchers highlight the dangers inherent in having too much family involvement. Family members may impede the long-term growth of the business (Renzulli, Aldrich, & Moody, 2000). Other studies find no relationship between the two variables (Cruz, Justo, & De Castro, 2012). In essence, a new approach to emerging economies that exhibit volatile and hostile conditions needs to be developed. This study uses a combination of RBV and IBV, intertwined in the 5 M model, as a solution to better understand the impact of volatile environments on women entrepreneurs' success. Limitations and future research This study is limited by the size of its sample and its use of a convenience sample obtained by mail and through networking and support organizations. Women entrepreneurs, despite the study's safeguards and anonymous responses, may have been hesitant to answer the questions because of fear of tax consequences, problems with officials, or being stereotyped in some way due to the strong masculine orientation of the country. Future studies might compare other emerging countries that have experienced similar dramatic political and socio-cultural changes specific to challenging and hostile environments. Longitudinal studies could investigate the impact of changes in lifestyles and culture, along with government initiatives, on women entrepreneurs in challenging environments over time.
Studies investigating female entrepreneurship dynamics in challenging environments are necessary to better understand coping mechanisms and empowerment skills. Conducting comparative studies through a different theoretical lens may offer additional insights into the performance of women entrepreneurs in turbulent environments and how public policy can effect positive change. Conclusion Current approaches to female entrepreneurship in emerging economies require additional attention to understand how turbulent environments affect the success of women-owned businesses. Studies need to be conducted to tap into the idiosyncrasies of environments that have undergone volatile and dramatic political and socio-cultural changes, including social unrest or war, in countries such as Egypt, Brazil, Venezuela, Sudan, Ukraine, or Syria. Dynamics in such settings are different from those in other emerging economies, such as the former Soviet republics, the countries of Eastern and Central Europe, and China. However, understanding how these dynamics affect business success is imperative to a country's recovery and the speed of that recovery. The results show that in a hostile institutional environment, only human capital matters; social capital is not relevant. This finding has theoretical and practical implications. New theoretical approaches to studies of entrepreneurial processes, including gender-related studies, in hostile environments should be developed. Our findings also suggest that country context matters: results of studies from other countries may not be comparable. From a practical perspective, public policy makers could use the findings to shape their approach to promoting and fostering entrepreneurship in various settings. Specifically, in hostile environments, such as the one defined in this study, more emphasis should be placed on entrepreneurs' personal abilities (i.e., their human capital) rather than on their social skills.
Entrepreneurship occurs around the world, and the environment in which entrepreneurs operate can vary dramatically. Therefore, more approaches, theories, and methodologies need to incorporate these variations so that entrepreneurship and entrepreneurial behaviors are better understood in context. Incorporating the social, political, and cultural environments helps clarify the attitudes of societies and how they change toward entrepreneurship as opportunities emerge and entrepreneurship becomes more prevalent. While entrepreneurship relies on individuals and teams that seize opportunities, the business environment has a major effect on the extent of entrepreneurship in a society and the behaviors of entrepreneurs. Combining the IBV and the RBV provides a framework for understanding the external and internal influences on individual behaviors of entrepreneurs and the role of these influences. Entrepreneurs are change agents rather than passive players. Institutional change plays a major part not only in affecting entrepreneurial behavior but also in being affected by entrepreneurship. Together, IBV and RBV theories help explain the role of the individual entrepreneur and organizational agents in the change process. Entrepreneurs are not the same; they are heterogeneous and possess different capabilities. Therefore, their behaviors are influenced by institutions as well as personal and business resources. There is an interplay between the structure and agency that can be explained best by the role of trust in relationships. The theoretical framework adopted in this study (RBV and IBV, in the context of the 5 M model) should be tested in a wider range of contexts, particularly those of emerging-market economies, but also in more mature economies. Cognitive theory identifies the behavior of entrepreneurs as being similar, despite differences in environment, time periods, and cultures. 
Behavioral responses are learned over time, and entrepreneurs learn strategic responses to environmental differences to survive and prosper. Under well-functioning institutional systems, entrepreneurs are more likely to respond and conform. When institutional frameworks are emerging, entrepreneurs are more likely to revert to avoidance and evasive behaviors (Welter & Smallbone, 2011). Environments make a difference in entrepreneurial behavior for individuals and firms. This study is a first step toward recognizing the impact of turbulent and volatile environments on emerging economies and on the vital success of women entrepreneurs who contribute to the economic well-being and stability of countries.
Understanding of droplet dynamics and deposition area in electrospraying process: Modeling and experimental Approaches Electrospraying is a widely-used technique for generating microspherical droplets in biomedical and chemical applications and is considered an effective approach for deposition on substrates. However, the effects of controllable parameters on the deposition area of electrosprayed droplets have not been reported. In this study, a simplified two-dimensional model is developed to study the dynamic process of droplets and the electrosprayed area. The effects of the distance between the needle tip and collector, and of the syringe feed rate, on the size of the sprayed area are quantified. Experiments have been conducted to validate the simulation results. Both modeling and experimental data demonstrate that the diameter of the sprayed area increases with increased distance between the tip and collector as well as with increased feed rate. This fundamental understanding should contribute to a wise choice of controllable parameters to achieve an efficient utilization of electrosprayed droplets or substrate for real applications. Homogeneously distributed droplets can be fabricated because electrostatic repulsive forces are induced among the charged droplets, so droplet agglomeration is avoided. To date, efforts have been devoted to the application of the electrospraying technique as well as the physical understanding of droplet evolution in the electrospraying process [21][22][23][24]. The trajectories of droplets are affected by many controllable parameters, i.e., syringe feed rate, concentration of solution, applied voltage and needle gauge. The effects of those controlled parameters on the size and morphology of droplets were previously studied [5,6,[24][25][26].
The size of the electrosprayed droplets could be tuned by adjusting controlled parameters [5,6]. However, the effects of controllable parameters on the deposition area of electrosprayed droplets have not been reported. The diameter of the electrosprayed area ranges from several millimeters to tens of centimeters under different controllable parameters [23]. The delicate tuning of the deposition area can greatly contribute to saving materials. When the size of the deposition area fits well with the targeted substrate, the electrosprayed droplets can be efficiently utilized. Conversely, when the deposition area is smaller or larger than the area of the targeted substrate, full utilization of the substrate or the electrosprayed droplets cannot be achieved. In this work, the electrosprayed area of ethanol was studied. Ethanol has been reported as a commonly-used solvent for the preparation of the electrosprayed precursor [2,3,9,[11][12][13]. A simplified two-dimensional model has been developed to study the trajectories of droplets and their deposition area during the electrospraying process. The modeling results are validated by the experimental data. Experimental approach The electrospraying process is illustrated in Fig. 1. As a high voltage is applied to the needle nozzle of a syringe, the liquid surface at the tip of the nozzle quickly forms a pointed cone shape. The surface tension pulls the liquid back toward the nozzle, while the Coulomb repulsive force drives the liquid toward the grounded collector. This cone is called the "Taylor cone" [21]. Once the surface tension is overcome by the Coulomb repulsive force, a liquid jet is emitted through the apex of the Taylor cone. Eventually, the highly charged liquid breaks into small droplets. Materials Ethanol (200 proof) was purchased from Decon Laboratories Inc., USA. Methylene blue (MB) aqueous solution with a concentration of 1.5 w/v % (1.5 g/100 mL) was purchased from Sigma-Aldrich, USA.
All the materials were used as received without further purification. Methylene blue was used to dye the ethanol solution for direct observation of the electrosprayed droplets on the target. Two drops of methylene blue were added into 25 mL of ethanol. The syringe and needle were purchased from PrecisionGlide. Electrospraying process The ethanol-MB solution was electrosprayed as shown in Fig. 2. The parameters used were: 5, 7.5, 10, and 12.5 kV for the applied voltage; 0.1, 0.5, 1, 2, and 5 mL h⁻¹ for the feed rate; and 1, 3, 5 and 10 cm for the distance between the tip and collector. The inner and outer diameters of the needle are 0.337 mm and 0.641 mm, respectively. The nozzle of the needle was ground with sandpaper and finished with a blunt tip. The collector for the electrosprayed droplets was a metal plate covered with a white paper sheet. The metal plate (length: 550 mm, width: 340 mm, thickness: 2 mm) was grounded during electrospraying. The environmental temperature of the working condition was 20 °C. Modeling approach A simplified two-dimensional model is developed to simulate the trajectories of electrosprayed droplets. It is assumed that the electrosprayed droplets are ejected from the tip of the Taylor cone with a velocity in a random direction towards the collector, and that all the droplets are spherical. The tip of the needle has a wedge angle of 100°, which simulates the Taylor cone in the cone-jet mode of the electrospraying process [21]. The electrosprayed liquid is assumed to be fully stored in the syringe needle. A high voltage (5-12.5 kV) is applied on the top and bottom sides of the needle. The droplets are ejected from the tip of the needle. The metal collector is 340 mm long and 5 mm wide, and its four sides are grounded. Domain #3 is 500 × 500 mm and filled with air.
Force scaling analysis In the movement from the needle tip to the collector, the electrosprayed droplets in the air are subjected to five possible forces: the electric force (F_E), drag force (F_D), Coulomb repulsive force (F_C), gravitational force (F_G) and buoyancy force (F_B), as shown in Fig. 4. Force scaling analysis can capture the dominant forces involved in droplet dynamics throughout the electrospraying process and simplify the model. The formulas for the five forces are: F_E = qE (1); F_D = 3πμ_air·v·d (2); F_C = (q²/4πε₀)·Σ_j (r − r_j)/|r − r_j|³ (3); F_G = mg (4); and F_B = ρ_air·V·g (5), where q is the charge of the droplet, E is the intensity of the electric field, μ_air is the dynamic viscosity of air, v is the velocity of the droplet, d is the diameter of the droplet, ε₀ is the vacuum permittivity, r is the position of the droplet, r_j is the position of any other droplet j, m is the mass, V is the volume of the droplet, g is the gravitational acceleration, and ρ_air is the density of air. The force scaling analysis estimates the magnitude of each force. The magnitude ranges of the five forces, estimated using Eqs. (1)-(5), are summarized in Table 1. Table 1: Results of the force scaling analysis of the five forces during electrospraying: the estimated velocity of droplets ranges from 1 to 40 m s⁻¹; the distance between the tip and collector ranges from 1 to 10 cm; the applied voltage ranges from 5 to 12.5 kV; and the assumed spacing between two droplets is 50 μm. The scaling analysis shows that the electric force, drag force and Coulomb repulsive force are the three main forces that affect the droplets' trajectories. The gravitational and buoyancy forces are much smaller than the electric and drag forces, by about five to eight orders of magnitude. Therefore, the gravitational and buoyancy forces contribute little to the droplets' trajectories and are ignored; only the electric force, drag force and Coulomb repulsive force are considered in this model.
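The scaling analysis above can be sketched numerically. The droplet parameters below (diameter, velocity, field, spacing) are representative values chosen from the ranges quoted in the text, and the ethanol/air property values are assumed, not taken from the paper's tables:

```python
import math

# Fluid and field constants (assumed illustrative values)
eps0 = 8.854e-12   # vacuum permittivity, F/m
gamma = 0.022      # surface tension of ethanol, N/m
mu_air = 1.8e-5    # dynamic viscosity of air, Pa*s
rho_air = 1.2      # density of air, kg/m^3
rho_liq = 789.0    # density of ethanol, kg/m^3
g = 9.81           # gravitational acceleration, m/s^2

# Representative droplet state within the paper's quoted ranges
d = 5e-6           # droplet diameter, m
v = 10.0           # droplet velocity, m/s (range 1-40 m/s)
E = 10e3 / 0.05    # field ~ applied voltage / gap (10 kV over 5 cm), V/m
s = 50e-6          # spacing between two droplets, m (50 um, as assumed)

q = math.pi * math.sqrt(8 * eps0 * gamma * d ** 3)  # Rayleigh-limit charge
V_drop = math.pi * d ** 3 / 6                       # droplet volume
m = rho_liq * V_drop                                # droplet mass

F_E = q * E                                   # electric force, Eq. (1)
F_D = 3 * math.pi * mu_air * v * d            # Stokes drag, Eq. (2)
F_C = q ** 2 / (4 * math.pi * eps0 * s ** 2)  # pairwise Coulomb repulsion
F_G = m * g                                   # gravity, Eq. (4)
F_B = rho_air * V_drop * g                    # buoyancy, Eq. (5)

for name, F in [("electric", F_E), ("drag", F_D), ("Coulomb", F_C),
                ("gravity", F_G), ("buoyancy", F_B)]:
    print(f"{name:8s}: {F:.2e} N")
```

With these numbers, gravity and buoyancy come out several orders of magnitude below the electric, drag, and Coulomb forces, consistent with the decision to drop them from the model.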
During electrospraying, the electric force accelerates droplets towards the collector, while the drag force retards the droplets' movement in the opposite direction. Since the droplets are small in size and mass, the electric force and drag force quickly reach a balance. The Coulomb repulsive force is strongly affected by the distance between two droplets; the ejected droplets near the tip strongly drive each other apart. Boundary and initial conditions The trajectories of the ejected droplets, subjected to the three main forces (electric force, drag force and Coulomb repulsive force) from needle tip to collector, are simulated using the COMSOL Multiphysics software package, which has been demonstrated to be an effective tool for studying various transport phenomena in others' work [27][28][29][30]. This model couples the laminar flow, electrostatics and particle tracing modules. The relative tolerance used is 5 × 10⁻⁵. The boundary conditions are summarized in Table 2. A mesh independence analysis was also performed. The diameter of the primary ejected droplets was estimated by the scaling law of [26], where d is the estimated diameter of the primary ejected droplet, Q is the feed rate, γ is the surface tension of the liquid, ε₀ is the vacuum permittivity, ρ is the density of the liquid, K is the electrical conductivity of the liquid and μ_liq is the dynamic viscosity of the liquid. For liquids with high enough conductivities and viscosities (δ ≪ 1) [26], the diameter of the droplets can be estimated as d ≈ (Qεε₀/K)^(1/3), where ε is the relative permittivity of the liquid. In addition, in order to calculate the electric force described in Eq. (1) and the Coulomb repulsive force described in Eq.
(3), the electric charge that a droplet holds can be determined by the Rayleigh limit [21], q = π(8ε₀γd³)^(1/2). The initial velocity of the primary droplet is determined with the assumption that the feed rate is conserved in the liquid flow [22,23]. The droplets are released one after another in sequence, and the release rate R of droplets from the needle tip is the ratio of the feed rate to the droplet volume, R = 6Q/(πd³). Parameters of droplets' properties The parameters used in the electrospraying simulation are summarized in Table 3. The droplets' properties under different electrospraying parameters are summarized in Table 4. Trajectories of electrosprayed droplets The trajectories of electrosprayed droplets under an applied voltage of 10 kV, a tip-collector distance of 5 cm and a feed rate of 1 mL h⁻¹ are shown in Fig. 5. A total of 836 droplets are ejected in 1 ms. After nearly 8 ms, all the droplets arrive at the collector. The droplets have the highest velocity immediately after ejection at the tip of the needle. The velocity decreases as the droplets move towards the collector, because the drag force hinders their movement. In the first few millimeters near the tip, the electric field intensity is the highest, owing to the sharp, highly curved edge of the needle tip. As a result, droplets are accelerated, and their moving direction is mainly controlled by the electric field. The electric field intensity then gradually decreases, and the drag force begins to play a role. Since the magnitude of the drag force is proportional to the droplet's velocity and its direction is opposite to the droplet's motion, the velocity of the droplet is quickly reduced. After that, the electric force and drag force remain in balance all the way until the droplets reach the collector. As the electric field intensity decreases towards the collector, the balanced velocity decreases as well.
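The droplet size, Rayleigh charge, and release rate can be chained together as a short numerical sketch. The scaling d ≈ (Qεε₀/K)^(1/3) for high-conductivity liquids is used for the diameter, and the ethanol property values (surface tension, relative permittivity, conductivity) are assumed for illustration, not taken from the paper's Table 3:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
gamma = 0.022      # surface tension of ethanol, N/m (assumed)
eps_r = 24.3       # relative permittivity of ethanol (assumed)
K = 1.35e-5        # electrical conductivity of ethanol, S/m (assumed)

Q = 1e-6 / 3600.0  # feed rate: 1 mL/h converted to m^3/s

# Primary-droplet diameter from the high-conductivity scaling law
d = (Q * eps_r * eps0 / K) ** (1 / 3)

# Rayleigh-limit charge, q = pi * sqrt(8 * eps0 * gamma * d^3)
q = math.pi * math.sqrt(8 * eps0 * gamma * d ** 3)

# Release rate: feed rate divided by droplet volume, R = 6Q / (pi d^3)
R = 6 * Q / (math.pi * d ** 3)

print(f"droplet diameter ~ {d * 1e6:.1f} um")
print(f"Rayleigh charge  ~ {q:.2e} C")
print(f"release rate     ~ {R:.2e} droplets/s")
```

With these assumed property values, the diameter comes out on the order of ten micrometers and the release rate on the order of 10^5 droplets per second, the same order as the 836 droplets per millisecond quoted for the 1 mL h⁻¹ case.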
It is found that the Coulomb repulsive force helps to disperse droplets near the needle tip, because this force is strongly affected by the distance between two droplets. Since the droplet cloud is denser near the tip, droplets spread over a wider range in the presence of the Coulomb repulsive force. Compared with the trajectories of droplets without the Coulomb repulsive force shown in Fig. 6, the diameter of the deposition area with the Coulomb repulsive force is 2 cm larger. Moreover, Fig. 5 and Fig. 6 clearly show that the charged droplets are distributed separately when the Coulomb repulsive force is included, and easily form agglomerations when it is not. This illustrates one benefit of the electrospraying technique: it generates homogeneous, well-dispersed droplets. All of the modeling results below (Fig. 7 to Fig. 12) are simulated with the electric force, drag force and Coulomb repulsive force. Effect of distance between the tip and collector Fig. 7 shows the electric field and deposition area of electrosprayed droplets with the distance between the tip and collector ranging from 1 to 10 cm. The controlled parameters used for the simulation were 10 kV for the applied voltage and 2 mL h⁻¹ for the feed rate. The simulated electric field shows that the electric field intensity decreases with increased distance between the tip and collector. The average electric field intensity for a 1 cm distance is roughly ten times higher than that for a 10 cm distance. The velocity of droplets is hence higher for short-distance electrospraying. The right column of Fig. 7 shows that the electrosprayed droplets spread over a larger area with a longer distance between the tip and collector. At the initial stage of electrospraying, the droplets are accelerated by the Y-direction component of the electric force, and the drag force also increases rapidly with the increasing speed.
The drag force then reduces the velocity of droplets along the Y-direction. At the same time, the electric field lines gradually turn from an angle toward the direction normal to the collector, which means there is no electric force along the Y-direction to balance the drag force. As a result, the velocity of the droplets along the Y-direction becomes zero at a later stage. For a short distance, the electric field lines are denser toward the collector, which results in a smaller deposition area. Fig. 8 illustrates that the diameter of the electrosprayed deposition area always increases with increased distance between the tip and collector for a variety of parameters. Without taking the evaporation of droplets into account, the deposition area has an approximately linear relationship with the distance between the tip and collector. The simulated diameter of the sprayed area and the experimental data are compared in Fig. 9. The dashed lines illustrate that the diameter of the electrosprayed deposition area increases with this distance. Effect of feed rate Fig. 10 and Fig. 11 show the effects of the feed rate on the diameter of the electrosprayed area. It can be seen that the diameter of the deposition area increases with increased feed rate. It was also reported in previous experiments that a higher feed rate leads to a larger first-ejected droplet [6,25,26]. Larger droplets are capable of carrying larger electric charges, which induce larger electric forces and consequently a larger deposition area. Both the modeling results and the experimental data show the same trend: the diameter of the deposition area increases with increased feed rate. However, the increasing behavior is slightly different between the modeling and experimental results. The experimental data show that the diameter of the sprayed area increases rapidly with feed rate in the lower feed-rate range, and much more slowly in the higher feed-rate range. This characteristic is not reproduced in the simulations.
The discrepancy between the modeling results and experimental data appears smaller at both lower and higher feed rates. A possible explanation for the discrepancy at a lower feed rate is as follows: a slower feed rate generates smaller droplets, which have lower evaporation rates; with less evaporation, the simulated results match the experimental data better. There is also a possible reason for the decreasing discrepancy at a higher feed rate. A higher feed rate generates larger droplets, and the splitting velocity of a larger droplet at the moment it splits is smaller than that of a smaller droplet. A smaller splitting velocity results in a much smaller deposition area. Fig. 13 shows an example of the splitting process. Assume two spherical stagnant droplets with radii of 1 μm and 10 μm, respectively. At the critical moment, each splits into two identical spheres. Before splitting, the droplets have electrostatic and surface energies. After splitting, some of the electrostatic and surface energy is converted into kinetic energy. The total energy is assumed to be constant during the splitting process, and the splitting velocity is given in [22], where r1 is the radius of a droplet before splitting and r2 is the radius of a droplet after splitting. For the smaller stagnant ethanol sphere (r1 = 1 μm), the splitting velocity of the droplet is estimated to be 8.95 m s⁻¹, and for the larger stagnant ethanol sphere (r1 = 10 μm) it is estimated to be 2.83 m s⁻¹. Thus, for a higher feed rate, the ability to increase the deposition area may dramatically decrease because of the smaller splitting velocity, and this may explain the smaller discrepancy between the modeling results, which do not consider the splitting process, and the experimental data at a higher feed rate.
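The two splitting velocities quoted above can be reproduced with an energy balance of the kind the text describes. This is a sketch under stated assumptions (parent droplet charged to the Rayleigh limit, splitting into two identical daughters, ethanol surface tension and density values assumed), not the paper's exact printed formula:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
gamma = 0.022      # surface tension of ethanol, N/m (assumed)
rho = 789.0        # density of ethanol, kg/m^3 (assumed)

def splitting_velocity(r1):
    """Velocity of the daughter droplets after a Rayleigh-charged parent
    of radius r1 splits into two identical spheres of radius r2."""
    r2 = r1 / 2 ** (1 / 3)                               # volume conservation
    q = 8 * math.pi * math.sqrt(eps0 * gamma * r1 ** 3)  # Rayleigh limit
    # Surface energy 4*pi*gamma*r^2 plus charged-sphere self-energy
    # q^2 / (8*pi*eps0*r), before and after the split (charge halves).
    e_before = 4 * math.pi * gamma * r1 ** 2 + q ** 2 / (8 * math.pi * eps0 * r1)
    e_after = 2 * (4 * math.pi * gamma * r2 ** 2
                   + (q / 2) ** 2 / (8 * math.pi * eps0 * r2))
    m = rho * (4 / 3) * math.pi * r1 ** 3                # parent mass
    # Kinetic energy of both daughters: 2 * (1/2)(m/2) v^2 = (m/2) v^2
    return math.sqrt(2 * (e_before - e_after) / m)

print(f"r1 = 1 um:  v = {splitting_velocity(1e-6):.2f} m/s")
print(f"r1 = 10 um: v = {splitting_velocity(10e-6):.2f} m/s")
```

This balance yields roughly 9.0 and 2.8 m/s for the 1 μm and 10 μm parents, matching the quoted 8.95 and 2.83 m s⁻¹ to within the assumed property values, and makes the scaling v ∝ r1^(−1/2) explicit.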
Conclusions A simplified two-dimensional model was developed to study droplet dynamics and the deposition area during the electrospraying process. The deposition area of electrosprayed droplets is tunable through various controllable parameters. Both the modeling and experimental results show that the diameter of the electrosprayed area increases with an increased feed rate and with an increased distance between the needle tip and collector. The discrepancy between the modeling and experimental data may be due to evaporation and splitting, which are not considered in the model. This fundamental understanding should contribute to the optimized choice of controllable parameters for the efficient utilization of electrosprayed droplets or substrate in practice.
Is Diffusion-Weighted MRI Useful for Differentiation of Small Non-Necrotic Cervical Lymph Nodes in Patients with Head and Neck Malignancies? Objective To evaluate the usefulness of measuring the apparent diffusion coefficient (ADC) in diffusion-weighted magnetic resonance imaging to distinguish benign from small, non-necrotic metastatic cervical lymph nodes in patients with head and neck cancers. Materials and Methods Twenty-six consecutive patients with head and neck cancer underwent diffusion-weighted imaging (b values, 0 and 800 s/mm²) preoperatively between January 2009 and December 2010. Two readers independently measured the ADC values of each cervical lymph node with a minimum axial diameter of ≥ 5 mm but < 11 mm using manually drawn regions of interest. Necrotic lymph nodes were excluded. Mean ADC values were compared between benign and metastatic lymph nodes after correlation with pathology. Results A total of 116 lymph nodes (91 benign and 25 metastatic) from 25 patients were included. Metastatic lymph nodes (mean ± standard deviation [SD], 7.4 ± 1.6 mm) were larger than benign lymph nodes (mean ± SD, 6.6 ± 1.4 mm) (p = 0.018). Mean ADC values for reader 1 were 1.17 ± 0.31 × 10⁻³ mm²/s for benign and 1.25 ± 0.76 × 10⁻³ mm²/s for metastatic lymph nodes. Mean ADC values for reader 2 were 1.21 ± 0.46 × 10⁻³ mm²/s for benign and 1.14 ± 0.34 × 10⁻³ mm²/s for metastatic lymph nodes. Mean ADC values between benign and metastatic lymph nodes were not significantly different (p = 0.594 for reader 1, 0.463 for reader 2). Conclusion Measuring the mean ADC does not allow differentiation of benign from metastatic cervical lymph nodes in patients with head and neck cancer and non-necrotic, small lymph nodes. INTRODUCTION Lymph node metastasis is an important prognostic factor in patients with head and neck squamous cell carcinoma (HNSCC).
An accurate assessment of lymph node metastasis is an important prerequisite for staging and proper treatment planning. Computed tomography (CT) and/or magnetic resonance imaging (MRI) are frequently used to preoperatively assess lymph node status in patients with HNSCC using morphologic criteria (1)(2)(3)(4)(5)(6). Among them, size-related criteria offer the easiest way to assess malignancy with high reproducibility and a wide range of diagnostic sensitivities and specificities depending on the cut-off value used. Van den Brekel et al. (2) reported that a minimum axial diameter > 11 mm showed very high specificity (95-100%) in a CT scan study. The presence of central necrosis is considered the most reliable and specific finding suggesting nodal metastasis (1,4,5). However, CT and MRI may fail to depict areas of necrosis < 3 mm and are unable to distinguish tumor necrosis from other elements of malignant nodes (4). This suggests that assessing nodal status in patients with HNSCC based on morphological criteria is limited, considering the clinical significance of nodal status for determining patient prognosis. These problems are not always solved by [18F]-fluorodeoxyglucose positron emission tomography, whose main diagnostic limitations include lower spatial resolution and lower specificity due to false-positive uptake in reactive lymph node hyperplasia (7,8). An increasing number of studies have reported the ability of diffusion-weighted imaging (DWI) to distinguish benign from metastatic lymph nodes in patients with head and neck cancer (9)(10)(11)(12)(13)(14)(15). The main advantage of DWI is its sensitivity to microscopic pathological alterations before they become visible on conventional MRI; thus, DWI could remedy the limitations of the morphological criteria. Except for a study by Sumi et al.
(10), metastatic lymph nodes have consistently been described with significantly lower apparent diffusion coefficient (ADC) values (0.59-1.09 × 10⁻³ mm²/s) than those of benign lymph nodes (1.21-1.64 × 10⁻³ mm²/s) (9,(11)(12)(13)16). DWI distinguishes benign from metastatic lymph nodes with sensitivities of 84-100% and specificities of 84-94% at ADC thresholds of 0.94-1.38 × 10⁻³ mm²/s. The contradicting results of the study by Sumi et al. may be related to the inclusion of a high proportion of necrotic metastases (up to 48%). DWI was superior to conventional MRI for nodal staging of HNSCC in a recent meta-analysis (17). However, no study has investigated the added value of ADC measurements using DWI to diagnose nodal metastasis in patients with HNSCC by focusing on small non-necrotic lymph nodes. Therefore, we investigated the usefulness of ADC measurements for assessing cervical lymph nodes showing no specific signs of metastasis, such as central necrosis or a minimum axial diameter > 11 mm, to detect metastasis in patients with HNSCC. MATERIALS AND METHODS This retrospective study was approved by our Institutional Review Board for human investigation, and informed consent was waived. Patients and Lymph Node Selection We initially included 26 consecutive patients with biopsy-proven HNSCC between January 2009 and December 2010, who had undergone a head and neck MRI examination as well as surgical treatment, including neck dissection, within 1 month following their MRI examination. None of these patients underwent preoperative chemotherapy or radiotherapy. Two head and neck radiologists, who were blinded to the clinical and surgical information, reviewed the T1-, T2-, and contrast-enhanced T1-weighted images in consensus regarding lymph node size and the presence of necrosis. The minimum lymph node size for the ADC measurements was set at a minimum axial diameter of 5 mm to reduce the effects of partial volume artifacts.
Lymph nodes with a central, nonenhancing area suggesting necrosis or those > 11 mm in minimum axial diameter were excluded. Among the 135 lymph nodes fulfilling the size criteria, 11 were excluded due to central necrosis. One patient with eight lymph nodes was excluded due to poor DWI image quality. Therefore, 116 lymph nodes in 25 patients (mean ± standard deviation [SD] age, 55 ± 13 years; range, 25-84 years; female/male, 17/8) were finally included in our study. The mean ± SD time interval between MRI examination and surgery was 12 ± 8 days (range, 1-29 days). The locations of the primary tumors and the pathologic nodal staging are shown in Table 1.

MRI Examinations
Magnetic resonance imaging was obtained on a 1.5-T MR scanner (Achieva, Philips Medical Systems, Best, the Netherlands, or Intera, Philips Medical Systems) (n = 16) or on a 3-T MR scanner (Achieva, Philips Medical Systems) (n = 9) using a 16-channel neurovascular coil (SENSE NV coil, Philips Medical Systems). Among the 116 lymph nodes, 25 (22 benign and three malignant) were imaged with the 1.5-T MR scanner, and 91 (69 benign and 22 malignant) were imaged with the 3-T MR scanner. A transverse T2-weighted turbo spin-echo (TSE) sequence was performed with a repetition time (TR)/echo time (TE) of 3090/100 ms and an acquisition time of 2 minutes 22 seconds.

ADC Measurements
Two neuroradiologists, who were blinded to both the TSE imaging findings and the clinical data, independently analyzed the DW images. Reader 1 was a faculty radiologist in the neuroradiology division, who had interpreted at least 1000 head and neck MRI examinations over 8 years; reader 2 was a fellow neuroradiologist, who had interpreted approximately 150 head and neck MRI examinations over 6 years. The ADC measurements were performed 1 month after lymph node selection to minimize bias.
The MRI data were transferred to DiffusionLab™ commercial software (Clinical Imaging Solution Inc., Seoul, Korea) for ADC measurements, and regions of interest (ROIs) were drawn freehand over the lymph nodes, containing the entire lymph node volume on every section identified on the b = 800 s/mm^2 images. The mean ADC value and ROI volume were automatically calculated by combining the b = 0 and b = 800 s/mm^2 images.

Topographic Correlation of Lymph Nodes
The reference standards were the neck dissection surgical and pathology reports. The lymph nodes were excised en bloc along with adjacent reference structures, including muscles, salivary glands, and veins, to ensure the exact nodal station. A node-by-node correlation was done between the MRI and the surgical specimens. To ensure that a lymph node obtained by neck dissection was the same node as that seen on MRI, the nodal station and node size were matched between MRI and pathology by two authors with consensus. Final decisions regarding the correlation were reached by consensus between a radiologist and a pathologist. All lymph nodes matched clearly between imaging and pathology, and none were excluded from the analysis. The histopathological and radiological findings were correlated after all image analysis was completed.

Statistical Analysis
Statistical analyses were performed using SPSS 19.0 software (SPSS Inc., Chicago, IL, USA). Numerical data are reported as mean ± SD. The independent two-sample t test was performed to compare the minimum axial diameters of benign and metastatic lymph nodes and also to compare the mean ADC values of benign and metastatic lymph nodes measured separately by each reader. The paired t test was used to separately compare the measured ADC values of the benign and metastatic lymph nodes between the two readers.
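The mean-ADC computation described above (combining the b = 0 and b = 800 s/mm^2 images over a freehand ROI) follows the standard monoexponential diffusion model. The sketch below illustrates that calculation; it is not the DiffusionLab implementation, and the function and array names are hypothetical:

```python
import numpy as np

def adc_map(s_b0, s_b800, b=800.0, eps=1e-6):
    """Monoexponential model: S(b) = S0 * exp(-b * ADC), so
    ADC = ln(S0 / Sb) / b (units mm^2/s when b is in s/mm^2)."""
    s_b0 = np.asarray(s_b0, dtype=float)
    s_b800 = np.asarray(s_b800, dtype=float)
    # Guard against zero/negative signal before taking the log
    ratio = np.clip(s_b0 / np.maximum(s_b800, eps), eps, None)
    return np.log(ratio) / b

def mean_roi_adc(s_b0, s_b800, roi_mask, b=800.0):
    """Mean ADC over a freehand ROI given as a boolean mask."""
    return float(adc_map(s_b0, s_b800, b)[roi_mask].mean())

# A voxel attenuated to exp(-0.8) of its b = 0 signal has
# ADC = 0.8 / 800 = 1.0e-3 mm^2/s, i.e. 1.0 x 10^-3 mm^2/s.
```

With this convention, the ADC thresholds quoted earlier (0.94-1.38 x 10^-3 mm^2/s) correspond to signal ratios S0/S800 of roughly exp(0.75) to exp(1.10).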
To ensure that there was no difference in the ROI measurements between the two readers, we separately compared mean ROI volumes between the readers using the paired t test for benign and metastatic lymph nodes. A p value < 0.05 was considered significant. We also evaluated inter-observer agreement between the two readers regarding the measured ADC values and ROI volumes using intraclass correlation coefficients (ICCs).

RESULTS

No significant difference in the mean ADC values was observed between benign and metastatic cervical lymph nodes, as determined by the two readers (p = 0.594 for reader 1 and 0.463 for reader 2) (Table 2). Figure 1 is a box-and-whisker plot of the ADC values of benign and metastatic lymph nodes, as determined by the two readers. No significant differences were observed in mean ROI volume between the two readers (p = 0.110 for benign lymph nodes and 0.107 for malignant lymph nodes) (Table 3). Inter-observer agreement between the two readers was moderate for the measured ADC values (ICC = 0.530; 95% confidence interval [CI], 0.320-0.675) and almost perfect for the ROI volumes (ICC = 0.964; 95% CI, 0.947-0.975). Figures 2 and 3 show representative cases of metastatic lymph nodes with relatively low and high mean ADC values, respectively.

DISCUSSION

In this study, we compared the mean ADC values of benign and metastatic lymph nodes in patients with HNSCC, targeting small, non-necrotic lymph nodes with a minimum axial diameter < 11 mm, using manually drawn ROIs. We found that the mean ADC values of the two groups were not significantly different. This result indicates that ROI measurements of mean ADC values did not distinguish benign from metastatic lymph nodes among small, non-necrotic lymph nodes in patients with HNSCC. The purpose of this study was to evaluate the role of ADC measurements using DWI to diagnose nodal metastasis in lymph nodes without specific morphological signs of metastasis, such as an increased minimum axial diameter or central necrosis.
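The independent two-sample t test used to compare the group means can be reproduced with SciPy. The values below are made-up mean ADC measurements (x 10^-3 mm^2/s) for illustration only, not the study data:

```python
from scipy import stats

# Hypothetical mean ADC values (x 10^-3 mm^2/s); NOT the study data
benign = [1.10, 1.25, 1.31, 0.98, 1.22, 1.15, 1.40, 1.05]
metastatic = [1.20, 1.35, 0.95, 1.28, 1.10]

# Independent two-sample (Student's) t test comparing group means
t_stat, p_value = stats.ttest_ind(benign, metastatic)

# p >= 0.05 would be read here as "no significant difference"
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

Because the two illustrative samples have nearly identical means, the resulting p value is well above the 0.05 significance threshold, mirroring the pattern reported for the study's readers.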
Although one study evaluated the value of ADC measurements in subcentimeter lymph nodes, the authors did not mention whether centrally necrotic lymph nodes were excluded from their study (9). Previous studies have reported that DWI is superior to conventional MRI for nodal staging of head and neck cancer, and they analyzed the ADC values of the solid portion of necrotic lymph nodes to characterize cervical lymph nodes in patients with head and neck cancer (9,12). An increased minimum axial diameter > 11 mm and central necrosis are highly specific imaging criteria for nodal metastasis in patients with HNSCC. Therefore, in the case of definitely enlarged or necrotic lymph nodes in a patient with HNSCC, an ADC measurement using DWI is not necessary to distinguish benign from metastatic lymph nodes. Small lymph nodes, i.e., < 11 mm in minimum axial diameter, without necrosis remain a diagnostic dilemma when staging patients with head and neck cancer. Therefore, in this study we attempted to investigate whether ADC measurements could increase the diagnostic ability of MRI to differentiate benign from metastatic lymph nodes of small size and non-necrotic appearance. Diffusion-weighted imaging is an MRI technique that measures the motion of water molecules in the extracellular space (19).

Table 2. ADC Value of Benign and Metastatic Lymph Nodes as Assessed by Two Readers
           Benign LNs (n = 91)     Metastatic LNs (n = 25)    P
Reader 1   1.17 ± 0.31 x 10^-3     1.25 ± 0.76 x 10^-3        0.594
Reader 2   1.21 ± 0.46 x 10^-3     1.14 ± 0.34 x 10^-3        0.463
Note.- Values are presented as mean ± standard deviation (mm^2/s). ADC = apparent diffusion coefficient, LN = lymph node
In metastatic lymph nodes from patients with HNSCC, the different environment of the water protons, such as decreased extracellular space, increased cellularity, and a higher nuclear-to-cytoplasmic ratio, restricts the motion of water molecules, which is represented as an area of hyperintensity on DWI acquired at a high b value of 800-1000 s/mm^2 and low signal intensity on the corresponding ADC map. However, a diagnostic problem arises for small, non-necrotic lymph nodes when the mean ADC value is applied to distinguish between benign and metastatic lymph nodes because of the small size of the metastatic foci within the lymph node. In this study, we hypothesized that the percentage of metastatic foci within a small metastatic lymph node without necrosis or morphological change is relatively small and that small, dispersed metastatic deposits in an otherwise normal lymph node are less likely to create sufficient architectural change to affect the mean ADC value, thus resulting in no significant difference in the mean ADC value compared to that of a benign lymph node. Figures 2 and 3 show representative cases of metastatic lymph nodes with relatively low and high ADC values, which support our hypothesis. Based on our results, we speculate that the mean ADC value is wide ranging due to

Fig. 2. Axial T2-weighted (A) and gadolinium-enhanced, axial T1-weighted fat-saturated (B) images show a 6.3-mm lymph node at right level III (arrows). This lymph node (arrows) shows high signal intensity on diffusion-weighted imaging at b = 800 s/mm^2 (C), as well as a low apparent diffusion coefficient (ADC) value on the ADC map (D). Regions of interest (red) were drawn freehand over the lymph node on diffusion-weighted imaging (E, arrow). The measured ADC value is 0.54 x 10^-3 mm^2/s according to reader 1 and 0.74 x 10^-3 mm^2/s according to reader 2, both under the threshold for malignancy in previously published studies.
On histopathologic examination (F), almost all of the area of this lymph node was seen to be covered with metastatic deposit (asterisk) (original magnification, x150).

Korean J Radiol 15(6), Nov/Dec 2014 kjronline.org

ADC heterogeneity within the lymph node based on the proportion of metastatic infiltration within the lymph node. Considering the variability of ADC values within a lymph node, the minimum ADC value, rather than the mean ADC, might be useful to detect the smallest focal metastasis within a lymph node and serve as a better imaging marker for lymph node metastasis (20). In contrast to our results, a study by Vandecaveye et al. (9) reported that, even in subcentimeter node analysis, the ADC values of metastatic lymph nodes were significantly lower than those of benign lymph nodes, and DWI had higher sensitivity for distinguishing benign from metastatic lymph nodes. This difference from our study might be attributed to differences in DWI parameters, including b values, and in the number of patients enrolled. The b value used in our study was slightly lower than that of Vandecaveye et al. We chose a highest b value of 800 s/mm^2 to obtain images with less susceptibility artifact and a higher signal-to-noise ratio. Our study had several limitations. First, it was retrospective. To overcome this limitation, we carefully included lymph nodes without morphologically malignant criteria according to the consensus of two readers, as well as a relatively homogeneous patient group (only squamous cell carcinoma). Second, because of the retrospective design, complete topographic matching of lymph nodes between MRI and the pathological specimens was impossible. Third, there was a potential limitation in drawing the ROIs for small lymph nodes. However, this limitation made no significant difference in our study, based on the results of the inter-observer variation in ROI volumes and ADC values between the two readers.
Fourth, because the DW images were obtained with two MRI scanners of different magnetic field strengths (1.5-T vs. 3-T), the different spatial resolution of the two machines may have influenced the ADC values. Last, the number of metastatic lymph nodes included was relatively small. Further large-scale studies are necessary to confirm our results.

Fig. 3. Axial T2-weighted (A) and gadolinium-enhanced, axial T1-weighted fat-saturated (B) images show an 8.2-mm lymph node at left level II (arrows). This lymph node shows high signal intensity on diffusion-weighted imaging at b = 800 s/mm^2 (C) and a high apparent diffusion coefficient (ADC) value on the ADC map (D). Regions of interest (red) were drawn freehand over the lymph node (E, arrow). The measured ADC value is 1.22 x 10^-3 mm^2/s according to reader 1 and 1.10 x 10^-3 mm^2/s according to reader 2, both above the threshold for malignancy in previously published studies and therefore suggestive of a benign lymph node. However, on histopathologic examination (F), partially involved metastatic foci (asterisk) are observed (original magnification, x150).

Regardless of these limitations, the strength of our study was that we excluded enlarged and necrotic lymph nodes, in contrast to previous studies. As a result, we were free from a potential interpretation bias based on morphological criteria. In conclusion, we found that mean ADC values did not differ between benign and metastatic cervical lymph nodes of small size and non-necrotic appearance in patients with HNSCC, suggesting that DWI adds no value to morphological criteria such as size or central necrosis. Future research is required that focuses on detailed assessment methods, such as voxel distribution analysis of the ADC value, particularly to detect small metastatic foci within normal lymph nodes.
2018-04-03T00:28:24.507Z
2014-11-07T00:00:00.000
{ "year": 2014, "sha1": "d4effa27c93f94cf3b8f6d0c0d29f05cfcf3d6f4", "oa_license": "CCBYNC", "oa_url": "https://europepmc.org/articles/pmc4248638?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "d4effa27c93f94cf3b8f6d0c0d29f05cfcf3d6f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
89837224
pes2o/s2orc
v3-fos-license
Optimization of Process Conditions for Effective Degradation of Azo Blue Dye by Streptomyces DJP15

The present study was carried out to optimize the degradation of the textile azo blue dye by the potential strain Streptomyces DJP15, isolated from dye-contaminated soil in and around the Palakkad textile industry, Palakkad District, Kerala State, India. The decolourizing activity of the potential isolate Streptomyces DJP15 was measured spectrophotometrically every 6 h over a period of 54 h in starch casein broth amended with 50 mg/L of the test dye, azo blue. A decrease in optical density (OD) was noticed, indicating degradation of the test dye by the potential isolate Streptomyces DJP15. Different incubation conditions, namely shake condition, static condition, dye concentration, inoculum size, pH, and temperature, were used in the present study to investigate their effect on the rate of decolourisation. The potential isolate Streptomyces DJP15 exhibited significant decolourisation activity at 48 h of incubation for all the degradation conditions studied. Optimum conditions were found for degradation of the azo blue dye by the potential isolate Streptomyces DJP15: the highest degradation was noticed under static conditions, at a dye concentration of 50 mg/L, an inoculum concentration of 3% v/v, pH 7, and a temperature of 35 °C. The results of the present study confirm that the isolate Streptomyces DJP15 was effective in degrading the textile dye azo blue under optimized conditions.
PILLAI: DEGRADATION OF AZO BLUE DYE BY Streptomyces DJP15

Azo dye is the largest and most versatile class of synthetic dyes, widely used in the textile industries, and accounts for more than half of the annually produced synthetic dyes. Azo dyes are classified as monoazo dyes (e.g., acid orange 52, reactive yellow 201, disperse blue 399), diazo dyes (reactive brown 1, brown 2, acid black 1, amido black), trisazo dyes (direct blue 78, direct black 19), and polyazo dyes (direct red 80), depending on the number of azo groups. On the basis of application, azo dyes are classified as reactive, disperse, direct, cationic, anionic, and metalized azo dyes 1. Amongst the azo dyes, reactive dyes are the only textile colourants designed to bind covalently with cellulosic fibers and are extensively used in the textile industry. Reactive dyes are highly water soluble due to a high degree of sulphonation and are non-degradable under the typical aerobic conditions found in conventional biological treatment systems 2. Sulfonated azo dyes, characterized by the presence of an -SO3H group, are commonly found in industrial effluents. Most azo dyes are stable to light and temperature and highly resistant to degradation 3. The persistence of azo dyes is mainly due to the sulfo and azo groups, which do not occur naturally, making the dyes xenobiotic and recalcitrant to oxidative degradation 4,5. The dyes, without appropriate treatment, can persist in the environment for extensive periods of time and are deleterious not only to the photosynthetic processes of aquatic plants but also to all living organisms, since their degradation can lead to carcinogenic substances 6. These compounds tend to bioaccumulate in the environment and have allergenic, carcinogenic, mutagenic, and teratogenic properties for humans. Release of dyes into the aquatic system reduces the dissolved oxygen content, which ultimately causes the death and putrefaction of aquatic fauna 7. In recent years, bioremediation has been
considered an effective, specific, less energy-intensive, and environmentally benign process, since it results in partial or complete bioconversion of pollutants to stable, nontoxic end products 8. Microbial bioremediation involves improving the natural degradation capacity of the microorganism 9. Biodegradation using microorganisms is gaining importance as it is a cost-effective, environmentally friendly technique producing less sludge 10, and complete degradation leads to nontoxic end products 11,12,13. Many microorganisms belonging to different taxonomic groups of bacteria, fungi, actinomycetes, and algae have been reported for their ability to decolourize azo dyes 14,15. Environmental factors are known to play a crucial role in the decolourisation activity of microorganisms 16. The physicochemical parameters may affect the stability of the enzyme system involved in dye degradation, resulting in decreased decolourisation performance at extreme pH and temperature, which may affect the viability of the strain 17. Parameters such as carbon source, nitrogen source, dye concentration, aeration, temperature, pH, incubation period, and inoculum size influence the decolourisation efficiency of the bacteria 18,19. The present investigation is an effort to optimize the biodegradation of azo blue dye by the previously isolated potential isolate Streptomyces DJP15.

Decolourisation Experiments
The previously isolated potential strain Streptomyces DJP15 20 was grown and maintained on enrichment medium 21 amended with 50 mg/L of azo blue dye at 37 °C under agitation at 180 rpm. Decolourisation experiments were carried out in 50 mL of starch casein broth (soluble starch 10.0 g, K2HPO4 2.0 g, KNO3 2.0 g, NaCl 2.0 g, casein 0.3 g, MgSO4 0.05 g, CaCO3 0.02 g, FeSO4 0.01 g, distilled H2O 1000 mL, pH 7.0) amended with 50 mg/L of the test dye.
The efficiency of degradation of the azo blue dye by Streptomyces DJP15 was studied with respect to the varying effects of shake condition, static condition, dye concentration, inoculum size, pH, and temperature for optimization of the degradation process. All experiments were done in triplicate.

Analytical methods for dye decolourisation studies
Aliquots (5 mL) of the culture media were withdrawn at intervals of 6 h over 54 h and centrifuged at 7000 rpm for 15 min. Decolourisation was quantitatively analyzed by measuring the absorbance of the supernatant using a UV-visible spectrophotometer (Spectronic® GENESYS™ 2 PC) at the maximum wavelength, λmax, of 620 nm for azo blue dye. The decolourisation rate was calculated using the equation 22.

Optimization of process conditions for effective dye degradation
Optimization of important conditions such as shake condition, static condition, dye concentration, inoculum size, pH, and temperature for effective dye degradation by the potential Streptomyces isolate was carried out 23. The potential Streptomyces DJP15 strain was examined for the degradation of azo blue dye, varying one parameter at a time while keeping the others constant. The influence of shake and static conditions on maximum dye degradation, under empirical conditions, was examined before the other conditions mentioned.

Influence of shake and static conditions
50 mL of starch casein broth was added to a 100-mL Erlenmeyer conical flask and sterilized. 1 mL of azo blue dye, at a concentration of 50 mg/L, was added to the broth independently.
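The decolourisation-rate equation cited above (reference 22) is, in the usual form for such assays, the fractional drop in absorbance at λmax. Assuming that standard absorbance-ratio form, a minimal sketch of the percent-degradation calculation, with illustrative (not measured) readings:

```python
def percent_decolourisation(a_initial, a_final):
    """Decolourisation (%) = (A0 - At) / A0 * 100, where A0 and At are
    absorbances at lambda_max (620 nm for azo blue) before and after
    incubation. Assumes the standard absorbance-ratio form of the
    cited equation."""
    if a_initial <= 0:
        raise ValueError("initial absorbance must be positive")
    return (a_initial - a_final) / a_initial * 100.0

# Illustrative readings (not study data): an absorbance drop from
# 0.95 to 0.21 corresponds to ~77.9% decolourisation.
print(round(percent_decolourisation(0.95, 0.21), 2))
```

An absorbance drop of this size matches the order of the 77.89% maximum degradation reported later for static conditions at 48 h.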
1 mL of a 3-day-old culture of the test isolate Streptomyces DJP15 was inoculated into the broth and incubated at 35 °C for 3 days, under shake conditions (on a shaker at 180 rpm) as well as static conditions. 5 mL of the incubated broth was drawn every 6 h and centrifuged at 7000 rpm for 15 min. The absorbance of the supernatant was recorded using a UV-vis spectrophotometer at 620 nm for azo blue dye. The percent degradation of the dye was calculated as mentioned earlier.

Optimization of dye concentration
The maximum dye degradation under the static state by the potential isolate Streptomyces DJP15 at different concentrations of dye was assessed following the broth culture method mentioned earlier. Degradation of azo blue dye was examined at concentrations of 50, 100, 150, 200, 250, and 300 mg/L. The percent dye degradation was calculated as mentioned earlier.

Optimization of inoculum size
Inoculum size was optimized for effective dye degradation by the test isolate Streptomyces DJP15, following the broth culture method mentioned above. Inoculum sizes of the 3-day-old test isolate at 1, 2, 3, 4, and 5% (v/v) were assessed for maximum dye degradation.

Optimization of pH
Various levels of pH were optimized for effective dye degradation by the test isolate Streptomyces DJP15, following the broth culture method mentioned above. The pH of the medium was adjusted to 6.0, 6.5, 7.0, 7.5, or 8.0 using dilute hydrochloric acid and sodium hydroxide solutions, respectively.

Optimization of temperature
Various temperatures were tested for effective dye degradation by the test isolate Streptomyces DJP15, following the broth culture method mentioned above. The effect of temperature on maximum dye degradation was examined by keeping the inoculated broth at 25, 30, 35, 40, or 45 °C. The percent dye degradation by the test isolate at the different temperatures was calculated as mentioned earlier.
Optimization of process conditions for effective dye degradation
In the present study, an attempt was made to optimize the degradation of azo blue dye by the potential Streptomyces strain DJP15. The effects of various process parameters, namely shake condition, static condition, dye concentration, inoculum size, pH, and temperature, were studied with the aim of determining the optimal conditions required for degradation of the azo blue dye in starch casein broth.

Influence of static and shake conditions
The percent degradation of azo blue dye by the potential isolate Streptomyces DJP15 under shaking and static conditions is shown in Figure 1. The strain Streptomyces DJP15 showed a maximum degradation of 65.26% for azo blue dye under continuous shaking at 48 h of incubation. Under static conditions, a sudden increase in percent degradation of 12.63% by Streptomyces DJP15 for the blue dye was observed. The isolate Streptomyces DJP15 exhibited a maximum degradation of 77.89% at an incubation time of 48 h under static conditions. The isolate thus showed greater percent degradation under static conditions than under shaking conditions, indicating that Streptomyces DJP15 was more effective and potent in degrading azo blue dye under still conditions. To the best of our knowledge, this is the first report on degradation of sulfonated reactive diazo textile dyes (azo blue) by Streptomyces strains.
Optimization of dye concentration
Figure 2 shows the effect of initial dye concentration, ranging from 50 to 300 mg/L. The percent degradation of azo blue dye after 48 h of incubation by Streptomyces DJP15 was found to be 76.66, 71.25, 68.42, 64.78, 61.53, and 45% at initial dye concentrations of 50, 100, 150, 200, 250, and 300 mg/L, respectively. The degradation of the dye was thus concentration dependent. Percent degradation of the dye increased with time, irrespective of initial dye concentration, and decreased with increasing dye concentration, i.e., the lower the concentration, the higher the degradation efficiency, and vice versa. In our study, the diazo dye reactive blue 222 (azo blue) was degraded up to 76.66% at 48 h of incubation at an initial dye concentration of 50 mg/L by the isolate Streptomyces DJP15.

Optimization of inoculum size
The effect of inoculum size (1-5% v/v) with time on degradation of azo blue dye is represented in Figure 3, which depicts that, at every dose of inoculum, dye degradation increased with time from 6 to 48 h
In our study, it was noticed that an increase in pH from 6 to 7 enhanced the rate of degradation significantly. The degradation rate was highest between pH 7 and 7.5, with the highest degree of degradation occurring at the optimum pH of 7.0 at 48 h of incubation. The results further revealed that any deviation of pH from the optimum decreased the extent of dye degradation. From Figure 4, it was clearly noted that the percent degradation of azo blue dye increased with time irrespective of pH. The maximum percent degradation (76.31%) of dye was found at pH 7 after a 48 h incubation period. Good percent degradation (72.10%) was observed at pH 7.5. A further increase in pH from 7.5 to 8.0 decreased the percent degradation of azo blue to 67.89%. The least percent degradation (57.36%) was recorded at pH 6.0, and 62.63% degradation was noticed at pH 6.5. Degradation was clearly lower at acidic pH than at alkaline pH.

Optimization of temperature
Figure 5 shows the degradation of azo blue dye by Streptomyces DJP15 with time at different temperatures (25, 30, 35, 40, and 45 °C). The percent degradation of azo blue dye at 25, 30, 35, 40, and 45 °C was found to be 57.89, 74.21, 79.47, 68.94, and 64.73%, respectively. Percent degradation of the dye increased with an increase in temperature from 25 to 35 °C, and the percentage removal of dye decreased with a further increase in temperature up to 45 °C. Degradation activity was significantly suppressed at 25 °C compared with the other temperatures, which might be due to loss of cell viability or deactivation of the enzymes responsible for degradation at 25 °C (Cetin 2006). The decrease in percent degradation with a further increase in temperature may be due to thermal deactivation of the enzyme responsible for degradation at higher temperatures.
Extending the incubation period by a further 6 h (to 54 h) decreased the percent degradation, possibly due to the decline phase of the isolate's growth curve, under all conditions studied.

DISCUSSIONS

Microbes possess more than one mechanism for dye degradation 24. Decolourisation of dye was enhanced by the static condition, as previously reported 25. Other researchers have also reported that static conditions were suitable for the dye degradation process 26,27,28. Generally, the stationary culture condition dominates over the shake culture condition 29,30,31. The present study supports the suitability of still/static conditions for the dye degradation process, as reported by other researchers 32,33. More efficient decolourisation of similarly structurally complex dyes under shaking conditions has also been reported 34. Azo dye degradation of 20% in shake culture and more than 95% in still culture by Proteus mirabilis was reported 35. The initial biodegradation of azo dyes occurs under anoxic conditions, leading to reductive cleavage of the azo bond, which causes decolourisation of the dye. Under shaking conditions, the presence of oxygen suppresses the azoreductase activity required for azo bond cleavage. In the present study, degradation was noticed under both shaking and static conditions, but effective degradation was recorded only under static conditions. The competition between azo dyes and oxygen for reduced electron carriers under aerobic conditions was the reason for the decreased decolourisation under shaking conditions 32. This reveals that the azoreductase enzyme involved in the initial step of azo bond reduction must be oxygen insensitive 9,36. The dye concentration can influence the efficiency of microbial decolourisation through a combination of factors, including the toxicity imposed by the dye at higher concentrations 16,37, and the dye degradation efficiency depends on the initial dye concentration 38. Degradation of 80% of the synthetic dyes by Pseudomonas
sp. at a concentration of 50 mg/L in more than 7 days was reported 39. 80% decolourisation of navy blue 3G at 50 mg/L by Brevibacillus laterosporus MTCC2298 within 48 h under static conditions was also reported 40. It has been noted that beyond a certain inoculum size there is no proportionate increase in degradation with a further increase in inoculum size 41. The rate of Terasil black effluent decolourisation was enhanced with an increase in the inoculum size of B. cereus from 2.5 to 10%; however, a further increase of the inoculum up to 20% did not cause any change in the intensity of color 42. The pH has a major effect on the efficiency of dye decolourisation, and the optimal pH for color removal is often between 6.0 and 10.0 for most dyes 35. The effect of pH on dye degradation may be due to the transport of dye molecules across the cell membrane, which is considered a rate-limiting step for dye decolourisation 43. High decolourisation of Reactive Black 5 by Enterobacter EC3 at pH 7 has been reported 29. Maximum decolourisation of Reactive Red 195 by Georgenia sp. at pH 7 was also observed 44. E. coli and P.
luteola both exhibited their best decolourisation rates at pH 7.0 45. Our findings are also in accordance with these reports, with maximum dye degradation noticed at pH 7.0. Inhibition of the biodecolourisation activity of Klebsiella pneumoniae RS-1 and Alcaligens liquefaciens S-1 at 45 °C has been reported 45. An optimum temperature of 37 °C was observed for the decolourisation of acid orange 10 and disperse blue 79 by Bacillus fusiformis kmk 5 46. An optimum temperature of 30-40 °C for decolourisation of crystal violet by Shewanella sp. NT0Vl was also observed 47. The decrease in decolourisation activity at higher temperatures can be attributed to loss of cell viability or to denaturation of the azoreductase enzyme 48. The pH and temperature exert major effects on the efficiency of dye decolourisation, and the optimal conditions vary between pH 7.0-10.0 and 30-40 °C, respectively 18,26,35.

CONCLUSIONS

The isolate Streptomyces DJP15 was found to be very effective in degrading the textile dye azo blue. The significant and striking observation of resistance to higher levels of azo blue dye toxicity by the strain Streptomyces DJP15 enables its use for in situ bioremediation, because it indicates the ability of the strain to withstand shock loads of dye during the bioremediation process. The incubation-temperature results showed no deactivation of the degradation ability of the isolate up to 45 °C, indicating its thermotolerance. Therefore, the isolate Streptomyces DJP15 could be useful for on-field processes in a country like India, where temperatures reach above 40 °C in some parts of the country during the summer season. However, further investigation is needed to understand the enzymes and other mechanisms involved in the degradation of the azo blue dye by the isolate Streptomyces DJP15 in order to harness this property for bioremediating dye-contaminated habitats for a clean environment and clean nature for all life.
5 . Fig. 1.Degradation of azo blue by potential isolate Streptomyces DJP15 at shake and static condiotnsincubation.After 48 h, the percent degradation of azo blue dye was found to be 76.03, 77.68, 80.99, 81.81 and 81.81 % at inoculum sizes of 1, 2, 3, 4, and 5 % v/v respectively.When the inoculum size was increased up to 3.0 % (v/v), the extent of degradation increased to 80.99% at 48 h of incubation.No drastic or considerable increase or decrease in the percent degradation was observed when the inoculum size was increased to 4.0 and 5.0 % (v/v).The maximum dye degradation (80.99 %) was attained at 3.0 % (v/v) inoculum at 48 h.Therefore, 3.0 % (v/v) dose of Streptomyces DJP15 inoculum was selected as optimum for the degradation of azo blue dye.Optimization of pHEffect of pH (6.0 -8.0) on the degradation of azo blue dye by Streptomyces DJP15 was shown
Influence of sintering parameters on the microstructure and mechanical properties of nanosized 3Y-TZP ceramics

The fracture toughness of 3Y-TZP ceramics obtained from a nanocrystalline powder with an optimized microstructure and highly transformable tetragonal grains was investigated. Samples of ZrO2-3 mol% Y2O3 were sintered at temperatures between 1250 and 1400 °C, with isothermal holding times of up to 16 h. Samples sintered at 1250 °C exhibited relative densities ranging between 92% and 98%, which increased with increasing isothermal duration, while samples sintered at 1300, 1350, or 1400 °C achieved densification higher than 98% for all isothermal treatments. Crystallographic analysis indicated the presence of a highly transformable ZrO2-tetragonal phase (c/a√2 = 1.0148-1.0154) for all conditions studied. The average grain size ranged from 0.18±0.04 µm (1250 °C-0 h) to 0.64±0.08 µm (1400 °C-16 h), indicating an activation energy of 141.3 kJ/mol for grain growth and a growth exponent of 2.8. Both Vickers hardness (1025 to 1300 HV) and fracture toughness (4.0 to 7.8 MPa·m^1/2) increased with increasing sintering temperature and time due to increased densification, reduced porosity, and maintenance of potentially high fracture toughness by the t→m phase transformation.

INTRODUCTION

The use of yttria-stabilized tetragonal zirconia-based ceramics, ZrO2(Y2O3), also called Y-TZP, is widespread in the field of dentistry, among others, due to its biocompatibility, aesthetic characteristics and, in particular, its excellent mechanical properties [1,2]. In particular, a high fracture toughness results from the tetragonal-to-monoclinic (t→m) phase transformation that occurs in the wake field of a propagating crack and which is accompanied by a volumetric expansion of around 4% to 5% [3,4]. This phase transformation, in turn, generates stress fields in the ceramic matrix, which hinder crack propagation, thus improving fracture toughness.
In general, ceramics with an average grain size of less than 100 nm are called nanocrystalline ceramics. With reduced grain sizes in the sintered body, nanocrystalline ceramics may have different properties and considerable merits compared to conventional ceramics [5]. In some cases, nanocrystalline ceramics are used as a base material for the development of components, and sometimes they are applied as a secondary phase (reinforcement) added to ceramic matrices with the intention of improving the fracture toughness and sinterability of these materials. However, there are some problems associated with the use of nanocrystalline ceramic powders, in particular the difficulty of eliminating aggregates and agglomerates, as well as the difficulty of compaction and of controlling grain growth during the sintering process. Starting in the 1990s, a number of companies developed easily compressible ZrO2 nanosized powders, stabilized with 3 mol% Y2O3, with added binders [6,7]. Due to this scenario, a growing research interest exists in controlling grain growth during sintering and evaluating the effects on the mechanical properties. Complete densification of 3Y-TZP ceramics using micrometric or sub-micrometric powders can be achieved at sintering temperatures of 1500 °C or higher with prolonged isothermal holding periods. As a result, the increased diffusion may cause growth of the ZrO2 grains and lead to a loss of stability of the tetragonal phase, even allowing for heterogeneous grain growth. In consequence, a population of low-tetragonality t'-ZrO2 grains is generated [8,9], with a concomitant loss of transformation ability (fracture toughness) when the material is subjected to tensile stresses at the tip of a propagating crack.
The use of nanosized ceramic powders is of great scientific and technological interest due to their increased sinterability, which may reduce the sintering temperature and/or time, also resulting in extremely fine-grained microstructures with improved mechanical properties. In the case of ZrO2-based ceramics using nanosized particles, an improvement of the fracture toughness by microcracking has been reported [6-9]. Therefore, the sintering cycle has to be carefully chosen in order to take advantage of the unique properties of nanostructured materials. Previous studies [9,10] indicate that the effectiveness of the t→m phase transformation of Y-TZP ceramics is associated with the transformability of the tetragonal grains present in the sintered material. This transformability is associated with shearing stresses present in the anisotropic tetragonal phase, which are more pronounced for structures with a high ratio of the lattice parameters c and a. Thus, maximizing the fraction of transformable tetragonal grains in the region ahead of the crack tip allows for increased fracture toughness; specifically for 3Y-TZP ceramics, grain sizes between 0.1 and 1.0 µm should be targeted [9], as grain sizes outside this range present either low transformability or spontaneous transformation [9]. In this work, the aim was to specifically investigate the effects of the sintering parameters on a 3Y-TZP nanoparticulate powder that allows for the unique metastability of highly transformable tetragonal ZrO2 grains, developing microstructures with an average grain size smaller than 1 µm and correlating the densification, amount of tetragonal ZrO2 phase, and microstructure (grain growth) with the resulting mechanical properties of low-temperature sintered 3Y-TZP ceramics.
EXPERIMENTAL PROCEDURE

The starting powder used was a commercial nanosized 3Y-TZP powder (TZ-3YE, Tosoh) with a specific surface of 16.2±2.0 m²/g, an average crystallite size of 40 nm, and containing 3.6 wt% of binder. Dilatometry and sintering of Y-TZP samples: 3Y-TZP bars (4×4×8 mm) and discs (Ø12×3 mm) were compacted by cold uniaxial pressing under applied pressures between 12 and 74 MPa for 60 s. The samples were sintered under two different conditions: a) the sinterability of the Y-TZP specimens was evaluated by dilatometry using a dilatometer (DIL-402C, Netzsch) under argon flux throughout the heating and cooling cycle, adopting a heating rate of 2 °C/min up to the maximum sintering temperature of 1250, 1300, 1350, or 1400 °C; the shrinkage was measured by a linear variable differential transducer (LVDT) with a sensitivity of 0.01 mm; and b) solid-state sintering, using an electrically heated furnace (F1650, Maitec), with a heating rate of 2 °C/min up to the maximum sintering temperature of 1250, 1300, 1350, or 1400 °C with various isothermal holding times up to 16 h. Cooling to room temperature was done at a rate of 10 °C/min. Density and phase analysis: the relative green density of the compacted samples was determined by the ratio of the geometrical density of the samples and the theoretical density of the material (ρth) of 6.05 g/cm³. The density of the sintered samples was determined by the immersion method, using Archimedes' principle. The residual porosity P was calculated by the following equation [11]:

P (%) = (1 − ρ/ρth) × 100 (A)

The phase composition of the starting powders and sintered samples was determined by X-ray diffraction analysis using a diffractometer (XRD-6000, Shimadzu). The analysis was conducted with CuKα radiation in the 2θ range of 20° to 80° with a step width of 0.05° and a counting time of 3 s/position. The X-ray diffraction peaks were identified by comparison with the JCPDS files [12].
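The density and porosity bookkeeping described above can be sketched in a few lines of Python. The helper names and the example density are illustrative, not from the paper; only the theoretical density of 6.05 g/cm³ is taken from the text:

```python
# Sketch (assumed helpers): relative density and residual porosity of a
# sintered 3Y-TZP body from its measured (Archimedes) density, using the
# theoretical density quoted in the text.
RHO_TH = 6.05  # g/cm^3, theoretical density of 3Y-TZP

def relative_density(rho_measured: float) -> float:
    """Relative density in % of theoretical."""
    return 100.0 * rho_measured / RHO_TH

def residual_porosity(rho_measured: float) -> float:
    """Residual porosity in %, i.e. the unfilled fraction of the body."""
    return 100.0 * (1.0 - rho_measured / RHO_TH)

# Example: a hypothetical sample measured at 5.93 g/cm^3
print(round(relative_density(5.93), 1))   # 98.0 %
print(round(residual_porosity(5.93), 1))  # 2.0 %
```

By construction the two quantities always sum to 100%, which matches the way Fig. 3a and 3b of the paper mirror each other.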
The monoclinic zirconia content at the sample surface (FM) was calculated from the integrated peak areas of the (−111)M and (111)M planes of the monoclinic phase in relation to the peak area of the (101)T plane of the tetragonal phase, according to [13]:

FM = [I(−111)M + I(111)M] / [I(−111)M + I(111)M + I(101)T] (B)

The calculations of the lattice parameters were done with the Rietveld refinement technique, using the FullProf Suite 3.0 software [14-16]. The tetragonality, expressed by the c/a ratio of the ZrO2 grains, of samples sintered at 1250 °C/0 h and 1400 °C/16 h was determined by the model proposed by Krogstad et al. [17,18], valid for ZrO2 stabilized with different amounts of Y2O3. After sintering, both surfaces of the specimens were ground and polished successively with 9, 6, and 1 µm diamond suspensions. The microstructures of the sintered samples were observed using a scanning electron microscope (SEM, JSM-5310, Jeol). Samples were thermally etched at 1250 °C for 15 min with a heating rate of 25 °C/min. The grain size distributions of the sintered samples were determined using the ImageJ software. A population of at least 400 grains was analyzed for each sintering condition studied. Mechanical properties: Vickers hardness of the sintered 3Y-TZP specimens was measured using hardness testing equipment (Time Hardness). The polished surface of the samples was indented with a load of 1000 gf for 30 s, conducting 30 measurements (n=30) for each sintering condition studied. Furthermore, the fracture toughness (KIc) was measured from the length of the cracks emanating from the Vickers indentation marks, as proposed by Niihara et al. [19]:

KIc = 0.0089 (E/HV)^(2/5) F / (a·l^(1/2)) (C)

where KIc is the fracture toughness (MPa·m^1/2), l is the length of the crack, measured from the tip of the indentation to the tip of the crack (mm), a is the half-length of the indentation diagonal (mm), HV is the Vickers hardness (GPa), E is Young's modulus of 3Y-TZP (195 GPa), and F is the indentation load used in the Vickers hardness test (N).
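As a numerical sanity check, the indentation fracture toughness can be evaluated in Python. Note the hedge: the constant 0.0089 and the (E/HV)^(2/5) form correspond to one commonly cited Palmqvist-crack version of Niihara's equation; the exact constant and unit conventions used in ref. [19] of the paper may differ, and all input values below are illustrative only:

```python
# Hypothetical helper: Palmqvist-form indentation fracture toughness,
# K_Ic = 0.0089 * (E/HV)^(2/5) * F / (a * sqrt(l)), with lengths in metres,
# the load in N and moduli in Pa, returning MPa*m^(1/2).
# The constant 0.0089 is an assumed literature value, not taken from the paper.
import math

def k_ic_palmqvist(E_pa: float, hv_pa: float, F_n: float,
                   a_m: float, l_m: float) -> float:
    k = 0.0089 * (E_pa / hv_pa) ** 0.4 * F_n / (a_m * math.sqrt(l_m))
    return k / 1e6  # Pa*m^0.5 -> MPa*m^0.5

# Illustrative inputs: E = 195 GPa (from the text), HV ~1250 -> ~12.3 GPa,
# F = 1000 gf ~ 9.81 N, a = 19 um half-diagonal, l = 38 um crack length.
print(round(k_ic_palmqvist(195e9, 12.3e9, 9.81, 19e-6, 38e-6), 2))
```

With these assumed inputs the result lands in the low MPa·m^1/2 range, the right order of magnitude for the 4.0-7.8 MPa·m^1/2 values reported in the paper.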
RESULTS AND DISCUSSION

Characterization of the starting powder: Fig. 1 shows the X-ray diffraction (XRD) pattern, the compaction curve, and an SEM micrograph of the zirconia powder used. A majority of the tetragonal (t-ZrO2) phase was found in the starting powder, as well as 18 vol% of the monoclinic (m-ZrO2) phase (Fig. 1a). It can be noted from Fig. 1b that the material reached its maximum green density level for compaction pressures above 60 MPa, remaining constant for higher pressures. This behavior is believed to be the result of the binder present in the commercial powder. Furthermore, the SEM micrograph (Fig. 1c) shows that the starting powder consisted of spherical agglomerates with sizes ranging between 20 and 50 µm, which were possibly formed during a spray-drying process of the powders with 3.6 wt% of binder added in order to facilitate the compaction of this powder. Dilatometry: Fig. 2a presents the linear shrinkage and shrinkage rate of a ZrO2 sample as a function of the sintering temperature up to 1400 °C, while Fig. 2b presents a comparison of the shrinkage rates of two samples compacted at the lowest and highest pressures investigated in this work, 12.3 and 73.5 MPa, and sintered at 1400 °C with an isothermal holding time of 60 min. It was observed that at temperatures between 200 and 400 °C, region I, a first shrinkage step occurred, corresponding to the elimination of the organic binder in the starting material. The highlighted region II comprised the effective onset of the densification process, where the solid-state sintering mechanisms acted to eliminate porosity. In microparticulate Y-TZP ceramics, this region where densification usually begins is close to 1150 °C [20], while in nanocrystalline materials with a particle size of about 40 nm, neck formation, and the consequent densification and shrinkage, occurred at temperatures close to 980 °C.
It can be noted that before the sintering isotherm begins, the effective shrinkage gain with increasing compaction pressure was about 2-3%. Thus, for further studies in this work, a compaction pressure of 73.5 MPa was chosen for the preparation of samples. Characterization of sintered bodies: the relative density and residual porosity results of the Y-TZP samples as a function of the sintering temperature are shown in Fig. 3. The extremely fine, nanometric particle size of the zirconia starting powder used allowed for high densification to be reached during sintering. At all sintering temperatures studied, relative densities higher than 92% were achieved, and at the sintering temperatures of 1350 and 1400 °C all samples exhibited relative densities higher than 98%, even without isothermal holding (Fig. 3a). At lower temperatures, 1250 and 1300 °C, relative densities exceeding 98% were only obtained after an isothermal treatment of 16 or 8 h, respectively. Similarly, the residual porosity (Fig. 3b) was progressively reduced with increasing temperature and isothermal holding time. The X-ray diffraction patterns of samples sintered at different temperatures and isothermal holding times are shown in Fig. 4. Similar diffraction patterns are observed for all samples, showing only diffraction peaks of the tetragonal ZrO2 phase, independently of the sintering temperature or holding time investigated. Furthermore, no monoclinic ZrO2 was detected after sintering, indicating the complete conversion of this phase, present in the starting powder (Fig. 1a), into tetragonal ZrO2. The results of the Rietveld refinement for the t-ZrO2 lattice parameters and the calculated tetragonality of samples sintered at 1250 °C/0 h, 1350 °C/8 h, and 1400 °C/16 h are summarized in Table I. Scott [21], reporting on the ZrO2-Y2O3 binary system, indicates that ZrO2 ceramics stabilized with 3 mol% Y2O3 present the tetragonal phase with some associated cubic phase.
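The tetragonality listed in Table I is simply c/(a√2), computed from the refined lattice parameters on pseudo-cubic axes. A minimal sketch, using assumed, typical 3Y-TZP lattice parameters rather than the paper's Table I values:

```python
# Sketch: tetragonality of the t-ZrO2 cell expressed as c/(a*sqrt(2)),
# the form used in the text, from Rietveld lattice parameters.
import math

def tetragonality(a_nm: float, c_nm: float) -> float:
    """c/(a*sqrt(2)) for tetragonal ZrO2; a and c in the same unit."""
    return c_nm / (a_nm * math.sqrt(2))

# Illustrative parameters (assumed, close to typical 3Y-TZP values):
# a = 0.3605 nm, c = 0.5177 nm
print(round(tetragonality(0.3605, 0.5177), 4))  # 1.0154
```

A perfectly cubic cell (c = a√2 on these axes) gives a tetragonality of exactly 1, so values above ~1.015, as reported here, indicate a distinctly tetragonal, transformable structure.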
However, in recent studies [17,18], mathematical models associated with the X-ray diffraction technique propose that an intermediate phase called t'-ZrO2, composed of grains whose lattice parameter c approaches a, can coexist with the stabilized t-ZrO2 phase when 3Y-TZP is sintered at temperatures and times that allow for the migration of part of the Y2O3 to specific t' grains, reported as 'semi-cubic'. In this way, a modification of the original unit cell structure may occur, which changes the relationship between the parameters c and a of certain grains. Thus, after determining the crystallographic parameters of the materials sintered at different temperatures and holding times, the results shown in Table I indicated that the tetragonality (c/a ratio) of the materials underwent little variation, with values of c/a√2 of 1.015, regardless of the sintering conditions. As a result, the tetragonality of the grains was not significantly altered by the sintering temperatures and holding times studied. Fig. 5 shows representative micrographs of samples sintered at 1250 and 1400 °C with isothermal holding times of 0, 2, 4, 8, and 16 h. Furthermore, Fig. 6a presents the average grain size as a function of the sintering temperature and isothermal holding time, and Fig. 6b shows a correlation between the grain size and the relative density of the sintered samples. An increase of the average size of the zirconia grains was observed with increasing sintering temperature and isothermal holding time (Fig. 6a). The grain size varied between 0.18±0.04 and 0.45±0.07 µm for samples sintered at 1250 °C without and with 16 h of isothermal holding, respectively, and between 0.22±0.06 and 0.64±0.08 µm for samples sintered at 1400 °C without and with 16 h of isothermal holding.
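The grain-growth kinetics extracted from these data (a growth exponent n = 2.8 and activation energy of 141.3 kJ/mol, as reported below) follow the usual generalized growth law G^n − G0^n = k0·t·exp(−Q/RT). A sketch, where the pre-exponential k0 is a purely hypothetical value for illustration; only n and Q are taken from the paper:

```python
# Sketch of the generalized grain-growth law implied by the fitted values.
import math

R = 8.314       # J/(mol*K), gas constant
N_EXP = 2.8     # grain-growth exponent reported in the text
Q_GG = 141.3e3  # J/mol, activation energy reported in the text

def grain_size_um(g0_um: float, t_h: float, temp_c: float, k0: float) -> float:
    """Grain size after t_h hours at temp_c, from G^n - G0^n = k*t (k in um^n/h)."""
    k = k0 * math.exp(-Q_GG / (R * (temp_c + 273.15)))
    return (g0_um ** N_EXP + k * t_h) ** (1.0 / N_EXP)

# The ratio of rate constants between 1400 and 1250 C is independent of k0:
ratio = math.exp(-Q_GG / (R * 1673.15)) / math.exp(-Q_GG / (R * 1523.15))
print(round(ratio, 2))  # 2.72: growth is ~2.7x faster at 1400 C than at 1250 C
```

The modest rate-constant ratio between the temperature extremes is consistent with the narrow spread of starting grain sizes (0.18-0.22 µm) the paper observes at t = 0.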
The calculated grain growth exponent (n) and the activation energy for grain growth (Qgg) were n = 2.8 and Qgg = 141.3 kJ/mol for the Y-TZP powder studied, indicating that grain growth was controlled by grain boundary diffusion [7,20,22]. In Fig. 6b, the average grain size is correlated to the relative density. The graph indicates that grain growth occurred simultaneously with the elimination of residual porosity at all sintering temperatures studied. Analyzing the shrinkage behavior obtained by dilatometry (Fig. 2), it was observed that at about 980 °C the densification process began, due to the high reactivity of the nanocrystalline powder, with crystallite sizes of around 40 nm. From this temperature until the final sintering temperatures (1250 to 1400 °C) were reached, diffusional mechanisms acted in the material, reducing the free internal surface area and eliminating porosity, besides initiating grain growth. On the other hand, the materials reached high densification levels, with relative densities higher than 92%, and presented average submicron grain sizes between 0.18 and 0.22 µm, with low size variations between sample groups. In general, when the relative density exceeds 90% [23-28] during the sintering of submicron-sized particles, the bonding between the particles due to grain boundary formation is well developed, and most pores in the sintered body are closed. Furthermore, grain growth occurs until the final sintering temperature is reached and also during the isothermal holding time, as was the case in this work. Densification also continues to increase, although the few and well-dispersed residual pores may still hinder grain growth kinetics during this stage. In consequence, first, the final average grain sizes of the sintered samples were very close, with only small variations, due to the very small ZrO2 grains formed in the early sintering stage of nanocrystalline Y-TZP powders; second, the residual porosity (Fig.
3b) was low at the moment the isothermal threshold began (t=0); and, third, the sintering temperatures adopted in this study (1250-1400 °C) allowed only moderate diffusivity compared to the usual sintering temperatures of 1500 to 1600 °C [6,20]. Therefore, the sintering of these nanoparticulate powders resulted in normal grain growth during the isothermal holding time, and refined microstructures were obtained under all sintering conditions studied. Mechanical properties: in Fig. 7a the Vickers hardness, and in Fig. 7b the fracture toughness, of the sintered samples are presented as a function of the sintering parameters adopted. Furthermore, the relationship between the number of grains per unit area (grain density) as a function of temperature and isothermal holding time is shown in Fig. 7c. A slight increase in hardness was observed with increasing duration of the isotherm for all sintering temperatures (Fig. 7a). This behavior was associated with the increase in densification and reduction in porosity, as illustrated in Fig. 3. Porosity exponentially reduces the hardness of the material, and therefore its elimination improves the final hardness of the sintered body. Finally, maximum hardness ranged from 1250 to 1300 HV, which are typical values for Y-TZP-based ceramics [29,30]. The fracture toughness (Fig. 7b) also increased with increasing sintering temperature and isothermal holding time. The fracture toughness of zirconia-based ceramics is associated with three material characteristics: i) tetragonal phase content, ii) ZrO2 grain size, and iii) residual porosity. Microstructural analysis revealed average grain sizes ranging from 0.18 to 0.22 µm for materials sintered without an isothermal threshold and 0.45 to 0.64 µm for prolonged isothermal treatments. Different authors claim, based on previous studies, that dense, mostly tetragonal 3Y-TZP ceramics may have grains with different degrees of toughening effect by the t→m phase transformation [31-34].
Specifically, when the grain size is less than or near 100 nm, the grains are highly metastable, tending to remain tetragonal even when exposed to the stress field generated by crack propagation, thus without undergoing the t→m transformation toughening mechanism. On the other hand, materials with zirconia grains larger than 1 µm tend not to exhibit thermodynamic metastability, leading to spontaneous phase transformation or, in some cases, to a further transformation into the cubic phase, with depletion of Y in neighboring grains and generation of monoclinic ZrO2. Consequently, materials exhibiting an average grain size close to these two limits tend to have reduced mechanical properties, especially with regard to fracture toughness. Fig. 7c shows the grain density as a function of sintering temperature and time. It was noted that without an isotherm, or for short isothermal periods, the number of grains per area was higher, regardless of the temperature. However, as already shown, many of these grains were of a very small size. For isothermal holding times longer than 4 h, the grain density tended to stabilize, with grain sizes ranging from 0.35 to 0.64 µm. The third factor that significantly influences the fracture toughness of Y-TZP ceramics is the residual porosity (Fig. 3b). In our work, fracture toughness increased as porosity was reduced, i.e., an inverse relation between toughness and porosity was found, as confirmed by other works [35-37]. It is well established that the fracture toughness of tetragonal zirconia (Y-TZP) based ceramics is the result of its peculiar t→m phase transformation mechanism [9,38].
This transformation occurs basically in two stages: transition of the tetragonal into the monoclinic structure due to the displacement of the Zr⁴⁺ ions, and the diffusion of oxygen ions to the oxygen sites in the monoclinic structure, causing monoclinic longitudinal sheets to grow within the tetragonal grain, just as they grow laterally due to the lateral migration of O²⁻ ions [39]. As a crack propagates, tensile stress is associated with the crack opening. The tetragonal grains adjacent to the crack tip undergo compression and create a stress zone associated with the applied mechanical stress. The variation of the total free energy for the transformation to occur is the energy balance between the variation of the chemical free energy, the variation of the surface free energy, and the density of interaction energy (associated with the application of external energy) [40,41]. In the specific case of this work, for all sintered samples studied, the mechanical tests were carried out at room temperature, the chemical composition referring to the alloying oxide (3 mol% Y2O3) remained constant, and the energy of interaction was considered constant because the same indentation load was adopted in the tests. Thus, the transformation occurred mainly due to the factor related to the free surface energy; in other words, the tetragonal microstructure was the dominating factor in the transformation. Considering that the t-ZrO2 grains were the main parameter responsible for the fracture toughness of Y-TZP ceramics, the extent of the transformation region (shielding) around the crack tip was directly linked to the volumetric fraction of transformable t-ZrO2 grains. In the specific case of this work, we adopted a microstructural design that allowed for a complete formation of transformable tetragonal grains, confirmed by the X-ray diffraction analysis, which indicated a stable tetragonality of the grains regardless of the sintering conditions adopted.
As the tetragonality was not influenced within the limits of this study, the shear stress and consequent deformation induced by twinning did not significantly affect toughness. The results indicated that porosity was progressively reduced in the sintered Y-TZP ceramics due to the increase in temperature and isothermal holding time. Furthermore, it is expected that the elastic modulus undergoes a proportional reduction with increasing porosity, and that porosity reduces the effective phase transformation zone around the crack tip (shielding), because pores represent voids which retain part of the cascade effect of shearing or compressive stresses resulting from transformed grains located in this region. The increment of toughness by plane deformation (ΔKC) can be expressed by [42,43]:

ΔKC = A·f·E·εt·√h / (1 − ν) (E)

where A is a dimensionless constant determined by the shape of the transformation zone, f is the volume fraction of the tetragonal phase in the transformation zone, E is Young's modulus, εt is the dilatational deformation involved, h is the size of the transformation zone, and ν is Poisson's ratio. According to this model, considering the microstructure and the porosity of the Y-TZP ceramics developed in this work, it can be stated that sintering at lower temperatures and shorter holding times results in higher porosity and smaller average grain size and therefore a reduction in the parameters f, E, and h (Eq. E). In addition, as there are no variations in the tetragonality and, therefore, the transformability of the tetragonal grains can be considered similar, the parameter εt can be considered constant, as can Poisson's ratio. On the other hand, the increase in the size of the tetragonal grains, on this dimensional scale, leads to a small increase in the effective transformation zone around the crack tip.
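For orientation, the plane-strain toughening increment described above can be evaluated numerically. The shape constant A ≈ 0.3 and all input values below are assumed, illustrative figures (only E = 195 GPa and the ~4% dilatation come from the text); the exact form used in refs. [42,43] may differ:

```python
# Sketch of a transformation-toughening increment of the form
# dKc = A * f * E * eps_t * sqrt(h) / (1 - nu); A ~0.3 is an assumed
# literature-style value for a dilatational zone, not from the paper.
import math

def delta_kc(f: float, E_gpa: float, eps_t: float, h_um: float,
             nu: float = 0.3, A: float = 0.3) -> float:
    """Toughening increment in MPa*m^(1/2)."""
    E = E_gpa * 1e3      # GPa -> MPa
    h = h_um * 1e-6      # um -> m
    return A * f * E * eps_t * math.sqrt(h) / (1.0 - nu)

# Illustrative numbers (assumed): f = 0.5, E = 195 GPa, eps_t = 0.04
# (~4% dilatation), zone size h = 1 um, nu = 0.3
print(round(delta_kc(0.5, 195, 0.04, 1.0), 2))  # 1.67 MPa*m^(1/2)
```

Because ΔKc scales with √h, even the small increase in zone size that accompanies larger tetragonal grains adds measurably to the toughness, in line with the trend in Fig. 7b.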
Thus, within the experimental limits of this article, the toughening and, consequently, the fracture toughness of the materials containing smaller grain sizes and greater porosity are theoretically lower than those of dense materials with larger zirconia grains, corroborating the results presented in Fig. 7b.

CONCLUSIONS

The fracture toughness of ceramics obtained from a nanocrystalline 3Y-TZP powder and sintered between 1250 and 1400 °C was investigated. Samples sintered at 1250 °C showed relative densities between 92% and 98%, depending on the isothermal holding time. For sintering at 1350 and 1400 °C, all samples exhibited relative densities higher than 98%, independent of any additional isothermal holding. The Rietveld refinement analysis indicated that the only crystalline phase present under all sintering conditions studied was t-ZrO2, whose tetragonality enabled high transformability during activation of the t→m transformation toughening mechanism. In addition, the microstructural analysis revealed that the average grain sizes ranged from 0.18 to 0.22 µm for materials sintered without an isothermal threshold and from 0.45 to 0.64 µm for prolonged isothermal treatments of 16 h. The hardness increased proportionally with the reduction of the residual porosity, while the fracture toughness presented a progressive increase related to the increase in the sintering temperature as well as in the isothermal holding (4 to 8 MPa·m^1/2). This behavior was due to the increase in the effective transformation zone around the crack tip, which is reported to increase with increasing average grain size without loss of transformability, as well as to the gradual reduction of residual porosity. Thus, an efficient strategy to densify nanoparticulate powders at low temperatures while obtaining a high fracture toughness may be the use of isothermal holding times that result in refined and highly transformable microstructures.
Targeting the cancer-associated fibroblasts as a treatment in triple-negative breast cancer Increased collagen expression in tumors is associated with increased risk of metastasis, and triple-negative breast cancer (TNBC) has the highest propensity to develop distant metastases when there is evidence of central fibrosis. Transforming growth factor-β (TGF-β) ligands regulated by cancer-associated fibroblasts (CAFs) promote accumulation of fibrosis and cancer progression. In the present study, we have evaluated TNBC tumors with enhanced collagen to determine whether we can reduce metastasis by targeting the CAFs with Pirfenidone (PFD), an anti-fibrotic agent as well as a TGF-β antagonist. In patient-derived xenograft models, TNBC tumors exhibited accumulated collagen and activated TGF-β signaling, and developed lung metastasis. Next, primary CAFs were established from 4T1 TNBC homograft tumors, TNBC xenograft tumors and tumor specimens of breast cancer patients. CAFs promoted primary tumor growth with more fibrosis and TGF-β activation and lung metastasis in 4T1 mouse model. We then examined the effects of PFD in vitro and in vivo. We found that PFD had inhibitory effects on cell viability and collagen production of CAFs in 2D culture. Furthermore, CAFs enhanced tumor growth and PFD inhibited the tumor growth induced by CAFs by causing apoptosis in the 3D co-culture assay of 4T1 tumor cells and CAFs. In vivo, PFD alone inhibited tumor fibrosis and TGF-β signaling but did not inhibit tumor growth and lung metastasis. However, PFD inhibited tumor growth and lung metastasis synergistically in combination with doxorubicin. Thus, PFD has great potential for a novel clinically applicable TNBC therapy that targets tumor-stromal interaction. Breast cancer is the most commonly diagnosed cancer among women and the second-most frequent cause of cancer death. 
Of the various classes of human breast cancer, triple-negative (ER−PR−HER2−) breast cancer (TNBC) is the most aggressive type, and no targeted therapy is available. In addition, TNBC has the highest propensity to develop distant metastases and shows poor prognosis when there is evidence of central fibrosis [12]. TGF-β ligands are often enriched in the TNBC tumor microenvironment [13-16]. This suggests that targeting the desmoplasia/fibrosis and TGF-β signaling in TNBC could be of value. In the present study, we have evaluated TNBC tumors that have enhanced collagen expression to determine whether we can reduce metastasis by targeting the CAFs with Pirfenidone (PFD). PFD is an orally administered pyridone (5-methyl-1-phenyl-2-[1H]-pyridone) that exhibits antifibrotic properties in a variety of in vitro and animal models of fibrosis as a TGF-β antagonist, and has been clinically developed for the treatment of idiopathic pulmonary fibrosis (IPF) [17,18].

TNBC xenograft tumors exhibit accumulated collagen and activated TGF-β signaling, and metastasize to lungs

To determine fibrosis and TGF-β activation in TNBC as a model, we used patient-derived xenograft (PDX) models that retain the essential features of the original patient tumors and metastasize to specific sites (HCI-001 and HCI-002) [19,20], and thus are authentic experimental systems for studying human cancer metastasis. In these models, the tumors engrafted in the mammary glands of immunodeficient NOD/SCID mice grew to approximately 1 cm in 5-8 weeks (Supplementary Figure S1A). Enhanced collagen accumulation was exhibited in the primary tumors (Figure 1A). Lung metastasis was also detected in the HCI-001 model (Figure 1B and Supplementary Figure S1B). Although some reports show collagen deposition at metastatic sites in mice and patients [21,22], we did not detect marked collagen accumulation in the lung metastases (Figure 1B, right panel).
To evaluate TGF-β signaling, we determined the expression of phospho-SMAD2 and phospho-SMAD3 as intracellular markers of TGF-β signaling [23-25]. Phospho-SMAD2 was widely expressed in primary tumors and stroma, while phospho-SMAD3 was sporadically expressed in primary tumors, but not in stroma (Figure 1C). Phospho-SMAD2 and phospho-SMAD3 were expressed in the lung metastatic tumors and also in the stroma around the large metastases, but not in the stroma around the micrometastases (Figure 1D and Supplementary Figure S1D). These observations in our TNBC xenograft models are consistent with the TNBC-related fibrosis and TGF-β signaling shown previously [12-16].

CAFs promote primary tumor growth and lung metastasis in a TNBC mouse model

CAFs have been reported to promote breast tumor progression in vitro and in vivo, although it has yet to be determined whether normal mammary fibroblasts suppress or promote breast cancers [26-29]. We isolated CAFs from tumor specimens of luminal-type breast cancer patients (Figure 2A, left panel). Vimentin and fibroblast activation protein (FAP) are markers commonly used for the identification of CAFs, as FAP is not expressed in adult normal tissue [1,6,10,29,30]. The cultured cells had an elongated appearance and reduced cell-cell contact, and expressed vimentin and the CAF marker FAP, but not pan-cytokeratin, an epithelial tumor marker (Figure 2A, middle and right panels and Supplementary Figure S2A and S2B). We then cultured CAFs derived from the TNBC xenograft tumors (Supplementary Figure S2C). Since these CAFs did not express human-specific vimentin (Supplementary Figure S2D), the data suggest that the dominant fibroblast population in the xenograft models is derived from the mouse. We next determined the effects of CAFs on TNBC in vivo. We cultured CAFs derived from 4T1, a mouse TNBC cell line, homograft tumors (Supplementary Figure S2E and MATERIALS AND METHODS).
When we transplanted 4T1 cells with or without these CAFs into mammary glands of BALB/c mice, the CAFs promoted primary tumor growth (Figure 2B) and increased lung metastatic tumor size and numbers (Figure 2C). Significantly, CAFs enhanced collagen accumulation and increased the expression levels of phospho-SMAD3 in primary tumors (Figure 2D and 2E). These results suggest that CAFs may enhance TNBC progression through TGF-β activation, as described previously [1,10,11].

PFD has inhibitory effects on cell viability and collagen production in CAFs

Our data from the PDX tumors indicate that some TNBC have increased fibrosis and TGF-β. We therefore hypothesized that an anti-fibrotic agent that is also a TGF-β antagonist may be effective for TNBC treatment. PFD exhibits antifibrotic properties in a variety of in vitro and animal models of fibrosis [31–35], and has shown efficacy and safety in patients with liver fibrosis, renal fibrosis and idiopathic pulmonary fibrosis (IPF) [36–40]. In animal models of fibrosis in the lung, liver, kidney and heart, PFD reduces fibrosis and downregulates TGF-β and other molecules [33,41–45]. We first evaluated toxicity of PFD in normal mammary organoids by 3D assay and confirmed that PFD was not toxic at 100 μM (Supplementary Figure S3A). According to previous reports, even higher concentrations than those in our tests do not cause death of normal fibroblasts [31,33]. However, PFD decreased the number of live CAFs and increased the number of dead CAFs from tumor specimens of luminal-type breast cancer patients (Figure 3A). PFD also inhibited growth of mouse CAFs isolated from the TNBC xenograft tumors in 2D culture (Supplementary Figure S3B). We then dissociated TNBC xenografts and sorted them into CD49f+ (tumor) and CD49f− (mainly stroma) cell populations by flow cytometry. Cell viability of both CD49f+ and CD49f− cells, as measured by the MTT assay, decreased with increasing concentrations of PFD (Figure 3B).
Since those cells did not proliferate in the assay, this result indicates that PFD promotes cell death. In addition, PFD inhibited collagen production by mouse CAFs (Figure 3C). These results show that PFD is an effective regulator of both CAF viability and collagen production in culture.

Figure 2 legend (panels B–E): B. We transplanted 4T1 cells (1×10^4) without or with CAFs (2×10^4) into mammary glands of BALB/c mice (n=5). Representative photographs of 4T1 primary tumors after transplantation with or without the CAFs are shown (left panel). CAFs promoted primary tumor growth. Tumor volume (mm³) was measured by V = 0.52×W²×L, W = width (mm), L = length (mm). *p<0.02 (right panel). C. Representative photographs of lungs showed that CAFs promoted lung metastasis. Arrows indicate visible lung metastatic tumors and a broad arrow indicates a metastatic lymph node in the right brachium (left panel). Volume (mm³) of single metastatic tumors was measured by V = 0.52×W²×L. *p<0.05 (middle panel). H&E staining showed that CAFs increased lung metastatic tumor number (*p<0.02) (right panel). n=5. D. Representative photographs of picro-sirius red staining showed that CAFs promoted primary tumor fibrosis (left panel). Collagen deposition marked by picro-sirius red staining was quantified by using ImageJ software. CAFs enhanced collagen accumulation in primary tumors. n=5, *p<0.01 (right panel). E. Primary tumors were immunostained with an anti-phospho-SMAD3 antibody (red). DAPI (blue) stained nuclei. Representative photographs showed that CAFs enhanced the expression level of phospho-SMAD3 in primary tumors (left panel). Expression levels of phospho-SMAD3 in primary tumors were quantified by using ImageJ software. n=3, *p<0.02 (right panel).

Figure 3 legend (partial, panels B–C): B. CD49f+ cells (tumor cells) and CD49f− cells (mainly stromal cells) were sorted. 4,000 CD49f+ cells or 10,000 CD49f− cells were plated with each PFD concentration for culture, and the MTT assay was performed on day 15. Cell viability of CD49f+ and CD49f− cells decreased with higher concentrations of PFD. *p<0.05, **p<0.01, ***p<0.001 compared to the control conditions. C. We cultured CAFs derived from HCI-002 TNBC xenograft tumors. Immunofluorescence of the cultured CAFs used an anti-collagen I (green) antibody. DAPI (blue) stained nuclei. Collagen production was decreased by PFD.

PFD inhibits TNBC growth induced by CAFs

Since CAFs can promote cancer progression [1,7–10], we examined the effects of PFD on tumor-stromal interaction in 3D co-culture assays of the mouse CAFs (see Supplementary Figure S2C) and 4T1 cells. We observed that CAFs enhanced tumor growth. Interestingly, PFD had little inhibitory effect on the 4T1 tumor without CAFs, but strongly inhibited the tumor growth induced by CAFs (Figure 4A). We then characterized the nature of the inhibition by immunofluorescence of tumor cells and CAFs in the 3D co-culture. Using phospho-histone H3 and cleaved caspase-3 antibodies, we observed that PFD induced apoptosis of both tumor cells and CAFs (Figure 4B), but did not inhibit tumor cell mitosis (Supplementary Figure S4). Since TGF-β is important for the tumor-stromal interaction [1,10,11,46] and PFD can inhibit TGF-β [33,41–45], we hypothesized that TGF-β inhibition by PFD is the mechanism of the suppressive interaction. We found that a specific TGF-β inhibitor, SB431542, inhibited the tumor growth induced by CAFs, but not growth without CAFs, in the same 3D co-culture assay (Figure 4C). These findings suggest that PFD inhibits TNBC growth by targeting TGF-β in the tumor-stromal interaction.

Figure 4 legend: Pirfenidone inhibits TNBC growth induced by CAFs. We cultured CAFs derived from TNBC xenograft tumors (HCI-001), and 3D co-culture assayed the CAFs and aggregated 4T1 cells (tumor clusters) in Matrigel. A. We examined the effects of PFD (triplicate). CAFs increased the tumor cluster size and PFD inhibited the size increase induced by CAFs (left panel). Also, CAFs increased the number of tumor clusters and PFD decreased the tumor cluster number increased by CAFs (right panel). *p<0.05 compared to tumor (0 μM). **p<0.05 or ***p<0.02 compared to 0 μM (tumor + CAFs). B. We conducted immunofluorescence of the 3D Matrigel cultures by using anti-pan-cytokeratin (pCK, green, left panel), anti-vimentin (Vim, red, middle panel) and cleaved caspase-3 (cCSP3, red in left and green in middle panels, respectively) antibodies. DAPI (blue) stained nuclei. 4T1 tumor cells expressed both pan-cytokeratin and vimentin, and CAFs expressed vimentin. Therefore, pan-cytokeratin+ cells were 4T1 cells, and vimentin+ cells were either 4T1 cells or CAFs. Double-negative cells were regarded as other cell types. We counted the numbers of those cells (except 4T1 tumor clusters) with cleaved caspase-3 (representative photographs in left and middle panels) and quantified the apoptotic cells (right panel). We found that PFD induced apoptosis of 4T1 tumor cells and CAFs. C. We examined the effects of a TGF-β inhibitor (SB431542) in the 3D co-culture assay (triplicate). SB431542 decreased the tumor cluster size (left panel) and the tumor cluster number (right panel). *p<0.05 compared to tumor (0 μM). **p<0.01 compared to 0 μM (tumor + CAFs).

PFD inhibits primary tumor growth and lung metastasis in combination with doxorubicin in TNBC mouse model

We next tested the effects of PFD on TNBC in vivo using the 4T1 mouse model. Prior to in vivo experiments, we verified that CAFs from 4T1 homograft tumors enhanced tumor growth, while PFD inhibited the tumor growth induced by the CAFs in a 3D co-culture assay (Supplementary Figure S5A). We then transplanted 4T1 cells and CAFs into the mammary glands of BALB/c mice. We administered PFD (50 mg/kg) or water orally two times per day.
We also tested the interaction of PFD treatment with chemotherapy by injecting doxorubicin (4 mg/kg) into the tail vein on days 0 and 19. While PFD alone did not inhibit primary tumor growth (Figure 5A), doxorubicin alone inhibited primary tumor growth, and PFD together with doxorubicin inhibited tumor growth synergistically (Figure 5A). PFD or doxorubicin alone did not reduce the number of lung metastatic tumors (Figure 5B). However, PFD in combination with doxorubicin inhibited lung metastasis significantly (Figure 5B). In a second experimental protocol, we administered an increased dose of PFD (100 mg/kg) two times per day in combination with doxorubicin and found an even more marked decrease in lung metastatic tumor numbers and weight compared to control (Supplementary Figure S5B). We next determined the effect of PFD inhibition of CAFs on apoptosis, collagen accumulation and phospho-SMAD3 expression in the primary tumors. Although treatment of the mice with PFD had no effect on apoptosis in α-SMA+ CAFs and tumor cells (data not shown), it decreased CAFs significantly (Supplementary Figure S5C). Treatment of the mice with doxorubicin enhanced collagen accumulation (Figure 5C). However, treatment of the mice with PFD or PFD plus doxorubicin inhibited collagen accumulation significantly (Figure 5C). Phospho-SMAD3 expression levels paralleled the collagen accumulation levels (Figure 5D). Therefore, we suggest that simultaneous inhibition of TGF-β by PFD along with doxorubicin chemotherapy may overcome the activation of TGF-β and enhance the therapeutic effects of TNBC treatment. Taken together, our findings indicate that downregulating the TGF-β signaling pathway with PFD in combination with doxorubicin can inhibit the tumor-stromal interaction and collagen accumulation and suppress TNBC progression.

DISCUSSION

The importance of the microenvironment for the response to cancer therapy is an emerging field.
While many studies have targeted angiogenesis and inflammation/immune function, several investigations have recently focused on the stromal collagenous extracellular matrix and CAFs as potential targets [27,47–58]. In this study, we showed that PFD inhibited tumor growth of TNBC in vitro by targeting CAFs. In vivo, PFD inhibited tumor growth and lung metastasis synergistically in combination with doxorubicin. We observed that mouse CAFs promoted tumor progression in vitro and in vivo, as previously reported [1,6–10]. This occurred both in PDX models in immunodeficient NOD/SCID mice and in 4T1 tumors in immunocompetent BALB/c mice. TGF-β signaling was activated both in the tumor cells and in the stroma of the xenograft tumors (Figure 1C). This is in keeping with previous studies showing that TGF-β signaling in fibroblasts is important to promote tumor growth [46]. Since 4T1 cells express high levels of TGF-β [59], it is likely that tumor-induced TGF-β promoted transformation of mouse fibroblasts into CAFs in vivo. This hypothesis is supported by our finding that PFD, which inhibits TGF-β, promoted cell death and suppressed collagen production in cultured CAFs, as seen previously in in vitro studies of fibroblasts [31–35]. Our 3D co-culture assays show that PFD strongly inhibited tumor growth promoted by CAFs but had little inhibitory effect on tumor growth without CAFs. These findings suggest that PFD inhibits TNBC growth more effectively by targeting the tumor-stromal interaction than by targeting the tumor itself. PDGF-A and HGF are also reported to be molecular targets of PFD for pancreatic tumor-stromal interactions [60]. However, we focused on TGF-β signaling in TNBC progression and found that CAFs activated the TGF-β signaling pathway and promoted tumor growth. While SB431542, a TGF-β antagonist, inhibited the tumor growth promoted by CAFs, it did not inhibit tumor growth without CAFs.
Taken together, these results suggest that the TGF-β pathway regulated by CAFs is a molecular target of PFD. PFD monotherapy at 50 mg/kg in mice (equivalent to the dose used in humans) inhibited the CAF number significantly and tumor fibrosis and TGF-β signaling strongly, but had no effect on tumor growth or lung metastasis. Since TGF-β inhibitors can suppress primary tumor growth and metastasis in vivo [59,61], it is possible that PFD monotherapy might inhibit cancer progression at a higher dose. Indeed, 500 mg/kg/day of PFD monotherapy suppresses the growth of pancreatic tumors transplanted with stellate cells orthotopically into mice [60]. Whether this effect occurs in other tumor types or at other doses has yet to be determined.

Figure 5 legend: PFD (50 mg/kg) or water was orally administered two times per day, and doxorubicin (4 mg/kg) or PBS was injected into the mouse tail vein on days 0 and 19. A. Representative photographs of primary tumors after the treatments (day 37) (left panel). Tumor volume (mm³) was measured by V = 0.52×W²×L, W = width (mm), L = length (mm). PFD alone had no effect on primary tumor growth, but inhibited the tumor growth synergistically in combination with doxorubicin. *p<0.05, **p<0.01, ***p<0.001 (right panel). B. Representative photographs of lungs after the treatments (day 37) (left panel). Visible lung metastatic tumor numbers in five lobes were counted. PFD decreased the lung metastatic tumor numbers in combination with doxorubicin, but PFD monotherapy did not decrease the numbers. *p<0.02 (middle panel). Total lung weight was measured. PFD decreased tumor weight in combination with doxorubicin, though PFD monotherapy did not decrease the weight. **p<0.002 (right panel). C. Representative photographs of primary tumors by picro-sirius red staining showed that doxorubicin enhanced and PFD inhibited collagen accumulation in primary tumors (left panel). Collagen deposition visualized by picro-sirius red staining was quantified by using ImageJ software. n=3, *p<0.01, **p<0.02, ***p<0.05 (right panel). D. Primary tumors were immunostained with an anti-phospho-SMAD3 antibody (red). DAPI (blue) stained nuclei (left panel). Expression levels of phospho-SMAD3 were quantified by using ImageJ software. Doxorubicin enhanced and PFD inhibited phospho-SMAD3 levels in primary tumors. n=4, *p<0.02, **p<0.01 (right panel).

Doxorubicin has antitumor activities through disruption of topoisomerase-II-mediated DNA repair and generation of free radicals [62], and also activates TGF-β signaling (Figure 5D and [61]). In clinical treatment for breast cancer, doxorubicin (60 mg/m²) is administered intravenously every 3 weeks for 4 cycles [63,64]. Since we observed that doxorubicin monotherapy inhibited primary tumor growth but not lung metastasis of TNBC, our data support the hypothesis that activation of TGF-β signaling by doxorubicin can lead to collagen accumulation but cannot suppress lung metastasis [21]. However, the combination therapy of doxorubicin and PFD inhibited primary tumor growth synergistically and lung metastasis significantly, as seen previously for doxorubicin in combination with a competitive TGF-β RI inhibitor [61]. We suggest that simultaneous inhibition of TGF-β by PFD, countering the TGF-β activation induced by doxorubicin while complementing its antitumor activities, may underlie the synergistic effects of the combination therapy. Interestingly, chemotherapy induces TGF-β signaling activation, and TGF-β inhibitors prevent the development of drug-resistant cancer stem-like cells in TNBC [65]. TGF-β promotes breast cancer cell outgrowth from dormancy in metastatic sites, and our PDX models implicate TGF-β signaling activation in metastasis-initiating cells [20,66], suggesting that combination therapy of doxorubicin and PFD may have additional inhibitory effects on metastasis-initiating cells.
Since the combination therapy of doxorubicin and PFD may have inhibitory effects on primary tumor growth and metastasis, it has great potential as a novel clinically applicable TNBC therapy that targets the tumor-stromal interaction.

MATERIALS AND METHODS

Mouse transplantation models

All animal protocols were reviewed and approved by the UCSF IACUC. Mice were maintained under pathogen-free conditions in the UCSF barrier facility. PDX tumor tissues from TNBC patients were acquired from the laboratory of A. Welm [19] and engrafted in the mammary glands of immunodeficient NOD/SCID mice (Charles River Laboratories). After the engrafted tumors grew, they were removed and cells were separated for CAF culture. We transplanted 4T1-GFP TNBC cells into cleared mammary fat pads of BALB/c mice (Simonsen Laboratories, Inc.). After three weeks, mammary tissues near the tumors were isolated, digested with collagenase I and IV and trypsin, and plated on dishes for culture. Cells grew out in 2 weeks, and GFP− cells (CAFs) were isolated by flow cytometry to remove the contaminating 4T1-GFP tumor cells. 4T1 cells (1×10^4) without or with the CAFs (2×10^4) were injected in a 10-μl volume of 1:1 v/v Matrigel:DMEM/F12 medium into the inguinal mammary glands of BALB/c mice. Two dose protocols were used as indicated: 50 mg/kg or 100 mg/kg pirfenidone (Cipla Pharmaceuticals Ltd., Pirfenex) was orally administered two times a day. 4 mg/kg doxorubicin (LC Laboratories) or PBS was injected into the mouse tail vein on days 0 and 19 in the first protocol and on days 1 and 23 in the second protocol. Tumor volumes (mm³) were calculated using the formula V = 0.52×W²×L, W = width (mm), L = length (mm).

Cell culture

Pirfenidone (Sigma-Aldrich #P2166) was used for in vitro experiments. Tumor specimens of breast cancer patients from UCSF Medical Center (courtesy of Dr. H. Rugo) and xenograft tumors were digested with collagenase I and IV and trypsin, and plated on dishes for CAF culture. CAFs were grown in ACL4 + 5% FBS medium [67].
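The tumor volume formula above is simple enough to encode directly. A minimal Python helper (our own illustration, not code from the study) reproduces the calculation:

```python
def tumor_volume(width_mm, length_mm):
    """Ellipsoid approximation used throughout the study:
    V = 0.52 x W^2 x L, with W the short caliper axis and
    L the long caliper axis, both in mm; volume in mm^3."""
    return 0.52 * width_mm ** 2 * length_mm

# e.g. a 5 mm x 10 mm tumor
v = tumor_volume(5, 10)  # 130.0 mm^3
```

Note that the short axis is squared, so consistent caliper orientation matters more than for a simple product of dimensions.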
The cells were stained with trypan blue to detect live and dead cells. MDA-MB-231 cells were obtained from the UCSF Cell Culture Facility. The murine TNBC 4T1 cell line was obtained from the American Type Culture Collection (ATCC) and labeled with GFP for transplantation. The cells were fixed with 4% PFA for immunocytochemistry.

Histology

Tumor and lung tissues were fixed in 4% PFA overnight and paraffin processed. We cut 5-μm sections from paraffin-embedded blocks. Standard hematoxylin and eosin (H&E) staining was performed for routine histology. Picro-sirius red staining was performed as previously described, and fibrillar collagen was visualized using crossed polarizers [54,68]. Immunohistochemistry was performed as described below.

Flow cytometry analysis and cell sorting

TNBC xenograft tumors were digested with collagenase. Organoids were collected by brief centrifugation and digested with trypsin to dissociate into single cells. The cells were stained with antibodies against CD49f and EpCAM (eBioscience) for flow cytometry as described previously [72]. Cell sorting was performed on a FACS Aria II (Becton Dickinson) and analysed using FACSDiva software (BD Biosciences).

Cell viability assay

Cell viability was measured using the CellTiter MTT assay according to the manufacturer's instructions (Promega). Sorted cells were plated in triplicate and incubated with PFD for 15 days, and attenuance at 590 nm was read on sequential days using a plate reader (Bio-Rad).

Lung metastasis analysis

To determine whether CAFs increased lung metastatic tumor frequency in vivo, lung tissue blocks were sectioned into 5-μm sections and stained with H&E. For each mouse analyzed, one section was scored for the number of metastases per lobe. To test the effects of PFD on lung metastasis in vivo, lungs were harvested from each mouse, the weight was measured, and the visible tumor number was counted.
Statistical analysis

Statistical analysis was conducted using Prism 4 software (GraphPad Software, Inc.). Statistical significance between two groups was calculated using Student's t test, and P values lower than 0.05 were considered significant.
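As a sketch of the two-group comparison described above (an equal-variance unpaired Student's t test, which is Prism's default), the t statistic can be computed with the Python standard library. Obtaining the p-value additionally requires the t-distribution CDF with n1+n2−2 degrees of freedom, which Prism handles internally; this illustration (ours, not the study's code) stops at the statistic:

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Two-sample Student's t statistic with pooled (equal) variance.
    a, b: sequences of measurements for the two groups."""
    na, nb = len(a), len(b)
    # pooled sample variance
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

For example, `students_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])` gives −1.0; significance at p < 0.05 would then be judged against the t distribution with 8 degrees of freedom.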
Anomalous Fluctuations of Extremes in Many-Particle Diffusion

In many-particle diffusions, particles that move the furthest and fastest can play an outsized role in physical phenomena. A theoretical understanding of the behavior of such extreme particles is nascent. A classical model, in the spirit of Einstein's treatment of single-particle diffusion, has each particle taking independent homogeneous random walks. This, however, neglects the fact that all particles diffuse in a common and often inhomogeneous environment that can affect their motion. A more sophisticated model treats this common environment as a space-time random biasing field which influences each particle's independent motion. While the bulk (or typical-particle) behavior of these two models has been found to match to a high degree, recent theoretical work of Barraquand, Corwin and Le Doussal on a one-dimensional exactly solvable version of this random environment model suggests that the extreme behavior is quite different between the two models. We transform these asymptotic (in system size and time) results into physically applicable predictions. Using high-precision numerical simulations we reconcile different asymptotic phases in a manner that matches numerics down to realistic system sizes, amenable to experimental confirmation. We characterize the behavior of extreme diffusion in the random environment model by the presence of a new phase with anomalous fluctuations related to the Kardar-Parisi-Zhang universality class and equation.

A viral or bacterial infection is spread by the first few pathogens to enter a host and the first host to enter a new region [9,10]. A species is evolved by the fittest mutations [11–13]. Scientific revolution is sparked by the first new idea. In all of these contexts the precipitating action is driven by the extremes among a great number of agents (varying from N ~ 10^2 to N ~ 10^60 depending on the context) evolving in a complex but shared environment.
How does the nature of the shared environment affect these outlier behaviors? Conversely, can we infer the nature of the shared environment from the behavior of these outliers? Despite their obvious importance, these overarching questions are still unanswered. The classical model of many-particle diffusion as independent homogeneous random walks provides an easily calculable solution, but entirely neglects the effects of the shared and likely inhomogeneous environment. This model is the basis for diffusion coefficients [14–16], which succinctly describe the behavior of typical particles in a many-particle diffusion. A more sophisticated model treats the shared environment as a space-time random biasing field with short-range space-time correlations. Each particle thus executes an independent random walk subject to forcing by the common biasing field. While this refined model does not affect typical-particle diffusion behavior [17], it drastically impacts the behavior of extreme particles. In this work, we provide predictions for the behavior of extreme particles moving in a random and inhomogeneous environment. We find that the variance in the position of the extreme particle is a robust and sensitive measurement of the nature of the environment and show how this variance can be understood as the sum of two contributions: the randomness present in the environment, and the sampling of random walks in that environment. We show that by subtracting out the variance due to sampling we can produce direct measurements of the environment, inaccessible from measurements of the motion of a typical particle or of the bulk. This residual environmental variance is characterized by a novel power law that we demonstrate holds even when the number of particles is as small as a few hundred.

FIG. 1. A system of N = 10^5 particles evolving in a given random environment. The heat map records the site occupancy density.
We also plot in green the asymptotic-theory mean location of the maximum particle. Around this is a shaded region with a width of two standard deviations based on the asymptotic-theory variance. This region generally contains the extreme-most particle over time. The zoomed-in inset shows the spatial locations of N = 10^2 particles over time. Color indicates the bias (red is biased down and blue is biased up) and is chosen independently at each space-time box. The location of particles within each box is chosen for ease of visualization.

Probing the effectiveness and limitations of Einstein's diffusion model has remained a challenge. On short time scales, particle motion is ballistic, dominated by inertia [27–32]. Many physically relevant situations require the addition of new concepts to accurately model them. Certain diffusive processes are better modeled by Lévy flights [33] or other types of anomalous diffusion [34,35] instead of simple random walks. Other work has focused on active particles, which inject energy into their environment [36,37]. Further, in environments which are slowly mixing, Einstein's theory may also break down due to the presence of quenched disorder [33,38]. Unlike the above deviations from the classical model, our approach is intended to describe generic many-particle diffusions.

Models for diffusion-Although physical diffusion is continuous in time and (typically) occurs in three-dimensional space, here we work with discrete models in one spatial dimension. The principal reason for this choice is that it is the setting for the exactly solvable Beta RWRE [61] (a continuous sticky Brownian motion limit of this model exists [74]) that will enable us to compare numerical results to exact theoretical predictions.
Beyond that, discretization is common for numerical simulations, and higher dimensions are more challenging numerically due to anisotropy issues arising from the choice of lattice and due to the lack of exactly solvable models, cf. [65]. In real diffusion in a common environment, there will be length and time scales on which the environment decorrelates. Our discrete model can be thought of as coarse-graining the environment in space and time onto a lattice, and thus we do not expect discrete and continuous models to differ greatly at long times and large scales. Our model ignores any higher-order interactions, as we expect them to be less relevant for the behavior of extreme particles, for which the local density is necessarily low. Additionally, there are physical settings where particles take discrete states [83,84] or evolve in quasi-one-dimensional spaces [85,86]. We study the Beta RWRE introduced in [61] (see Fig. 1). We model the environment by a collection B = {B(x, t) : x ∈ Z, t ∈ Z≥0} of independent identically distributed random variables, all drawn from the uniform distribution on [0, 1]. At time t = 0 we start with N particles all at site 0. Given an instance of the environment B, the particles proceed as follows. Each particle at x at time t independently flips a coin with the same site-specific weighting, which has probability B(x, t) of heads (moving the particle to site x + 1 at time t + 1) and 1 − B(x, t) of tails (moving to x − 1 instead). Thus, while particles do not interact with each other, those at the same place and time are all influenced by the common environment. This model is exactly solvable when the B(x, t) are distributed according to the Beta distribution, Beta(α, β) [61]. For simplicity, we focus on the special case α = β = 1 corresponding to the uniform distribution. The classical simple symmetric random walk (SSRW) model arises in the limit α = β → ∞, where all B(x, t) ≡ 1/2 and the environment is deterministic.
We focus on the behavior of the right-most particle at time t. We denote this by Max_t^N, with N the number of particles in the system. Two types of randomness affect Max_t^N: that of the environment and that of sampling the random walks in that environment. The effect of the environment is via the transition probability p_B(x, t), the probability that a single random walker initially at 0 will end up at x at time t for a given environment B. This satisfies the recursion relationship

p_B(x, t+1) = B(x−1, t) p_B(x−1, t) + (1 − B(x+1, t)) p_B(x+1, t),   (3)

with initial condition p_B(0, 0) = 1 and p_B(x ≠ 0, 0) = 0. Since each random walker is independent, conditional on the environment, the distribution of the ensemble of N walks is determined by p_B(x, t). Given the environment B, the probability that a single random walker is at or above x at time t is given by the tail probability P_B(x, t) = Σ_{y ≥ x} p_B(y, t). This and the independence of random walkers, conditional on the environment, imply

P(Max_t^N ≤ x | B) = (1 − P_B(x+1, t))^N,   (4)

where the left-hand side is the probability, given the environment B, that Max_t^N ≤ x. We study how Max_t^N varies upon sampling a new environment and random walkers therein. Eq. (4) suggests that a good proxy for Max_t^N is the location Env_t^N of the 1/N-quantile of P_B(x, t), i.e., Env_t^N equals the maximal x such that P_B(x, t) > 1/N. Notice that Env_t^N only accounts for the variation due to the environment. The variation due to sampling in that environment is denoted Sam_t^N and defined by Max_t^N = Env_t^N + Sam_t^N. We use the notation Mean(•) and Var(•) for the mean and variance of a quantity • (e.g. Max_t^N, Env_t^N, Sam_t^N) averaged over both the environment and the sampling of random walkers in that environment. Numerical Methods-We numerically simulate our models for system sizes varying from N = 10^2 to N = 10^300. We consider such large and physically unrealistic system sizes like 10^300 in order to see how the asymptotic theory applies for as wide a range as possible of finite system sizes.
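Eqs. (3) and (4) translate directly into a small dynamic program: propagate p_B(x, t) through one sampled environment and read off Env_t^N as the 1/N-quantile of the right tail. A minimal Python sketch of this computation (our own illustration, not the authors' code; variable names are our own):

```python
import random

def simulate_env(t_max, N, seed=0):
    """For one sampled uniform (Beta(1,1)) environment, evolve the
    transition probabilities p_B(x, t) via
        p_B(x, t+1) = B(x-1, t) p_B(x-1, t) + (1 - B(x+1, t)) p_B(x+1, t),
    then record Env_t^N = max{ x : P_B(x, t) > 1/N }, with P_B the
    right-tail probability.  Returns Env_t^N for t = 1..t_max."""
    rng = random.Random(seed)
    p = {0: 1.0}                        # p_B(., 0): all mass at the origin
    env = []
    for _ in range(t_max):
        B = {x: rng.random() for x in p}  # fresh i.i.d. biases on the support
        nxt = {}
        for x, px in p.items():           # push mass right w.p. B, left w.p. 1-B
            nxt[x + 1] = nxt.get(x + 1, 0.0) + B[x] * px
            nxt[x - 1] = nxt.get(x - 1, 0.0) + (1.0 - B[x]) * px
        p = nxt
        tail = 0.0                        # scan from the right for the 1/N-quantile
        for x in sorted(p, reverse=True):
            tail += p[x]
            if tail > 1.0 / N:
                env.append(x)
                break
    return env
```

Because the environment is i.i.d., drawing biases only on the currently occupied sites is equivalent to sampling the whole field up front.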
We evolve the system for times from t = 0 to t = 5000 log(N). As explained below, log(N) and (log(N))² set key timescales, and our range of times ensures that for all choices of N we encompass these scales. We simulate such large systems by tracking occupation variables instead of individual particle trajectories. In particular, if there are N(x, t) particles at site x at time t, then the number that move to site x+1 is binomially distributed with N(x, t) samples and success probability B(x, t) (the remainder move to site x−1). We sample these binomial distributions utilizing quadruple-precision floating point numbers and making approximations to the binomial distribution when dealing with sizes beyond our precision limits, as described in [87]. The rightmost particle location (identified by the maximal x with N(x, t) ≥ 1) at each time represents a sample of Max_t^N. By repeatedly sampling new environments along with random walk occupation variables N(x, t) therein, we numerically measure Var(Max_t^N). To distinguish it from the true value, we denote this numerically measured variance by Var_num(Max_t^N) and plot it in Fig. 2. In like fashion, we measure Var_num(Env_t^N) for each sampled environment by using Eq. (3) to compute p_B(x, t). Fig. 3 shows Var_num(Env_t^N) as a function of time (see [87] for Mean_num(Max_t^N) and Mean_num(Env_t^N)). The data presented in Fig. 2 and 3 took approximately three weeks to run in parallel on 500 cores of the University of Oregon high-performance computing cluster, Talapas.

Asymptotic theory-Given a relationship between t and log(N), such as t/log(N) = t̄ or t/(log(N))² = t̃ for t̄ or t̃ fixed, we write f(N, t) ≫ g(N, t) if f(N, t)/g(N, t) tends to infinity as N and t do, subject to their relationship. We use the notation Var_asy(•) to denote the asymptotic-theory formula for the variance of •, interpolated back to finite N and t.
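The occupation-variable scheme described above can be sketched in a few lines. Here we draw each binomial by explicit Bernoulli trials, which is adequate for small N; the paper's large-N runs instead use high-precision floats and binomial approximations. A Python sketch of our own (not the authors' code):

```python
import random

def simulate_max(t_max, N, seed=0):
    """Evolve occupation numbers N(x, t): at each occupied site draw the
    shared bias B(x, t) ~ Uniform[0, 1], then move a Binomial(N(x, t), B(x, t))
    number of particles to x+1 and the rest to x-1.
    Returns the rightmost occupied site Max_t^N for t = 1..t_max."""
    rng = random.Random(seed)
    occ = {0: N}                 # all N particles start at the origin
    maxima = []
    for _ in range(t_max):
        nxt = {}
        for x, n in occ.items():
            b = rng.random()     # one shared bias per space-time box
            # Binomial(n, b) via explicit Bernoulli trials (fine for small N)
            right = sum(rng.random() < b for _ in range(n))
            if right:
                nxt[x + 1] = nxt.get(x + 1, 0) + right
            if n - right:
                nxt[x - 1] = nxt.get(x - 1, 0) + (n - right)
        occ = nxt
        maxima.append(max(occ))  # maximal x with N(x, t) >= 1
    return maxima
```

Averaging `maxima` over many seeds (fresh environment and fresh walkers each time) gives a numerical estimate of Var(Max_t^N) in the spirit of Var_num.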
SSRW theory follows from Stirling's formula, while asymptotic results for the RWRE rely on tools from quantum integrable systems [61,66,78] and are derived first for Env_t^N and then for Max_t^N and Sam_t^N.

SSRW Max_t^N: For t/log(N) = t̄ with fixed t̄ < (log 2)^{-1}, we have N ≫ 2^t and hence with very high probability every reachable site in the lattice at time t is occupied, hence Var(Max_t^N) ≈ 0. When t̄ > (log 2)^{-1}, we show in [87] that Max_t^N is asymptotically a Gumbel random variable. For t̄ large, Var_asy(Max_t^N) ≈ (π²/12) t/log(N).

RWRE Env_t^N: For t/log(N) = t̄ with fixed t̄ < 1, Var(Env_t^N) ≈ 0. To see this, note that P_B(t, t) = B(0,0) · · · B(t−1, t−1). Taking logs and applying the central limit theorem shows that log P_B(t, t) ≈ −t + t^{1/2} G for G a standard Gaussian. This implies that P_B(t, t) ≈ e^{−t} ≫ 1/N. Thus the RWRE stops saturating the lattice when t = log(N) plus an order-(log(N))^{1/2} Gaussian fluctuation. For the SSRW this happens at time log_2(N) plus order-one fluctuations. Var(Env_t^N) displays two asymptotic regimes. For fixed t/log(N) = t̄ > 1, Var(Env_t^N) takes an asymptotic form growing as t^{1/3}, where σ²_χ ≈ 0.813, the variance of the GUE Tracy-Widom distribution [64,88], enters as a prefactor. As shown in [87], this follows from the result of [61], in which the scale involves (1 − I(v))^{1/3} and χ_t is a random variable converging to the GUE Tracy-Widom distribution as t goes to infinity. For t/(log(N))² = t̃, Var(Env_t^N) takes an asymptotic form growing as t^{1/2}, where h(0, s) is the height at 0 and time s of the narrow-wedge solution to the KPZ equation (2). As shown in [87], this follows from [66]. Interpolating between these regimes, and extrapolating past (log(N))² (see also [78]), we find two power-laws; the crossover (with erf(x) = (2/√π) ∫_0^x e^{−s²} ds the error function) interpolates from 1 to 0 over an interval of width (log(N))^{4/3} around (log(N))^{3/2}.

RWRE Sam_t^N and Max_t^N: We identify the additional contribution from sampling the many-particle diffusion given an environment. Using Eq.
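For the SSRW, the Gumbel picture can be checked exactly at finite N and t: the binomial tail gives P(Max_t^N ≤ x) in closed form via Eq. (4), from which Var(Max_t^N) follows and can be compared to the (π²/12) t/log(N) asymptotic. A Python sketch (our own illustration; lattice-discreteness effects mean only order-of-magnitude agreement should be expected at moderate t):

```python
import math

def ssrw_max_variance(t, N):
    """Exact Var(Max_t^N) for N independent simple symmetric random walks:
    P(Max <= x) = (1 - T(x'))^N, with T the right tail of S_t and
    x' the next reachable lattice site above x."""
    # binomial pmf of S_t on sites x = 2k - t, k = 0..t
    pmf = {2 * k - t: math.comb(t, k) * 0.5 ** t for k in range(t + 1)}
    xs = sorted(pmf)
    tail, s = {}, 0.0
    for x in reversed(xs):               # right tails T(x) = P(S_t >= x)
        s += pmf[x]
        tail[x] = s
    cdf_prev, mean, mom2 = 0.0, 0.0, 0.0
    for i, x in enumerate(xs):
        t_next = tail[xs[i + 1]] if i + 1 < len(xs) else 0.0
        cdf = max(0.0, 1.0 - t_next) ** N   # P(all N walkers <= x)
        p = cdf - cdf_prev                  # P(Max = x)
        cdf_prev = cdf
        mean += p * x
        mom2 += p * x * x
    return mom2 - mean * mean
```

For instance, at t = 100 and N = 10^4 the asymptotic value (π²/12)·100/log(10^4) ≈ 8.9, and the exact finite-size variance comes out of the same order.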
(4) and a Taylor expansion of the results of [61] and [66] quoted above, [87] shows that for t/log(N) = t̄ > 1 the sample fluctuation Sam^N_t is of Gumbel type, with a variance that, as t̄ grows, matches the behavior of the SSRW model. In [87] we also show that Sam^N_t is asymptotically independent of Env^N_t, thus Var Max^N_t ≈ Var Env^N_t + Var Sam^N_t (Eq. 9). Fig. 2 and 3 show that the asymptotic theoretical predictions for Var Max^N_t and Var Env^N_t are in excellent agreement with the numerical measurements. Fig. 3 further shows that we reliably recover Var Env^N_t using Var_num Max^N_t − Var_asy Sam^N_t, as expected from Eq. 9. Notably, while these results were derived for asymptotically large log(N) and t, they hold nearly perfectly down to N = 10^2. Fig. 3 reveals that while we readily see the long-time t^{1/2} power-law for Var Env^N_t from Eq. 7, the t^{1/3} power-law is elusive. Although the full characterization of the short-time regime is in excellent agreement with the numerical results, the t^{1/3} power-law is difficult to capture since the transitional window from log(N) to (log(N))^2 is too narrow for realistic sizes of N, even up to N = 10^300. By measuring the long-time t^{1/2} power-law, we measure the short-time scaling behavior of the KPZ equation up to a prefactor using Eq. 6. Fig. 4 shows the tight matching of the asymptotic theory curves and numerically measured values for the variance of Max^N_t, Env^N_t and Sam^N_t for a given value of N = 10^7. Notice that for t ≈ log(N) the asymptotic theory and numerical values for the variance of Sam^N_t do not fit as well as for large t. This is likely a result of finite-size effects and quickly goes away at larger values of t or when N increases. The fit for N = 10^300 in Figs. 2 and 3 remains tight over the entire range of t.
Conclusion-The link between RWREs and KPZ universality, with its wealth of theoretical, numerical and experimental evidence, strongly suggests that aspects of the picture presented here will persist beyond discrete and solvable models, even to experiments. When t is of order log(N), variances should be non-universal, depending in a difficult-to-determine way on the nature of the environment. By contrast, when t ≫ log(N), we anticipate that the scaling exponents and functional forms we have identified for the variances of Env^N_t, Sam^N_t and Max^N_t will be universal, as will the relation (9). The leading coefficients in Eq. (7) should be non-universal and hold within them all of the accessible information about the correlation structure of the environment - we call these extreme diffusion coefficients. Further theoretical study, such as for the general α, β Beta RWRE model, should provide a natural first test of this universal picture and an understanding of how the extreme diffusion coefficients relate to the microscopic environment. A continuum model that should provide an even wider testing-ground amenable to numerics involves particles x_i(t) for i = 1, 2, . . . satisfying dx_i(t) = F(x_i(t), t)dt + D(x_i(t), t)dB_i(t), where F(x, t) and D(x, t) are random forcing (as in [65]) and diffusivity (generalizing diffusing diffusivity, c.f. [89]) fields common to all particles, while the B_i are Brownian motions independent between different i. Changing the correlation structures of F and D will probe the transition between temporally mixing versus quenched environments, which should have very different behavior (c.f. [90,91]) and warrants further study. Considering higher dimensions as in [65] may lead to further theory that better models real physical systems. A study of higher order cumulants may reveal other ways to probe the hidden environment, although they may be harder to observe numerically or experimentally.
In physical systems it is impossible to directly measure the environmental variance. However, an indirect measurement can be performed via the approach presented here by using Var Env^N_t ≈ Var Max^N_t − Var Sam^N_t. The sample variance Var Sam^N_t is now computed using Var Sam^N_t = (π^2 D/6) · t/log(N), where D is the diffusion coefficient. One could repeatedly track the motion of the leading edge of diffusing particles in a system of colloids confined to a quasi-1D channel, thereby directly measuring Var Max^N_t for system sizes ranging from N ≈ 10^2 to N ≈ 10^10. Further, one can also perform complementary measurements on the time of first passage of diffusing objects, which opens the door to experiments done on all manner of diffusing objects, including light or sound diffusing through a scattering medium, dye molecules in a fluid, or any other object whose first passage can be measured. By measuring the environmental variance and extreme diffusion coefficient we will gain a new microscope through which to probe the hidden nature of the underlying environment in which the diffusion occurs. Our work should serve as a guide in the development and analysis of novel experimental measurements of the extreme behavior of many-particle diffusion. Acknowledgements-We thank G. Barraquand.
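The indirect-measurement recipe in this closing paragraph reduces to simple arithmetic; a minimal sketch, where the function names and the default D = 1 are our own illustrative choices:

```python
import math

def var_sam(t, N, D=1.0):
    # Sampling variance given the environment (formula quoted in the text):
    # Var Sam^N_t = (pi^2 D / 6) * t / log(N)
    return (math.pi ** 2) * D / 6.0 * t / math.log(N)

def var_env_estimate(var_max_measured, t, N, D=1.0):
    # Indirect estimate: Var Env^N_t ~= Var Max^N_t - Var Sam^N_t
    return var_max_measured - var_sam(t, N, D)

# Hypothetical numbers: a measured Var Max of 5.0 at t = 6 log(N), N = e^6.
example = var_env_estimate(5.0, 6.0, math.exp(6.0))
```

Given a time series of measured Var Max^N_t, applying `var_env_estimate` pointwise yields the environmental variance curve whose power-laws carry the extreme diffusion coefficients.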
Phytochemical Studies of Fractions and Compounds Present in Vernonanthura Patens with Antifungal Bioactivity and Potential as Antineoplastic Introduction Phytochemical research is closely related to the need to find new and effective pharmaceuticals. Searching for plant substances capable of being used to develop new therapeutic drugs against recognized catastrophic illnesses such as cancer, diabetes and AIDS is one of the main topics on which researchers around the world have been focusing. The wonderful plant diversity of South America, and more specifically of the Amazon region, holds around 30-50% of the world's biodiversity and is therefore an important source for this type of study. Besides the significant undiscovered resources of these regions, the ancestral knowledge of indigenous peoples is another relevant and complementary source for biodiscovery programs. Traditional healers guard centuries of accumulated knowledge about the natural medicinal resources of this region. These ancient "physicians" hold the key to discovering new drugs that could benefit millions of people around the world.
The Amazon forest has contributed dozens of substances to western medicine. Among the best known are "curare", a key component of modern anesthetics, and quinine, the first contribution of "natural medicine" to treat malaria 1 . The study of new plant species and the structural elucidation of their bioactive molecules are the most important aims of phytochemical research, which is in constant technological development. Initial phytochemical screening and the further isolation, purification and structural identification of molecules have made major breakthroughs with the development of new methods of chromatography and spectroscopy. The establishment of new and more effective bioassays is also one of the essential aspects that support biodiscovery programs today. This chapter contains the main results of the phytochemical study of Vernonanthura patens leaves, which, according to ancestral knowledge, have been used to treat different diseases in humans. Botanical classification, general characteristics and ethnobotanical knowledge on Vernonanthura patens Vernonanthura patens is a wild plant broadly distributed throughout America. It grows from 0 to 2200 meters above sea level in the Ecuadorian coastal region. Folk medicine uses its cooked leaves to combat malaria, in postpartum treatment and for healing infected wounds of animals by washing with a plant mixture which includes V. patens leaves (Blair, 2005). It is also used against headaches and to clean and heal wounds (Kvist et al., 2006); in the treatment of leishmaniasis (Gachet et al., 2010); in the preparation of antivenom (Tene et al., 2007); and as a poultice of leaves to combat athlete's foot (Valadeau et al., 2009). Its usefulness for treating certain types of cancer has also been referred to by indigenous healers. There are, however, few chemical studies of this species. Vernonanthura patens (Kunth) H. Rob. botanical classification and general characteristics Species V.
patens belongs to the Asteraceae family, quoting 60 synonyms and one basionym (Vernonia patens Kunth) (ARS-GRIN, 2009). Referred to as Vernonia patens HBK in the list of lignocellulosic species investigated in Ecuador, it is a source of raw material for pulping and papermaking (Acuña, 2000). It is also commercially important in the beekeeping industry, and is ranked as one of the most important honeybee plants of Tundo, Olmedo and Loja (Camacho, 2001) for its excellent production and availability of nectar and pollen (Ramirez et al., 2001). In the Ecuadorian province of Zamora it is one of four ecologically important species belonging to the typical families of disturbed forests that are being regenerated (Camacho, 2001; REMACH, 2004). It is now registered as a representative tree species of secondary forests in the Ecuadorian coastal zone (Aguirre, 2001). The species has the following synonyms (Blair, 2005): Habitat V. patens grows wild in the inter-Andean forest located in the south of Ecuador; its maximum height is 3-6 meters and its altitudinal distribution is between 0 and 2000 meters above sea level (Tobías, 1996; León, 2006). This species has been identified in the vegetal community of dry forests in the south-west of Ecuador 3 . It is sometimes grown or kept in farms after its spontaneous appearance. Generally it can be found near forest trails and on the edges of rivers. Flowering and fruiting occur between May and October. Botanical information V. patens (Figure 1) is a small branched shrub, growing up to six meters high, with furrowed stems and ferruginous trichomes. The alternate leaves are petiolate and narrowly lanceolate, the petiole tomentose with ferruginous trichomes, 4-11 mm long; the leaves are entire or weakly serrate, with a rounded base and a sharp or acuminate apex, 7-15 cm long and 1.3-1.2 cm wide; the adaxial surface is bright and the abaxial pubescent or puberulent, subcoriaceous, penninerved.
The inflorescence is paniculate, terminal, extensively branched with scorpioid endings, provided with leaves and bracts; the capitula are sessile and very shortly pedicellate, with numerous bell-shaped flowers, 8 mm long, 4-5 series of imbricated bracts, tomentose and of dark brown color; the corolla is glabrous, about 5 mm long, with weakly pubescent achenes and layered pappus hairs with irregularly shaped edges about 7 mm long. A detailed description of the botanical characteristics of this species has been published by Blair (2005). Ethnomedical information In Ecuador the inhabitants of the south-west of Loja and the Marcabelí region of El Oro province recognize both its healing power and its analgesic action. They use the leaves of V. patens to wash wounds and to relieve headaches. It is also employed as an anti-inflammatory, to soothe coughs and against certain types of cancers. In addition, a veterinary practice is described in which infected wounds are healed by washing with a mixture of plants that includes leaves from this species (Blair, 2005). Other interesting uses have also been reported. Gachet et al. (2010) reported its usefulness for leishmaniasis treatment; Tene et al. (2007) indicated its use in the preparation of antivenom; and the use of "laritaco" leaves in poultices to combat athlete's foot is referred to by Valadeau et al. (2009). Different uses of V. patens have been registered in other South American countries. In the Bolivian community of Tacana, the juice of the plant stem is applied against conjunctivitis (Tacana, 1999), and in Colombia watery brews of the aerial parts mixed with "panela" 4 , white wine and rosemary are used against malaria. It is also used to relieve labor pain and as a purge (Blair, 2005). Biological and chemical activity There are very few biological and chemical studies of the species V. patens.
The only results published so far refer to antimalarial activity against Plasmodium falciparum, Itg2 strain (Blair, 2005), anti-Leishmania activity of the leaves of this species (Valadeau et al., 2009) and a lack of antiprotozoal activity against different strains of Leishmania (Fournet, 1994). Regarding the chemical composition of the species, there are reports of sesquiterpene lactones and sesquiterpenes present in the aerial parts (Mabry, 1975; Jakupovic, 1986). There are some references on the genus Vernonanthura that show the presence of diterpene compounds (Portillo et al., 2005; Valadeau et al., 2009), flavonoids (Borkosky et al., 2009; Mendonça et al., 2009), triterpenes (Tolstikova et al., 2006; Gallo et al., 2009), saponins (Borkosky et al., 2009) and sesquiterpene lactones. In addition, different biological activities have been described, assuming that certain chemical groups could be responsible for the therapeutic properties attributed to species of this genus (Pollora et al., 2003, 2004; Portillo et al., 2005; Bardon et al., 2007). These were the main factors that led the Laboratorio Bioproductos, Centro de Investigaciones Biotecnológicas del Ecuador, to undertake a chemical-pharmacological study of the leaves of Vernonanthura patens plants growing in Ecuadorian areas. Such investigations are part of the Biodiscovery Program developed by this center. Phytochemical screening As an initial step of research, phytochemical screening allows the main groups of chemical constituents present in a plant to be determined qualitatively. This screening can guide the subsequent extraction and/or fractionation of extracts for the isolation of groups of interest. The phytochemical screening routine is performed by extraction with suitable solvents of increasing polarity and the application of color reactions (Miranda & Cuellar, 2001).
These reactions are characterized by their selectivity for types or groups of compounds, their simplicity and speed, and their capacity to detect small amounts of compounds with minimal laboratory equipment. The results are recorded by the presence (+) or absence (-) of the color reactions. The general outline of the steps followed in performing the phytochemical screening of V. patens leaves is presented in Figure 2, while the analysis of the extracts obtained at different polarities is schematically shown in Figure 3. This methodology has been described previously (Miranda & Cuellar, 2000; Manzano et al., 2009). The plant material consisted of adult leaves of Vernonanthura patens (laritaco) taken from plants in the vegetative state growing in the citadels "July 25", "Imbabura" and "June 24", all belonging to the Canton Marcabelí, province of El Oro, Ecuador. Leaves were collected in the early morning on different dates during the months of December to February in 2009 and 2010. Botanical identification was performed, and voucher specimens of the herbs were prepared and deposited at the National Herbarium of Ecuador (QCNE); a duplicate sample (CIBE37) was kept as a voucher in the laboratory of CIBE-ESPOL Bioproducts.
Fig. 3. Chemical reactions carried out on each type of V. patens leaf extract obtained using solvents of different polarity.
Prior consent was obtained and authorized by the corresponding agencies of the government. The fieldwork and data collection were conducted in accordance with institutional, national and international principles and guidelines for using and conserving plant biodiversity. For the phytochemical screening, extraction and fractionation, leaf samples were dried using an automatic dryer (45 °C, 8 hours), then pulverized in a blender and sieved.
The fraction that remained in the sieve of 2 mm diameter was collected and kept in low-density polyethylene bags at 24 °C. The results of the phytochemical screening are presented in Table 2. They reveal moderate to low concentrations of essential oils, alkaloids, reducing compounds, phenols, tannins, flavonoids, quinones, saponins, triterpenes and steroids. Some of these chemical compounds have been associated with antibacterial, antifungal, antiprotozoal and cytotoxic properties and thus have a potential therapeutic use (Nweze et al., 2004; Reuben et al., 2008; Vital et al., 2010). Plant extracts, fractions and compounds The dry plant material (67 g of V. patens leaves) was subjected to successive extractions with HPLC-grade methanol by maceration in a closed container in the absence of light. The extraction time was eight days and extraction was conducted until total depletion of the plant material; an agitator and a rotary evaporator were used for solvent recovery. The extract was evaporated to dryness, yielding 7 g (10.44%) of methanol extract. The methanol residue was subjected to fractionation by successive column chromatography (CC) on columns packed with activated silica gel (60-200 mesh); elution was performed with solvents of increasing polarity using mixtures of hexane and ethyl acetate (10:0, 9:1, 8:2, 3:7, 0:10) (Table 3). The extracts were analyzed by thin layer chromatography (TLC) on silica gel 60 F254 plates (Merck) with fluorescent indicator and a hexane/ethyl acetate (9:1) solvent system. Plates were observed under UV light at 254 and 366 nm wavelengths. The fractions isolated from the methanol extract of V. patens leaves by column chromatography with different solvents have not previously been reported for this species; the highest mass was obtained in the hexane fraction (79 mg) compared with the other fractions. Nevertheless, methanol, ethyl acetate and hexane extracts from other plant species have shown relevant antimicrobial activity (Ramya et al., 2008).
Bioassays Assays for screening the bioactivity of natural products have had an impressive history of development and are one of the keys to discovering new natural bioactive compounds. In this study, a qualitative preliminary evaluation of the antifungal capacity of the isolated fractions and pure compounds was conducted in order to select the most active ones. Those selected were re-evaluated to quantify their ability to inhibit fungal growth. The diffusion method (Avello et al., 2009) in potato dextrose agar (PDA) was used to determine the antifungal activity of fractions and pure compounds isolated from V. patens leaves at 100 and 200 µg mL⁻¹. Dilutions were made with 10% dimethylsulfoxide (DMSO). Strains of Fusarium oxysporum and Penicillium notatum, isolated from infected Pinus radiata and Citrus sinensis fruits and maintained in the Collection of Fungi at the University of Concepcion, were used. Holes of 5 mm Ø were made in the agar with a sterile cork borer and filled with 20 µL of each concentration of the fractions and pure compounds. DMSO 10% was used as a negative control in each plate. A disc (5 mm Ø) of already-grown fungus was placed in the center of each Petri dish and incubated at 22 °C. Evaluations were made over two weeks. The experimental design was completely randomized and each assay was performed in triplicate. Descriptive statistics of the experimental data were computed in order to represent and point out their most important features. The most relevant antifungal activity was observed in fraction 1 (100% hexane) and pure compounds 1 and 3 at both concentrations tested. The hexane fraction inhibited the growth of both fungal species tested. The highest inhibition exerted against Penicillium notatum (80.2%) and Fusarium oxysporum (81.5%) occurred when using 200 µg mL⁻¹ of this fraction. Statistical differences (P ≤ 0.05) with the negative controls indicated that DMSO did not influence the results of the biological evaluation.
The pure compounds showed selective inhibition properties and a certain concentration dependence in their antifungal activity. Compound 1 showed inhibition rates of 50 and 90% (at 100 and 200 µg mL⁻¹, respectively) against Penicillium notatum, while compound 3 inhibited 80 and 100% of Fusarium oxysporum growth at the respective concentrations. Screening for antifungal activity of fractions and pure compounds of V. patens has been conducted for the first time, and the potential of these results is relevant. Chemical characterization of the fraction with antifungal activity The isolated fraction with antifungal activity was analyzed for structural identification by gas chromatography-mass spectrometry (GC-MS) using an Agilent 7890A gas chromatograph with an Agilent 5975 detector (Avondale, PA, USA) equipped with an HP-5MS column 5 m long (0.25 mm in diameter and 0.25 cm inside diameter). Helium was used as the carrier gas; the analytical conditions were: initial temperature 100 °C (increasing 8 °C per minute to a final temperature of 250 °C); inlet and mass detector temperatures 250 °C and 300 °C, respectively. The mass detector was used in scan mode with a range of 100 to 400 amu. Under this technique and the analytical conditions described, the chromatogram shown in Figure 8 was obtained. Using the spectral library and considering those compounds that exceeded 90% confidence, structures could be assigned to 33 components (Table 4). The compounds identified are mostly hydrocarbons, a logical result given the solvent used. There was a relative abundance of possible bicyclic sesquiterpenes (peaks 1-5) and of the acyclic triterpene squalene (peak 30).
For the sesquiterpenes there are antecedents of antimicrobial activity (Gregori et al., 2005), and for squalene there are reports of antioxidant, antitumor and antimicrobial activities, in addition to its beneficial effect in preventing cardiovascular diseases by reducing cholesterol and triglycerides (Garcia et al., 2010). For this reason, it is possible to hypothesize that the antifungal activity of V. patens against F. oxysporum and P. notatum determined here could be directly related to the presence of squalene, despite it not being the main component of the fraction tested. The remaining compounds, individually or collectively, could also be involved in the bioactivity demonstrated. The results described here have not been reported previously for V. patens. Structural identification of isolated compounds The structures of the three compounds isolated from the hexane-soluble fraction by column chromatography were identified from their spectroscopic patterns by comparison with references. These pure compounds were identified as Lupeol (compound 1), Acetyl Lupeol (compound 2) and Epi Lupeol (compound 3) (Figure 9). Spectroscopy was performed in the Laboratory of Organic Chemistry at the University of Lund. 1H NMR (500 MHz) and 13C NMR (125 MHz) spectra were recorded at room temperature with a Bruker DRX500 spectrometer with an inverse multinuclear 5 mm probe head equipped with a shielded gradient coil. The spectra were recorded in CDCl3, and the solvent signals (7.26 and 77.0 ppm, respectively) were used as reference. The chemical shifts (δ) are given in ppm and the coupling constants (J) in Hz. COSY, HMQC and HMBC experiments were recorded with gradient enhancements using sine-shaped gradient pulses. For the 2D heteronuclear correlation spectroscopy the refocusing delays were optimized for 1J_CH = 145 Hz and nJ_CH = 10 Hz. The raw data were transformed and the spectra were evaluated with the standard Bruker XWIN-NMR software (rev. 010101).
The results shown in this chapter are unpublished and have not been previously registered for the species V. patens. Nevertheless, the elucidated structures of the pure compounds have been found in other plant species and are recognized for their diverse biological activity, which includes antineoplastic action against certain types of cancer (Gallo & Sarachine, 2009). Concluding remarks Phytochemical screening of V. patens has shown the presence of essential oils, alkaloids, reducing compounds, phenols, tannins, flavonoids, quinones, saponins, triterpenes and steroids, some of which have been previously associated with important biological activities. Fractions and pure compounds of this species were screened for the first time for antifungal activity. The hexane fraction and two pure compounds, further identified as Lupeol and Epilupeol, were active against two important fungal pathogens at high rates (80-100%). The hexane fraction reduced the growth of Fusarium oxysporum by 80% and Epilupeol completely inhibited Fusarium oxysporum growth. Thirty-three chemical compounds were determined in the hexane fraction from V. patens leaves, of which most are hydrocarbons. The antifungal activity of this fraction can be related to the presence of squalene and/or the combined activity of other identified compounds. Further research must be done to determine the specific bioactivity of the identified compounds. The chemical structures of three isolated compounds were elucidated, corresponding to Lupeol, Acetyl Lupeol and Epi Lupeol. These compounds are recognized for their significant and diverse biological activities, including antimicrobial and antineoplastic actions. The results of this study show that V. patens can be considered an important potential candidate for further chemical and biological research and justify its inclusion in the biodiscovery program of CIBE. Acknowledgements This study was supported by grants from SENESCYT and ESPOL (Ecuador). References Acuña O. (2000).
Valoración de las características físico químicas de especies lignocelulósicas y subproductos agroindustriales en la obtención de pulpa y elaboración
Achieving vibrational energies of diatomic systems with high quality by machine learning improved DFT method When using ab initio methods to obtain high-quality quantum behavior of molecules, much trial-and-error work in algorithm design and parameter selection is often involved, which requires enormous time and computational resource costs. In the study of the vibrational energies of diatomic molecules, we found that, starting from a low-precision DFT model and then correcting the errors using the high-dimensional function modeling capabilities of machine learning, one can considerably reduce the computational burden and improve the prediction accuracy. Data-driven machine learning is able to capture subtle physical information that is missing from DFT approaches. The results for 12C16O, 24MgO and Na35Cl show that, compared with the CCSD(T)/cc-pV5Z calculation, this work improves the prediction accuracy by more than one order of magnitude and reduces the computation cost by more than one order of magnitude. Introduction Diatomic molecules and their corresponding energy spectra are widely used in astrophysics, ultracold molecules, fundamental physical constants and so on. [1][2][3][4][5] Various experimental techniques have been developed for high-precision spectral measurement, such as velocity modulation laser spectroscopy (VMS), 6 noise-immune cavity-enhanced optical heterodyne molecular spectroscopy (NICE-OHMS), 7 laser-induced breakdown spectroscopy (LIBS) 8 and so forth. However, limited by experimental conditions, generally only the part of the energy levels corresponding to lower quantum numbers can be measured accurately. On the theoretical side, there are two main options: (1) ab initio methods based on the principles of quantum mechanics, such as Hartree-Fock and its extensions, DFT, etc.
[9][10][11][12][13] The post-Hartree-Fock methods, like multireference configuration interaction, have decent accuracy, whereas their steep computational cost and lengthy run times limit them to small systems. 12,14,15 The DFT method is a compromise in accuracy, but its rapid calculation speed makes it the first choice for the calculation of large systems. [16][17][18] In order to improve the performance of DFT, both general and accurate exchange-correlation functionals and basis sets are required, 16,[18][19][20] which is still a challenge. 18,21,22 (2) Data-driven algorithms, such as empirical potential energy functions and direct parameter formulas for energy levels. 23,24 The accuracy of data-driven algorithms is higher than that of ab initio methods, except that they are only applicable to molecular systems for which superior-quality experimental data exist. Recently, machine learning algorithms have yielded many achievements in spectroscopy. [25][26][27] In this work, a machine learning algorithm was applied to improve the performance of DFT in the study of diatomic vibrational spectra. To obtain the best prediction, three widely used machine learning regression algorithms were tested with H35Cl as an example. Then, the best performing algorithm was used to predict the vibrational energy levels of 12C16O, 24MgO and Na35Cl. DFT for vibrational energies To obtain the vibrational energy spectrum of diatomic molecular systems, it is inevitable to solve the Schrödinger equation HΨ(r, R) = EΨ(r, R) (r and R denote the electronic and the nuclear coordinates, respectively). When the rotational quantum number J = 0, the rovibrational energy E_nJ degenerates to G_n, namely the vibrational energy level. 30 A conclusion is drawn that, to obtain the vibrational energy levels of molecules, a large number of approximations need to be included, and the various approximations affect each other. Error is often unavoidable and difficult to predict in advance.
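As a concrete illustration of the vibrational-level problem that programs such as LEVEL solve, the sketch below diagonalizes the J = 0 radial Schrödinger equation on a grid for a Morse potential; the units (ħ = 2μ = 1) and Morse parameters are illustrative choices of ours, not fitted to any molecule in this work:

```python
import numpy as np

# Morse potential V(r) = De * (1 - exp(-a (r - re)))^2 in units hbar = 2*mu = 1.
De, a, re = 10.0, 1.0, 2.0

r = np.linspace(0.5, 12.0, 1500)   # radial grid
h = r[1] - r[0]
V = De * (1.0 - np.exp(-a * (r - re))) ** 2

# Finite-difference Hamiltonian H = -d^2/dr^2 + V(r), a tridiagonal matrix.
H = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(len(r) - 1) / h**2, 1)
     + np.diag(-np.ones(len(r) - 1) / h**2, -1))
G = np.linalg.eigvalsh(H)          # vibrational levels G_n (J = 0)

# Analytic Morse levels in these units: G_n = 2 a sqrt(De)(n+1/2) - a^2 (n+1/2)^2
n = np.arange(3)
G_exact = 2.0 * a * np.sqrt(De) * (n + 0.5) - a**2 * (n + 0.5) ** 2
```

The lowest grid eigenvalues agree with the analytic Morse levels to the accuracy of the second-order finite-difference scheme, which is the sense in which a numerically exact solver turns a given potential curve into vibrational energies.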
Combining machine learning algorithms and DFT 2.2.1 Machine learning algorithm. Among a variety of machine learning (ML) regression algorithms, artificial neural networks (ANN), 31,32 random forests (RF) 33,34 and extreme gradient boosting (XGBoost) 35 are widely used and usually found successful. All three algorithms were tested in this work for vibrational energy prediction, and the results are compared in Fig. 1. The absolute error is the predicted value minus the experimental value. Clearly, the ANN performs the best; thus, it is the algorithm used in this work. An ANN consists of an input layer of neurons, followed by a number of hidden layers (two, three or more layers are all fine), and a final layer of output neurons. Neurons are connected by weights V_ij. Given the input x_j, the output h_i of neuron i is h_i = s(Σ_{j=1..N} V_ij x_j + T_i^hid), where s(·) is called the activation function, N is the number of input neurons, and T_i^hid is the threshold term of the neuron. 31 It is worth noting that the activation function not only introduces nonlinearity into the neural network, but also constrains the values of the neurons to prevent the ANN from being paralyzed by divergent neurons. A common example of the activation function is the sigmoid function, 31 defined as s(x) = 1/(1 + e^(−x)). The architecture is shown in Fig. 2. When an ANN is used to establish a high-dimensional functional relationship between input and output variables, the data samples are divided into three groups, namely the training set, the validation set, and the test set. For convenience, the training set and the validation set together are also called the sub-sample set. Similar to how humans learn through feedback, the neural network obtains training errors from its performance on the training set. Then, the weights between connected neurons are adjusted for learning, which reduces the training error. The performance of the ANN on the validation set is tracked during the learning process, and the model that performs the best is selected as the chosen model.
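The neuron-output and sigmoid formulas above can be written out directly; a minimal NumPy sketch of a forward pass, where the layer sizes and random weights are illustrative and the "+ threshold" sign convention is our assumption:

```python
import numpy as np

def sigmoid(x):
    """Activation s(x) = 1 / (1 + exp(-x)), as in the text."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, thresholds):
    """One layer: h_i = s(sum_j V_ij x_j + T_i^hid).

    `weights` is the matrix V (n_out x n_in); `thresholds` holds the
    threshold terms T_i (sign convention assumed, not stated in the text)."""
    return sigmoid(weights @ x + thresholds)

# Toy network: 3 inputs -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(1)
V1, T1 = rng.normal(size=(4, 3)), rng.normal(size=4)
V2, T2 = rng.normal(size=(1, 4)), rng.normal(size=1)
x = np.array([0.2, -1.0, 0.5])
y = forward(forward(x, V1, T1), V2, T2)
```

Because the sigmoid maps every pre-activation into (0, 1), all hidden and output values stay bounded, which is the divergence-prevention property the text mentions.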
Finally, the test set is used to determine the performance level of the ANN. 31

2.2.2 Prediction of vibrational energies. By analysing the error between DFT and experimental results, a definite and clear trend is found. Fig. 3 (taking B3PW91/def2-QZVP as an example) illustrates that a higher vibrational quantum number means a greater error, and that the error trajectories of different molecules are similar. The absolute error here is the theoretical vibrational energy minus the experimental value. This trend, with its abundant detail, can be learned by an ML method. Therefore, after obtaining the theoretical value E_v^ab of a DFT method and the corresponding experimental results, the systematic error E_v^sys of the DFT method can be obtained through the ANN. Ultimately, the predicted vibrational energy E_v is defined as E_v(a) = E_v^ab(a) − E_v^sys(a), where a represents the kind of diatomic molecular system. It is worth noting that E_v^sys(a) is an error function associated with the molecule, not a fixed constant.

Obtain initial sample set

The potential energy curves of 39 molecules, such as H2, were calculated with def2-XVP basis sets (where X = QZ, TZ or S). Then E_v^ab was obtained by solving eqn (2) with LEVEL. 37 The corresponding partial experimental vibrational energy levels are displayed in Table 1 as the initial sample set. Finally, the predicted vibrational energies and relative deviations d of the molecules are obtained through the ANN, where d = |E*_v − E_v| / E_v × 100%, E*_v represents the theoretical energy (the DFT or ANN value), and E_v represents the experimental value. The input characteristic variables are: (1) the vibrational energy of B3PW91/def2-QZVP, E_v^QZ; (2) the vibrational energy of B3PW91/def2-TZVP, E_v^TZ; (3) the vibrational energy of B3PW91/def2-SVP, E_v^S. The output variable is written as E_v^B. A case in point is 24MgO in Table 2. The task of the ANN is to learn the correct mapping between the input characteristic variables and the systematic deviation E_v^sys.
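The correction scheme E_v = E_v^ab − E_v^sys can be sketched as follows. The error model here is a hand-written linear stand-in for the trained ANN, with made-up coefficients purely for illustration:

```python
def corrected_energy(e_ab: float, error_model, features) -> float:
    """Delta-style correction: subtract the predicted systematic error
    from the raw DFT vibrational energy."""
    return e_ab - error_model(features)

def toy_error_model(features) -> float:
    """Stand-in for a trained ANN mapping (E_QZ, E_TZ, E_S) to the
    systematic error; coefficients are illustrative, not fitted."""
    e_qz, e_tz, e_s = features
    return 0.02 * e_qz + 0.01 * (e_tz - e_s)

# Hypothetical energies in cm^-1 for one vibrational level.
e_pred = corrected_energy(1000.0, toy_error_model, (1000.0, 1005.0, 1020.0))
```

The point of the scheme is that the correction depends on the molecule through its basis-set-dependent energies, matching the statement above that E_v^sys(a) is not a fixed constant.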
Prediction results of vibrational energies

The initial sample set (39 molecules in all) was split into a sub-sample set (36 molecules) and a test set (3 molecules). After many parameter tests, balancing calculation time against accuracy, the most balanced set of parameters was chosen for training; these are listed in Table 3. The relative deviation (see eqn (9)) is used to measure the performance of the ANN. For the test set, consisting of Na35Cl, 24MgO and 12C16O, the average relative deviation of the final ANN model is 0.42%, the maximum relative deviation is 4.65%, and the minimum relative deviation is 0.0000099%. On the sub-sample set, the average relative deviation is 1.10%, the maximum is 15.92%, and the minimum is 0.0000038%. The consistent performance on the test and sub-sample sets indicates that the learned model is reliable.

Comparison and analysis

The comparison with CCSD(T)/cc-pV5Z results is listed in Table 4. As shown in Tables 2 and 4, the ANN effectively improves the performance of B3PW91/def2-XVP, even beyond the more complex ab initio method (CCSD(T)/cc-pV5Z). In detail, the error of the ab initio methods increases significantly at high energy levels and easily exceeds 100 cm−1, and the maximum error of B3PW91/def2-QZVP exceeds 1000 cm−1. By contrast, the maximum error of the ANN does not exceed 70 cm−1, and its minimum error is only 0.006 cm−1. In addition, the error of the current method is smaller than that of CCSD(T) at every vibrational energy level. To further illustrate the reliability of the current method, many more diatomic molecules were studied and compared with CCSD(T)/cc-pV5Z. Some are shown in Fig. 4, where the height of the red pillar is the average relative deviation of CCSD(T)/cc-pV5Z and the height of the blue pillar is the average relative deviation of the ANN.
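The relative-deviation metric used above (eqn (9)) is straightforward to compute; taking the absolute value is an assumption here, made because all quoted deviations are positive:

```python
def relative_deviation(e_theory: float, e_exp: float) -> float:
    """d = |E* - E| / E * 100, with E* the theoretical (DFT or ANN)
    energy and E the experimental value, both in cm^-1."""
    return abs(e_theory - e_exp) / e_exp * 100.0

# A theoretical level 10 cm^-1 above a 1000 cm^-1 experimental level
# gives a 1% relative deviation.
d = relative_deviation(1010.0, 1000.0)
```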
It shows that the improvement introduced by the ANN over the ab initio method is greater than that obtained by expanding the basis set. It should also be emphasized that this work considerably reduces the computational cost. Taking Na35Cl as an example, it takes more than 40 hours to obtain the CCSD(T)/cc-pV5Z results, compared with less than one hour for the current method, including preparing the DFT data and executing the ANN algorithm.

Conclusion

In this work, a general method is presented to obtain high-quality vibrational spectra of diatomic molecules by starting from conventional DFT calculations and correcting them with artificial neural network models. This approach provides a different path to improving DFT results without introducing sophisticated models (such as specific hybrid functionals) or large basis sets. Compared with CCSD(T)/cc-pV5Z, the current work reduces the vibrational energy prediction error for diatomic systems from hundreds of cm−1 to dozens, even to tenths, and takes less than a tenth of the time. Since the strategy employed here is a general data-driven approach, it can easily be extended to calculations of other molecular properties. For example, the errors of current DFT calculations of fluorescence spectra of macromolecular systems can easily exceed 1000 cm−1. 60,61 In future work, it is expected that the fluorescence spectral prediction capability of DFT can be improved by building a fluorescence spectral data set and adopting a correction method similar to that used in this work. Several key points deserve attention: (1) collect accurate experimental (or computational) data on macromolecular system properties to establish a data set; (2) working from simple to complex, try a variety of DFT methods for these properties, so that the calculation error on the data set shows a definite trend (similar to Fig.
3); (3) build a high-dimensional function through the ANN and learn the rule governing the calculation error; (4) combine the DFT and ANN error models to achieve higher prediction quality.

Conflicts of interest

There are no conflicts to declare.
Therapeutic efficacy of chloroquine for the treatment of Plasmodium vivax malaria among outpatients at Hossana Health Care Centre, southern Ethiopia Background Plasmodium vivax accounts for about 44 % of all malaria infection in Ethiopia. Chloroquine (CQ) is the first-line treatment for vivax malaria in Ethiopia. Chloroquine-resistant (CQR) P. vivax has been emerging in different parts of the world, compromising the efficacy of the drug and posing both health and economic burdens in the developing world. The current study aimed to assess the therapeutic efficacy of CQ for the treatment of vivax malaria among outpatients at Hossana Health Care Centre, southern Ethiopia. Methods A one-arm, 28-day follow-up, in vivo therapeutic efficacy study was conducted from 5 April to 25 June, 2014. Sixty-three patients aged between four and 59 years with microscopically confirmed P. vivax infection were enrolled. All patients were treated with CQ 25 mg/kg for 3 days. Recurrence of parasitaemia and the clinical condition of patients were assessed on days 1, 2, 3, 7, 14, 21, and 28 of the 28-day follow-up period. Haemoglobin (Hb) level was determined on day 0, day 28 and on the day of recurrence of parasitaemia using a portable spectrophotometer. Results Of the 63 patients included in the study, 60 (95.2 %) completed the 28-day follow-up; three patients were excluded: one due to vomiting of the second dose of the drug, one due to Plasmodium falciparum infection and one lost to follow-up during the study. At enrolment, 35 (53.3 %) had a history of fever and 28 (46.7 %) had documented fever. The geometric mean parasite density on the day of enrolment was 3472 parasites/μl. Two patients had recurrent parasitaemia within the 28-day follow-up. CQ was found to be efficacious in 96.7 % of the study participants, with two treatment failures detected.
The failures were likely late parasitological failures, as both patients had recurrent parasitaemia within the 28-day follow-up. Conclusion The current study revealed that CQ showed a high rate of efficacy (96.7 %) among the study participants, even though some previous studies elsewhere in Ethiopia have reported an increase in CQR P. vivax. Thus, surveillance of CQR molecular markers and regular monitoring of the pattern of resistance to CQ are needed for rapid and effective control of any spread of drug resistance in the study area. Electronic supplementary material The online version of this article (doi:10.1186/s12936-015-0983-x) contains supplementary material, which is available to authorized users. respectively, has worsened the problem of control [3]. Chloroquine (CQ) is the cheapest anti-malarial drug available at the peripheral level (sub-health posts and health posts without laboratory facilities). It is used for treatment of laboratory-confirmed Plasmodium vivax and symptomatic malaria [4]. However, studies have revealed that the emergence of chloroquine-resistant (CQR) strains of P. vivax is alarming and has affected endemic countries such as Ethiopia [5][6][7][8], Madagascar [9], Myanmar [10], and Indonesia [11]. High levels of CQR in the northern parts of the islands of New Guinea and Sumatra, Indonesia have been well documented, as well as sporadic reports from other locations [12]. The World Health Organization (WHO) has recommended that drug efficacy be regularly assessed. Failure to detect the emergence of anti-malarial drug resistance could lead to a drug-resistant malaria epidemic with major public health and economic consequences for an area, province and country [3,13,14]. Therefore, monitoring of drug resistance is essential for timely changes to treatment policy, which should be initiated when the treatment failure rate exceeds 10 % at the end of follow-up [13].
However, a decision to change treatment policy may be influenced by a number of additional factors, including the prevalence and geographical distribution of reported treatment failures, health service provider and/or patient dissatisfaction with the treatment, the political and economic context, and the availability of affordable alternatives to the commonly used treatment [15]. The emergence and spread of CQR P. vivax strains is becoming a major public health problem in different countries and requires regular monitoring to control its spread [1]. There are a few reports of CQR P. vivax from different parts of Ethiopia [6][7][8]. Such evidence necessitates urgent assessment to obtain information for developing or changing treatment policies [3,13]. This study was conducted to assess the therapeutic efficacy of CQ for the treatment of vivax malaria among outpatients at Hossana Health Care Centre, southern Ethiopia.

Study site and population

The study was conducted in Hadiya Zone, Hossana Town, located in the Ethiopian Rift Valley (Fig. 1), in the north-western part of the Southern Nations, Nationalities and Peoples' Region (SNNPR), 230 km south of Addis Ababa and 145 km from Hawassa. The town lies at a latitude of 10°06′ N (10.1° N) and a longitude of 39°59′ E (39.983° E) and has a total of 104,208 inhabitants. The area has a short rainy season from March to May and high rainfall during the main season (June-September), and is characterized by unstable, seasonal malaria, one of the main diseases in the town and its surroundings, with peak transmission following the rainy seasons [17]. The study participants were patients with confirmed P. vivax mono-infection on thick and thin blood film preparations who fulfilled the inclusion criteria [13] and were seeking treatment at Hossana Health Centre during the study period.
The source population comprised clinically suspected malaria cases, with fever or a history of fever, seeking treatment at Hossana Health Care Centre during the study period. The inclusion criteria were: age over 6 months; mono-infection with P. vivax detected by microscopy; asexual parasite count >250/μl; axillary temperature ≥37.5 °C or history of fever during the previous 48 h; ability to swallow oral medication; willingness to comply with the study for its duration; and informed consent from the patient or, for children, a parent/guardian [3,13]. Participants with vivax malaria requiring hospitalization, severe malnutrition, a febrile condition due to diseases other than malaria, or regular medication that might interfere with anti-malarial pharmacokinetics, and those who were pregnant or breastfeeding, were excluded from the study [3,13]. The required sample size was calculated based on the 3.6 % treatment failure prevalence from a study conducted in Serbo Town [7], 5 % precision and a 95 % confidence level. An additional 20 % (10 patients) was added to cover expected loss to follow-up and withdrawal of consent, so 63 participants were included.

Study design

A one-arm, prospective evaluation of clinical and parasitological responses to directly observed treatment for vivax malaria was used. Patients with P. vivax infection who fulfilled the inclusion criteria were enrolled, treated and followed for 28 days. The follow-up comprised a fixed schedule of check-up visits (days 1, 2, 3, 7, 14, 21, and 28) with corresponding clinical and laboratory examinations [13]. Questionnaires were used by senior laboratory technologists to gather general information, such as socio-demographic data, from the study participants, and were checked for completeness by the principal investigator.
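The sample-size calculation described above follows the standard single-proportion formula n = z²·p(1−p)/d². A sketch under stated assumptions: z = 1.96 for the 95 % confidence level, and a multiplicative 20 % inflation for attrition, whereas the paper simply added a flat 10 patients (giving 63 rather than 64):

```python
import math

def sample_size(p: float, precision: float = 0.05,
                z: float = 1.96, attrition: float = 0.20) -> int:
    """Single-proportion sample size n = z^2 * p * (1 - p) / d^2,
    inflated for expected loss to follow-up and rounded up."""
    n = (z ** 2) * p * (1.0 - p) / (precision ** 2)
    return math.ceil(n * (1.0 + attrition))

n_required = sample_size(0.036)  # p = 3.6 % treatment failure from [7]
```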
Clinical and laboratory information were collected as follows.

Treatment and follow-up

A 28-day, in vivo drug efficacy test was done according to the methods recommended by WHO [16] and the Ethiopian Ministry of Health [17]. Patients were treated with 25 mg/kg CQ-sulfate (Addis Pharmaceuticals, Adigrat, batch number 9461, manufacture date 02/2011, expiry 02/2016), administered over three consecutive days (10, 10 and 5 mg/kg on days 0, 1 and 2, respectively) [4,18,19] at Hossana Health Care Centre. All doses were administered under direct observation. Study subjects were observed for vomiting for 30 min after intake of the drug; those who vomited were retreated with the same dose, and subjects who vomited twice were excluded from the study. The study participants were advised not to take other drugs, except that patients with an axillary temperature >37.5 °C were treated with paracetamol (10 mg/kg). Patients were advised to return to the Health Care Centre for clinical and parasitological examination if they felt sick at any time during the follow-up period. In particular, parents or guardians were instructed to bring children to the Health Care Centre at any time if they showed any danger sign (unable to drink or breastfeed, vomiting, convulsions, lethargy or unconsciousness, inability to sit or stand, difficult breathing) [13]. Patients who met all the inclusion criteria were given a personal identification number and received treatment only after the study was fully explained and informed consent provided. Patients who decided to participate were examined, treated and followed, with successive monitoring of parasitological and clinical responses on the follow-up days until day 28. The day a patient was enrolled and received the first dose of CQ was designated day 0. Patients were asked to come for follow-up on days 1, 2, 3, 7, 14, 21, and 28.
Thick and thin blood smears were prepared and examined to check parasite clearance and/or recurrence of parasitaemia at all follow-up visits. Haemoglobin was measured on days 0 and 28 and on the day of recurrence of parasitaemia. Any patient who did not come to the Health Care Centre on the day of appointment was traced at his or her home and assisted by health extension workers to complete the follow-up.

Clinical procedures

Physical examinations, including axillary temperature, weight and clinical condition, were performed throughout the study period for all participants.

Laboratory procedures

Capillary blood was collected from each study participant; duplicate thick and thin blood films were made at recruitment and on each follow-up day. The blood films were stained with 10 % Giemsa for 10 min and examined with a 100× oil immersion objective. Species identification and parasite quantification were done by a trained senior medical laboratory technologist, and the reports were recorded on the laboratory request form. The thick blood smears were used to count the numbers of asexual parasites and white blood cells (WBCs) in a limited number of microscopic fields; P. vivax asexual stages were counted against 200 WBCs, and parasitaemia was determined according to the standard formula [13]. Haemoglobin was measured on days 0 and 28 and on the day of recurrence of parasitaemia: finger-prick blood was taken and read with a portable spectrophotometer (HemoCue Hb 301 System, Sweden). Anaemia was defined according to the categorization in [20]. Urine of all female participants was screened for pregnancy by strip test (PR China, date 2012/11, expiry 2014/11, batch number W00121125.2), and those testing positive were excluded from the study.
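The thick-film convention referenced above converts a count against 200 WBCs into parasites per microlitre. A minimal sketch; the assumed 8000 WBC/μl is the usual WHO default and is an assumption here, since the text does not state the value:

```python
def parasite_density(parasites_counted: int,
                     wbcs_counted: int = 200,
                     assumed_wbc_per_ul: int = 8000) -> float:
    """Parasites/ul = counted parasites * assumed WBC count / WBCs counted.
    The 8000 WBC/ul default is an assumption of this sketch."""
    return parasites_counted * assumed_wbc_per_ul / wbcs_counted

density = parasite_density(87)  # 87 asexual stages seen against 200 WBCs
```

Under these assumptions, 87 parasites counted against 200 WBCs corresponds to 3480 parasites/μl, close to the geometric mean reported at enrolment.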
Study endpoints

Study participants were classified as having an 'adequate clinical and parasitological response' in the absence of parasite recurrence within the 28 days of follow-up, or as a 'treatment failure' in the case of recurrence of parasitaemia during follow-up or new infection with Plasmodium falciparum (within 28 days).

Statistical analysis

Statistical Package for Social Science (SPSS) version 16 was used for data management and analysis. Data from patients with mixed P. falciparum infection, loss to follow-up or vomiting were excluded from the analysis according to the WHO method. The analysis included the proportions of early treatment failure (ETF), late clinical failure (LCF), late parasitological failure (LPF), and adequate clinical and parasitological response (ACPR) at day 28. The Kaplan-Meier survival estimate was used to evaluate the risk of therapeutic failure during the follow-up period. The change in mean haemoglobin level between days 0 and 28 was compared using a paired t test. For data that were not normally distributed (e.g., age), the median was used as the measure of central tendency, and parasite counts were summarized with the geometric mean. In all analyses, a p value <0.05 was considered significant.

Ethical consideration

The study was reviewed and approved by the Ethical Review Committee of Jimma University, College of Public Health and Medical Sciences. Permission was obtained from the Hadiya Zone Health Office, Hossana Town Administration Health Office and Hossana Health Centre. The purpose of the study was explained, and written informed consent was obtained from each participant and from the parents or guardians of children.

Data quality control

Data collectors, including the principal investigator, were trained by senior diagnostic experts from the regional referral laboratory before the actual work. The competency of the data collectors was assessed before selection for the study. Standard operating procedures were strictly followed.
The quality of reagents and equipment was checked each day before patient samples were examined. All P. vivax-positive slides from the day of admission, all slides from the day of recurrence and 5 % of negative slides, picked randomly from the slides prepared during follow-up, were re-examined blindly by experts from the regional referral laboratory.

Results

Of a total of 1693 patients screened for malaria at Hossana Health Care Centre, 1412 were negative and 281 were positive for malaria. Among the positive patients, 182 had P. falciparum, 92 had P. vivax and seven had mixed infections. Of the 92 P. vivax-positive patients, 63 who fulfilled the inclusion criteria were recruited. Twenty-nine patients were excluded before enrolment for not fulfilling the inclusion criteria: 13 due to long distance from the health centre, seven because they were pregnant, six due to prior anti-malarial intake and three who refused consent. Of the 63 study participants, three were excluded during the study: one vomited twice during the third dose of CQ on day 2, one was found infected with P. falciparum asexual stages on day 14, and one was lost to follow-up on day 14; the remaining 60 participants completed the 28-day follow-up (Fig. 2). Among the recruited study participants, males outnumbered females (35 vs 25). The median age of the study participants was 23 years (range 4-59) (Table 1). Among the study participants, 32 (53.3 %) had a history of fever and 28 (46.7 %) had fever at the time of enrolment. The duration of illness before enrolment was 3.05 ± 1.41 (mean ± SD) days. The geometric mean parasite density at day 0 was 3472.1 parasites/μl. In the Kaplan-Meier survival analysis, 63 patients were at risk on day 0, and only two treatment failures were observed over the 28-day study.
Among the 60 patients, 100 % parasite clearance was observed by day 21. However, CQ treatment failure was observed in two patients (3.3 %) during the follow-up period, in children aged 10 and 14 years on days 28 and 21, respectively. This places the risk of CQ failure up to day 28 at 3.3 % (Table 2).

Parasite and fever clearance

Parasitaemia clearance time was 72 h for all patients in the 28-day in vivo follow-up study. At day 0, before drug administration, the geometric mean parasitaemia of the study participants was 3708 parasites/μl of blood (maximum 10,720 parasites/μl, minimum 1600 parasites/μl) (Figs. 3 and 4).

Efficacy outcomes

The mean parasitaemia was 3708.15 parasites/µl of blood and the mean haemoglobin was 11.5 g/dl on the day of recruitment. Following CQ treatment, the parasitaemia and fever of the study participants cleared and the mean haemoglobin concentration improved (13.4 g/dl). There were no early treatment failures or late clinical failures, but two (3.3 %) LPFs were observed at the end of the study (on days 21 and 28). The ACPR after the 28-day follow-up was 58 (96.7 %).

Haemoglobin recovery

On the day of enrolment, 30 (50 %) patients had mild anaemia, one patient had moderate anaemia, no patient had severe anaemia and 29 (48.3 %) were non-anaemic (haemoglobin ≥12 g/dl). A significant (p = 0.01) increase in haemoglobin level was observed between baseline and day 28. Among the study participants with ACPR, 52 (86.7 %) were non-anaemic and eight (13.3 %) had mild anaemia. The mean haemoglobin concentration was 11.5 g/dl (range 9.9 to 13.2 g/dl) on the day of enrolment and 13.4 g/dl (range 10 to 14 g/dl) on day 28 (Table 3).

Discussion

CQ has been in use both for treatment and prophylaxis in health institutions in Ethiopia.
It is the anti-malarial drug recommended as first-line treatment for vivax malaria by the Federal Ministry of Health, Ethiopia [18]. However, there are alarming reports of CQR vivax malaria from different malaria-endemic areas of Ethiopia [5][6][7][8]. It is becoming a major public health problem that requires rapid and effective management to control the spread of resistance, which in turn requires proper diagnosis of cases and administration of effective anti-malarial drugs [13]. The present study showed the prevalence of falciparum and vivax malaria at Hossana Health Care Centre, southern Ethiopia. Plasmodium falciparum-infected patients were detected more frequently than P. vivax patients during enrolment: of 281 malaria-positive patients, 182 had P. falciparum, 92 had P. vivax infections, and seven had mixed infections. This accords with previous reports from the Hossana Town health office showing that P. falciparum is more prevalent than P. vivax. The present 28-day in vivo therapeutic efficacy test of CQ at Hossana Health Care Centre demonstrated two (3.3 %) LPFs. The low level of treatment failure detected is comparable with previous reports from other parts of Ethiopia: Debrezeit, 2 % [5], and Serbo, 3.6 % [7]. However, it is lower than the high treatment failure reported from Halaba Special Woreda in southern Ethiopia, 11.7 % [8]. In the current study, the two patients with CQ treatment failure were children aged 10 and 14 years, similar to the treatment failures observed in studies conducted in the Debrezeit and Serbo towns of Ethiopia [6,7], in India [21] and in Indonesia [11]. A decreased parasite density on the day of recurrence was observed in one patient in this study, unlike studies done elsewhere [7,22,23].
Based on the WHO guidelines for therapeutic efficacy studies of anti-malarials, treatment outcomes are classified as ETF, LCF, LPF, or ACPR. Accordingly, this study revealed two patients with LPF, defined as the presence of parasitaemia on any day from day 7 to 28 with an axillary temperature <37.5 °C, without previously meeting any of the criteria of ETF or LCF [24,25]. Anaemia is a major effect of malaria infection: during intra-erythrocytic development, malaria trophozoites digest haemoglobin, so treatment with an appropriate drug is expected to improve patients' haemoglobin levels over time [25]. In this study, a significant increase in haemoglobin concentration (p = 0.01) was observed from baseline to day 28 among patients with ACPR. In addition, one patient with treatment failure showed an improved haemoglobin value on the day of recurrence, whereas the other patient with treatment failure showed no haemoglobin improvement on the day of recurrence. This study showed two (3.3 %) CQ treatment failures for vivax malaria at Hossana Health Care Centre, southern Ethiopia. The rate of resistance was lower than that of a previous study elsewhere in Ethiopia, which was 13.7 % [8]. It is important to caution the responsible authorities to extend efforts to monitor drug resistance and treatment failure in all malaria-endemic parts of the country. According to WHO, first-line treatment of malaria should be changed if the total failure rate exceeds 10 % [13]; it is therefore important to survey CQ treatment failures and intervene before this public health threshold is reached.

Conclusions

The three-dose regimen of CQ showed high therapeutic efficacy (96.7 %) in the treatment of uncomplicated vivax malaria among study patients at Hossana Health Care Centre, southern Ethiopia. The 3.3 % CQ treatment failure detected in this study was LPF. Rapid clearance of fever and asexual parasitaemia was observed after CQ treatment of uncomplicated vivax malaria.
Improvement in the mean haemoglobin level was achieved following CQ treatment from day 0 to day 28. Regular monitoring of the pattern of resistance to CQ is needed in the vivax malaria-endemic areas of the country, and measures should be taken rapidly and effectively to control the spread of resistance. Proper prescription and use of CQ should be practised in order to avoid resistance. The prevalence of falciparum and vivax malaria observed in the study area calls for strong malaria intervention measures.
Effect of respiratory inhibitors and quinone analogues on the aerobic electron transport system of Eikenella corrodens The effects of respiratory inhibitors, quinone analogues and artificial substrates on the membrane-bound electron transport system of the fastidious β-proteobacterium Eikenella corrodens grown under O2-limited conditions were studied. NADH respiration in isolated membrane particles was partially inhibited by rotenone, dicoumarol, quinacrine, flavone, and capsaicin. A similar response was obtained when succinate oxidation was performed in the presence of thenoyltrifluoroacetone and N,N'-dicyclohexylcarbodiimide. NADH respiration was resistant to site II inhibitors and cyanide, indicating that a fraction of the transported electrons can reach O2 without passing through the bc1 complex. Succinate respiration was sensitive to myxothiazol, antimycin A and 2-heptyl-4-hydroxyquinoline-N-oxide (HQNO). Juglone, plumbagin and menadione showed higher reactivity with NADH dehydrogenase. The membrane particles showed the highest oxidase activities with ascorbate-TCHQ (tetrachlorohydroquinone), TCHQ alone, and NADH-TMPD (N,N,N',N'-tetramethyl-p-phenylenediamine), and lower activity levels with ascorbate-DCPIP (2,6-dichlorophenolindophenol) and NADH-DCPIP. The substrates NADH-DCPIP, NADH-TMPD and TCHQ were electron donors to the cyanide-sensitive cbb3 cytochrome c oxidase. The presence of a dissimilatory nitrate reductase in the aerobic respiratory system of E. corrodens ATCC 23834 was demonstrated for the first time. Our results indicate that complexes I and II are resistant to their classic inhibitors, that the oxidation of NADH is stimulated by juglone, plumbagin and menadione, and that sensitivity to KCN is stimulated by the substrates TCHQ, NADH-DCPIP and NADH-TMPD. The oral microbiological ecosystem of humans is extremely dynamic, comprising a complex community with varied metabolic activities.
Over 400 distinct bacterial species have been found on oral surfaces 1,2 at a relatively constant temperature (34 to 36 °C) and a pH close to neutral in most areas, thus supporting the growth of a wide variety of microorganisms. Eikenella corrodens is commonly isolated from the human oral cavity and upper respiratory tract, and belongs to the family Neisseriaceae, genus Eikenella, in the β-subdivision of the class Proteobacteria. This facultatively anaerobic species is a gram-negative, fastidious, rod-shaped bacterium and an opportunistic pathogen in non-oral infections 3. In general, the flow of electrons in respiration is branched, comprising different dehydrogenases, quinones, bc complexes, haem-copper respiratory oxidases, reductases and respiratory supercomplexes 4,5. The expression of cytochromes and their complexes depends on environmental conditions such as the culture medium composition and the oxygen gradient. In the electron transport systems of prokaryotes and mitochondria, inhibitors of the different respiratory complexes have been used, and terminal oxidases are characterized with artificial substrates, such as ascorbate-TMPD, ascorbate-DCPIP 6,7, ascorbate-TCHQ 8 and TCHQ, which exhibit different redox potentials. In addition, it is postulated that oxidized TCHQ can be spontaneously reduced again by NADH or NADPH 9,10. A scheme of the respiratory chain of E. corrodens ATCC 23834 grown under O2-limited conditions 11 consists of succinate, NADH and formate dehydrogenases, a ubiquinone, a cytochrome bc1 complex and a cbb-type cytochrome c oxidase. Furthermore, previous studies 12,13 showed that E. corrodens grows using nitrate as a possible alternative electron acceptor in the respiratory system. In this work, we studied the effect of inhibitors on the respiratory rate in the presence of endogenous substrates of the electron transport chain in membranes isolated from E. corrodens cultured under O2-limited conditions.
Additionally, the effects of naphthoquinone and ubiquinone analogues on the respiration and nitrate reductase activity in membranes were analysed. Finally, cbb3 cytochrome c oxidase activity was determined with the use of different intermediaries. Here, the purpose is to elucidate the nature and functional organization of the respiratory chain components with respiratory inhibitors, quinone analogues and artificial substrates for the terminal cytochrome c oxidase. Results The effect of electron transport inhibitors in isolated membranes from E. corrodens was determined. The NADH dehydrogenase (NDH) and succinate dehydrogenase (SDH) activities were similar (Table 1); however, NADH-coupled respiratory oxidase activity was 2.4-fold lower than that of succinate oxidation (Table 1). This difference suggests a low expression of NDH under the tested growth conditions. In bacteria, the presence of one of three groups of respiratory NDHs has been reported 17,18. Inhibitors of NDH-1 and NDH-2 such as rotenone, quinacrine, dicoumarol and flavone (an inhibitor of NDH-2) inhibited NADH oxidation by 30-40% (Table 2). HQNO, antimycin A, myxothiazol and cyanide were poor inhibitors of NADH-dependent O2 consumption, with an inhibition of 31% by HQNO and 16-18% by the other inhibitors (Table 2). A similar response was also observed for an antimycin A plus myxothiazol experiment under "double kill" conditions 19. These results suggest a very low oxidation of NADH, which does not allow differentiation of the type of NDH, and show marginal use of the bc1 complex. The effect of bc1 complex inhibitors on succinate-dependent respiration is shown in Fig. 1A. Succinate-dependent respiration was inhibited by 60% by antimycin A and HQNO at the same concentration (Fig. 1A and Table 2). Succinate oxidation was found to be more sensitive to inhibition by myxothiazol than by HQNO and antimycin A.
Succinate respiration was inhibited with a half-maximal inhibitory concentration (IC50) of 1.7 µM myxothiazol, and O2 consumption was abolished at 30 µM myxothiazol. The IC50 values were 20 µM for antimycin and 40 µM for HQNO. The results indicated that the bc1 complex (complex III) of E. corrodens 11 was more sensitive to low concentrations of myxothiazol than to antimycin and HQNO. Succinate oxidation was only inhibited 10-15% by TTFA and DCCD at 100 µM (Fig. 1A), and 2,4-dinitrophenol caused no inhibition (data not shown). The data suggest a partial inhibition of NDH in the presence of rotenone, quinacrine, dicoumarol, flavone and bc1 complex inhibitors. Finally, the results suggest that SDH was weakly inhibited by TTFA, and the results were similar for Bacillus subtilis and B. cereus 15,20. Regarding the behaviour of the terminal oxidase in membranes of E. corrodens, the specific oxidase activities for various substrates in isolated membrane particles were determined (Table 1). The rate of O2 uptake with NADH plus TMPD (E°′ = +260 mV at pH 7.0) was 2.4- and 1.6-fold faster than that with NADH plus DCPIP (E°′ = +217 mV at pH 7.0) and ascorbate plus DCPIP (E°′ = +58 mV at pH 7.0), respectively, but the activity levels of the NADH- and ascorbate-TMPD oxidases (Table 1) were similar. The oxidation of TCHQ alone (E°′ = +350 mV at pH 7.0) was 1.6-fold faster than that with NADH plus TMPD and ascorbate-TMPD. In the presence of ascorbate, TCHQ was oxidized at higher rates than the previous substrates. Moreover, the specific activities of the oxidoreductases were determined. The SDH and NDH activities (Table 1) were 200 and 235 nmol reduced DCPIP · mg protein−1 · min−1, respectively, whereas the succinate:DCPIP activity measured without PMS was 35 nmol reduced DCPIP · mg protein−1 · min−1.
In the SDH assay, phenazine methosulfate (PMS) was used as the mediator and DCPIP as the final acceptor, while in the second assay endogenous ubiquinone was used as the mediator. Thus, compared to other mediators, PMS is an order of magnitude more efficient as a direct electron acceptor for SDH. Furthermore, the specific activity of nitrate reductase in isolated membranes of E. corrodens was determined in the presence of methyl viologen (MV) reduced by dithionite as the electron donor, and the activity was 230 nmol oxidized MV · mg protein−1 · min−1. The above experiments indicated that the rate of oxidation with NADH in the presence of DCPIP and TMPD was higher than that with NADH alone (Table 1), which is in agreement with the role of DCPIP and TMPD as electron donors to the respiratory chain downstream of ubiquinone 6. Moreover, TCHQ alone and TCHQ with ascorbate were oxidized with specific activities higher than those of the physiological and artificial substrates (Table 1). In E. corrodens, nitrate reduction has been demonstrated as a possible electron acceptor pathway alternative to oxygen in the respiratory system 12,13, and under our experimental conditions, we found membrane-bound nitrate reductase activity by oxidation of MV. Respiration with NADH in the presence of DCPIP or TMPD established a bypass from NDH to the c-type cytochrome and/or cytochrome oxidase. Cyanide inhibition of the NADH-DCPIP and TCHQ oxidases revealed a monophasic curve in both cases (Fig. 1B). NADH-DCPIP and TCHQ oxidation were inhibited with IC50 values of 12 and 9 µM KCN, respectively, and 100 µM KCN caused more than 80% inhibition. Likewise, the NADH-TMPD-dependent activity was 80% inhibited by 100 µM KCN.
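A monophasic inhibition curve of this kind is commonly summarized by the simple model v = v0 / (1 + [I]/IC50). The sketch below fits that model by a grid search over candidate IC50 values; the data points are synthetic illustrations (generated from the model with IC50 = 12 µM, echoing the NADH-DCPIP value), not the measured traces.

```python
# Estimate IC50 for a monophasic inhibition curve v = v0 / (1 + [I]/IC50).
# Hypothetical data; a grid search avoids external fitting libraries.

def monophasic(v0, ic50, conc):
    return v0 / (1.0 + conc / ic50)

def fit_ic50(concs, rates, v0):
    """Return the IC50 (µM) minimizing squared error on the model above."""
    best_ic50, best_err = None, float("inf")
    for i in range(1, 2001):          # candidate IC50 values 0.1..200 µM
        ic50 = i * 0.1
        err = sum((monophasic(v0, ic50, c) - v) ** 2
                  for c, v in zip(concs, rates))
        if err < best_err:
            best_ic50, best_err = ic50, err
    return best_ic50

# Synthetic example generated with IC50 = 12 µM
concs = [0, 3, 6, 12, 25, 50, 100]                   # µM KCN
rates = [monophasic(100.0, 12.0, c) for c in concs]  # % of uninhibited rate
print(fit_ic50(concs, rates, v0=100.0))              # recovers ≈ 12 µM
```

With real polarographic rates in place of the synthetic points, the same routine would return the inhibitor concentration that halves the uninhibited rate.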
In accordance with these results, we previously reported that succinate and ascorbate-TMPD oxidation involves a KCN-sensitive cbb′-type cytochrome c oxidase 11, with monophasic kinetics and IC50 values of 3 and 6 µM cyanide, respectively. TCHQ was utilized to measure the quinol oxidase activities in membranes of A. diazotrophicus PAL5 21 and of cytochrome bo3 highly purified from Bacillus cereus PYM1 22; however, TCHQ, a benzoquinone similar to the endogenous ubiquinone of E. corrodens 11, behaves in a more physiologically coherent way, and its oxidation proceeds via the bc1 complex pathway and/or the KCN-sensitive cytochrome c oxidase. Consequently, these results indicated the presence of a KCN-sensitive terminal oxidase and one fraction with very low respiration (13-17%) remaining at high concentrations of cyanide (100 µM). The effect of quinone analogues on the electron transport of membranes was determined. The respiratory activities in isolated membrane particles were examined by the addition of various naphthoquinone and ubiquinone analogues that differ in their structural features and redox potentials. The NDH and SDH activities were measured in the presence of water-soluble quinone analogues (Table 3). Compared to the endogenous ubiquinone activity, the NADH:juglone-, NADH:plumbagin- and NADH:menadione-DCPIP oxidoreductase activities increased 3-, 2- and 1.5-fold, respectively, with very high oxidoreductase activities (≥ 70%) remaining in the presence of site I inhibitors (data not shown). Succinate:juglone-DCPIP doubled the SDH-DCPIP oxidoreductase activity. These results showed that, compared to the other analogues, juglone has a greater ability to catalyse electron transfer from NADH or succinate to DCPIP. The different capacities of the three naphthoquinones as electron acceptors of the dehydrogenases may be due to the chemical structure and redox potential of these quinone analogues.
The rate of NADH oxidation increased with the concentration of the different quinone analogues (Fig. 2). The capacity of the different quinone analogues to stimulate NADH oxidation varies. The highest NADH oxidation rates were obtained with juglone, plumbagin, menadione, sodium 1,2-naphthoquinone-4-sulfonate (NQS), and duroquinone (2,3,5,6-tetramethyl-1,4-benzoquinone). NQS concentrations above 150 µM caused partial diminution of the stimulated respiratory activity, probably owing to the sulfonate group. Lower NADH oxidation activities were obtained with menadione bisulfite (MBS), lawsone and 1 mM decylubiquinone (data not shown). Apparent Km values were 21, 49 and 74 µM for juglone, plumbagin and menadione, respectively. The kinetics of stimulation by naphthoquinone analogues are in accordance with the data for B. cereus 14 and B. subtilis aroD 23. NADH-dependent respiration with quinone analogues was inhibited by rotenone, quinacrine, dicoumarol, capsaicin, flavone, site II inhibitors, SHAM (salicylic hydroxamate) and KCN (inhibition curves at concentrations of 1-300 µM, data not shown) in a similar manner as in their absence (Table 2). NADH oxidation in the presence of menadione and juglone was 20-50% inhibited by AgCl (data not shown), a potent inhibitor of Na+-NQR 17. Finally, succinate oxidation was not stimulated by quinone analogues at concentrations of 1-400 µM (data not shown). Our results indicated that the naphthoquinones juglone, plumbagin and menadione are far more effective than endogenous ubiquinone in NADH oxidation. Additionally, the polarographic experiments suggest that the activation exerted by the naphthoquinone analogues on NADH oxidase was coupled to an augmentation of electron transport through the cytochrome system. These results are in accordance with previous data in Escherichia coli, in which it was suggested that NDH I, ubiquinone and cytochrome oxidase do not produce significant amounts of H2O2, and that ubiquinone is involved in defence against oxidative stress at the cytoplasmic membrane 25. Finally, the results in the presence of site I inhibitors and quinone analogues in membranes of E. corrodens could not distinguish which type of respiratory NDH 17,18 predominates under our experimental growth conditions. Table 3. Activities of NADH and succinate:quinone-DCPIP oxidoreductases with different quinone analogues in membrane preparations of Eikenella corrodens ATCC 23834. a All quinone analogues were added at a final concentration of 300 μM. The specific activity of the oxidoreductases is presented as nanomoles of reduced DCPIP min−1 mg protein−1. b Succinate:endogenous ubiquinone-DCPIP oxidoreductase activity was measured in the absence of PMS. Succinate dehydrogenase activity with PMS-DCPIP was 187 ± 7 nmol of reduced DCPIP min−1 mg protein−1. Culture procedures and activity assays were performed as described in Materials and Methods. Discussion To our knowledge, we reported the first study on the respiratory chain of E. corrodens 11, and our results are in satisfactory agreement with the electron transport system of the genus Neisseria [26][27][28][29] and genetic data on the Eikenella corrodens 23834 genome (https://www.ncbi.nlm.nih.gov/genome/2067?genome_assembly_id=172078). Data on the respiratory inhibitors, the effect of quinone analogues, and the nitrate reductase activity in membranes of E. corrodens ATCC 23834 are scarce. The E. corrodens genome shows the presence of NDH, SDH, bc1, oxidase, and reductase complexes. The SDH and NDH activities were similar; however, the activity of NADH oxidation in all titrations was 2.4-fold lower than that of respiration with succinate. Our results demonstrate that NADH respiration is partially inhibited by rotenone, dicoumarol, quinacrine, flavone, HQNO (Table 2) and Ag+, suggesting that E. corrodens may possess NDH-1 and/or Na+-NQR, as indicated by the genomic sequence of Neisseria sp. 28,30.
Likewise, an nqr operon has been found in many marine and pathogenic bacteria 15,19. In contrast, a significant inhibition of N. gonorrhoeae NADH oxidase 31 was obtained with low concentrations of rotenone and HQNO (< 10 µM), which may reflect that NDH-1 and Na+-NQR are very sensitive to the inhibitors rotenone and HQNO 17,18, respectively; however, in membranes of N. meningitidis 28, NADH oxidation was weakly inhibited by rotenone and highly sensitive to HQNO. Within the same family, the differences in sensitivity to the site I inhibitors may be explained by the strength of interaction of each inhibitor with its active site and/or by the growth conditions. Our results suggest that SDH is resistant to TTFA. Additionally, the preliminary analysis of the genome and amino acid sequence of complex II in the family Neisseriaceae (N. shayeganii 871, N. weaveri, N. arctica, N. meningitidis serogroup B (strain MC58), N. gonorrhoeae (strain ATCC 700825/FA 1090)), and especially in E. corrodens, indicates two hydrophobic subunits, C and D, suggesting a type C succinate-ubiquinone reductase with a b-type haem (https://www.uniprot.org/uniprot/C0DU23; https://www.uniprot.org/uniprot/C0DU24). Succinate respiration was very sensitive to myxothiazol, antimycin A and HQNO (Fig. 1), and the bc1 complex of E. corrodens was more sensitive to low concentrations of myxothiazol than to antimycin and HQNO; in contrast, NADH oxidation was weakly inhibited by site II inhibitors and "double kill" conditions 19. Our data on inhibition by antimycin and HQNO with physiological substrates in E. corrodens, taken together with the studies of N. meningitidis by Yu and DeVoe 28 and of N. gonorrhoeae by Kenimer and Lapp 32, indicate that the effects of these inhibitors are similar, with the exception of succinate-dependent respiration in N. gonorrhoeae, where succinate oxidase cannot be inhibited by HQNO 31.
We previously reported a high sensitivity of the cbb′-type cytochrome c oxidase to cyanide in cytoplasmic membranes of E. corrodens 11 in the presence of the succinate- and ascorbate-TMPD-dependent substrates, and functional analysis of the genome showed the presence of a cbb3-type terminal oxidase (https://www.ncbi.nlm.nih.gov/ipg/?term=eikenella+corrodens+%5Borgn%5D+cytochrome-c+oxidase%2C+cbb3-type). NADH oxidase is far less affected by cyanide than succinate-dependent respiration. Our data described here indicate that cyanide inhibits the NADH-DCPIP and TCHQ oxidases, exhibiting monophasic kinetics with IC50 values of 12 and 9 µM, respectively, and also inhibits the NADH-TMPD oxidase. The activity measured with TCHQ does not represent the maximum activity of the respiratory system of E. corrodens. The rate of oxidation of TCHQ possibly indicates a very efficient interaction with the bc1 complex. The TCHQ oxidation activity in membranes of E. corrodens is similar to or greater than that reported for A. diazotrophicus 21, B. cereus 22 and H. pylori 33. It has been accepted that the point of entry of electrons from TMPD and DCPIP into the respiratory system is at the c-type cytochrome level. In conclusion, KCN strongly inhibited respiration with succinate or with artificial substrates that preferentially feed the terminal part of the respiratory chain (ascorbate-TMPD and ascorbate-TCHQ); moreover, with somewhat less efficiency, mixtures of substrates open a bypass from NDH (NADH-DCPIP and NADH-TMPD). This result strongly suggests that under the growth conditions studied here, a CN-sensitive oxidase is dominant in the respiratory system of E. corrodens. In addition, a possible interpretation of the background activity of 13% and 17% remaining in NADH-DCPIP and TCHQ oxidation at 100 µM cyanide, respectively, is the involvement of a bb′-type oxidase with a very low expression level under our growth conditions.
Likewise, the complete genome sequences of N. meningitidis 34 and N. gonorrhoeae 25 indicate that they contain a cbb′ or cbb3 complex, with IC50 values below 10 µM KCN in the presence of succinate as a respiratory substrate 28,32; however, NADH oxidation in the membranes of N. gonorrhoeae appears to have an IC50 value of 22 µM KCN 32. Nevertheless, Osyezka et al. 35 very recently reported that cyanide interacts with the native ferricytochrome c1 of the cytochrome bc1 complex of the photosynthetic bacterium Rhodobacter capsulatus, an interesting new finding that suggests caution in viewing cyanide as a simple inhibitor of cytochrome oxidase. It is clear that endogenous UQ is not an optimal mediator of electron transport between the dehydrogenases and oxidases, which is especially critical in the oxidation of NADH and formate 11. In this article, it is demonstrated that the addition of quinone analogues, especially juglone, plumbagin and menadione, to membranes from E. corrodens results in stimulation of NADH-dependent respiration. At the maximum levels of juglone, plumbagin and menadione, NADH oxidase activity was stimulated 54-, 43- and 12-fold, respectively. Even though the structures and redox potentials of the naphthoquinones are very different from those of endogenous ubiquinone, they have higher reactivity with NDH. Furthermore, oxygen consumption apparently does not occur as a product of hydrogen peroxide formation, suggesting that electron transport occurs across the respiratory system. Very recently, Seaver and Imlay 25 reported that H2O2 is primarily formed by a source outside the respiratory system. Thus, it would seem that the above quinone analogues are better electron acceptors for NDH than endogenous ubiquinone.
Previous studies showed that the metabolism of glutamate, serine and proline was associated with relatively high rates of nitrate reduction and respiration in E. corrodens 12. The amount of nitrate utilized was calculated on the basis of the nitrite level detected in culture filtrates from cells of E. corrodens grown aerobically. These findings suggest that the denitrification machinery, in which nitrite is reduced to nitric oxide (NO), nitrous oxide and, finally, dinitrogen, is apparently not expressed; additionally, it seems that this organism does not express the pathway converting nitrite into ammonia by the respiratory sirohaem-containing cytochrome c nitrite reductase NrfA or the detoxifying enzyme NirBD 36. Nitrite in bacteria is produced by one of three different types of nitrate reductases: periplasmic dissimilatory (Nap), membrane-associated respiratory (Nar) and soluble assimilatory (Nas). Gully and Rogers 12 did not directly show whether the nitrate reductase is type Nap or Nar. This article is the first to demonstrate the presence of membrane-bound respiratory nitrate reductase activity (dissimilatory nitrate reductase, Nar) in E. corrodens, and the genome sequence shows the presence of a nitrate reductase (https://www.ncbi.nlm.nih.gov/ipg/?term=eikenella+corrodens+%5Borgn%5D+nitrate+reductase). However, according to genomic information and previous studies, other Neisseria species 26,30,34 can express partial denitrification pathways under oxygen-limited conditions, possessing the genes necessary for the reduction of nitrite to nitrous oxide via the nitrite reductase AniA and the NO reductase NorB 37, but do not possess a known nitrate reductase 37.
In summary, our data strongly indicate that NADH- and succinate-dependent respiration in membranes of E. corrodens ATCC 23834 is resistant to inhibitors of NDH and SDH. However, succinate respiration is very sensitive to inhibitors of complex III. Likewise, succinate, NADH-DCPIP, NADH-TMPD and TCHQ are electron donors for a cyanide-sensitive cbb′ cytochrome c oxidase. However, NADH oxidase is resistant to site II inhibitors and cyanide, indicating that a percentage of the electrons transported can possibly reach O2 without passing through the bc1 complex, perhaps via a bb′-type oxidase with a very low expression level. The naphthoquinones juglone, plumbagin and menadione, despite structures different from that of endogenous ubiquinone, show higher reactivity with NADH dehydrogenase. Finally, the presence of dissimilatory nitrate reductase in the respiratory system of E. corrodens ATCC 23834, grown under O2-limited conditions, is demonstrated for the first time, which confirms the suggestions in previous studies 11,12 about growth using nitrate as an alternative electron acceptor. The cbb′ cytochrome c oxidase and nitrate reductase, as terminal electron-accepting systems, may be important determinants of pathogenicity under microaerobic conditions, permitting the colonization of oxygen-limited, nitrate-containing environments 27. The nitrite formed may be an important substrate source for bacteria implicated in periodontal disease and other oral infections. Future work should be done to clarify the electron transport chain from NDH towards the bc1 complex, and studies on the effect of oxidized and reduced benzoquinone analogues on the respiratory chain are necessary. Materials and methods Cultures, cell disruption, and membrane preparation. Eikenella corrodens ATCC 23834 was grown under O2-limited conditions as described previously 11. The final pH of the culture media was adjusted to 7.4 with NaOH. All cultures were maintained at 34 °C without shaking.
Cells in the stationary phase of growth (20-24 h of growth) were harvested and washed twice with cold 50 mM Tris, 5 mM EDTA, and 0.2 M NaCl, pH 7.5 (TEN buffer). Procedures for cell disruption using ultrasonication, membrane isolation, and protein concentration determination were similar to those described by Jaramillo et al. 11. Respiratory activities. Oxidase activities were determined polarographically at 34 °C as previously described 11,14 using a Clark-type electrode covered by an ultra-thin Teflon membrane (YSI model 53 Oxygen meter, Yellow Spring Instruments). Cytochrome oxidase activities were determined with 10 mM sodium ascorbate plus 0.1 mM TMPD at pH 6.8. In addition, oxygen consumption was determined in the presence of 10 mM ascorbate plus 0.08 mM DCPIP; with 0.5 mM NADH plus 0.1 mM TMPD and 0.5 mM NADH plus 0.08 mM DCPIP at pH 7.4; and with 10 mM ascorbate plus 3.5 mM TCHQ, and 3.5 mM TCHQ alone, at pH 6.6. Data are means of at least three experiments. Respiratory inhibitor assay. The effect of inhibitors on the respiratory rate was evaluated polarographically, and the compounds were dissolved as previously described 11. Potassium cyanide and quinacrine were dissolved in 50 mM potassium phosphate, pH 7.0; dicoumarol was dissolved in 30 mM KOH; and 2-heptyl-4-hydroxyquinoline-N-oxide (HQNO), antimycin A3, myxothiazol, thenoyltrifluoroacetone (TTFA), N,N'-dicyclohexylcarbodiimide (DCCD), 2,4-dinitrophenol, rotenone, flavone, and capsaicin were dissolved in dimethyl sulfoxide (DMSO). The concentration of DMSO used did not affect the respiratory activities tested. These inhibitors were preincubated with the membranes before the addition of the substrates. Quinone assays. The effect of quinone analogues on the respiration rate was measured in the polarographic experiments.
Quinone derivatives dissolved in absolute ethanol (0.025 ml or less) were added to the membranes 2-4 min before the beginning of the NADH and succinate oxidase assays (controls were run with ethanol alone). In 50 mM phosphate buffer at pH 6.6, spontaneous oxidation of the quinol analogues is minimal 14. The quinone analogues were menadione, menadione sodium bisulfite (MBS), lawsone, plumbagin, juglone, sodium 1,2-naphthoquinone-4-sulfonate (NQS), duroquinone (DQ), and decylubiquinone. Data are means of at least three experiments. Oxidoreductase activities. The SDH and NDH activities were determined essentially as described elsewhere 15 in a DU640 Beckman spectrophotometer (Beckman Instruments, Fullerton, CA). The succinate:DCPIP oxidoreductase activity was measured at 30 °C in 1 ml of a mixture containing 100 mM potassium phosphate, pH 7.4, membranes (0.1 mg of protein), 100 μM KCN, 40 mM disodium succinate, 1 mM PMS, and 0.08 mM DCPIP. The activity of NDH was measured under the same conditions, except that succinate and PMS were replaced by 0.2 mM NADH. An extinction coefficient of 21 mM−1 cm−1 was used for DCPIP. The nitrate reductase activity was measured based on the oxidation of reduced MV as described by Kučera 16. The nitrate reductase activity was measured under anaerobic conditions in an assay mixture (2.5 ml) containing an N2-saturated solution of 0.1 mM sodium phosphate, pH 7.4, 1 mM MV, and membranes (0.1 mg of protein). MV was reduced by the addition of sodium dithionite. The reaction was started by the injection of an anaerobic solution of potassium nitrate (2 mM, final concentration). Oxidation of MV was monitored at 600 nm using an extinction coefficient of 11.4 mM−1 cm−1. Data are means of at least three experiments.
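The conversion from a spectrophotometric slope to the specific activities quoted above follows the Beer-Lambert law: ΔA min−1 divided by ε·l gives the concentration change per minute, which is then scaled by cuvette volume and protein mass. A minimal sketch with illustrative numbers (a 1-cm path length is assumed; the absorbance slope below is hypothetical, not measured data):

```python
# Convert a spectrophotometric slope (ΔA per min) into specific activity
# (nmol reduced DCPIP · min^-1 · mg protein^-1) via the Beer-Lambert law.
# The input values below are illustrative, not measured data.

def specific_activity(delta_a_per_min, epsilon_mM, path_cm, protein_mg, volume_ml):
    """ΔA/min → nmol substrate converted per min per mg protein."""
    # Beer-Lambert: ΔA = ε (mM^-1 cm^-1) · Δc (mM) · l (cm)
    delta_c_mM = delta_a_per_min / (epsilon_mM * path_cm)   # mM per min
    nmol_per_min = delta_c_mM * volume_ml * 1000.0          # mM · ml = µmol → nmol
    return nmol_per_min / protein_mg

# DCPIP assay: ε = 21 mM^-1 cm^-1, 1-ml cuvette, 0.1 mg protein
print(specific_activity(0.42, 21.0, 1.0, 0.1, 1.0))  # ≈ 200 nmol · min^-1 · mg^-1
```

The same arithmetic with ε = 11.4 mM−1 cm−1 applies to the methyl viologen traces monitored at 600 nm.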
Decisions to use surgical mesh in operations for pelvic organ prolapse: a question of geography? Introduction and hypothesis Surgical mesh can reinforce damaged biological structures in operations for genital organ prolapse. When a method is new, scientific information is often contradictory. Individual surgeons may accept different observations as useful, resulting in conflicting treatment strategies. Additional scientific information should lead to increasing convergence. Methods Based on data from the Swedish National Quality Register of Gynecological Surgery, all patients who underwent their first recurrent anterior compartment prolapse operation between 2006 and 2017 were included (2758 patients). Surgical mesh was used in 56.5%. We analyzed inter-county disparities in and patterns of mesh use over 12 years. To minimize confounding, we selected a group of highly comparable patients where similar decision patterns could be expected. Results The use of mesh differed between counties by a factor of 11 (8.6–95.3%). Counties with low use of mesh continued with low use and counties with high use continued with high use. Conclusions Decisions regarding how to interpret existing scientific information about mesh implants in the early years of mesh use have led to “communities of practice” highly influenced by geographical factors. For 12 years, these groups have made disparate decisions and upheld them without measurable change toward consensus. The scientific learning process has stopped—despite the abundance of new publications and the steady supply of new types of mesh. Ongoing disparity in surgeons’ choices in comparable patients has an adverse effect on clinical care. For the patient, this represents 12 years of a geographical lottery concerning whether mesh is used or not. Introduction Pelvic organ prolapse (POP) is common in women. 
Approximately 12% of women undergo an operation for prolapse [1][2][3], and high rates of recurrence in the range of 30-40% have been reported [4,5]. To provide additional support for weakened or damaged biological tissue, special surgical meshes have been designed and have been in use since the US Food and Drug Administration (FDA) approved the first mesh products in 2002 [6]. In contrast to the stringent requirements and formalized approaches for development of pharmaceuticals, there has never been a systematic scientific premarketing evaluation of mesh products. Faced with this situation, every surgeon individually has to investigate and validate available scientific information and accept the information gleaned as "current best knowledge." Consequently, treatment of POP with mesh remains controversial, and the decision when to use mesh is an unresolved challenge for gynecological surgeons and patients [7]. In 2011, the FDA released an update regarding the use of transvaginal surgical mesh, including a warning because of "serious safety and effectiveness concerns" [6]. A recent article [8] investigated 684,250 POP procedures performed in 2012 in 15 Organization for Economic Cooperation and Development (OECD) countries including Sweden. The article shows an extraordinary lack of uniformity across these 15 countries, with the median rate of surgical mesh utilization in the anterior compartment differing by a factor of 7.9 (range 3.3-26%) and in the posterior compartment by a factor of 5.3 (range 3.3-17.0%). Monitoring results in prolapse surgery, the Swedish National Quality Register of Gynecological Surgery (GynOp) has seen an even larger continuing disparity in mesh use in Sweden. The optimal rate of mesh use cannot be both low and high at the same time. A large and persisting geographical disparity in mesh use in comparable patient groups will affect clinical care negatively.
The aim of this article is: (1) to describe the geographical disparity over 12 years in the use of surgical mesh in operations for POP in Sweden; (2) to chart changes in patterns of Swedish gynecological surgeons' mesh use over time. Methods The Swedish National Quality Register of Gynecological Surgery The GynOp register includes all major gynecological operations performed in Sweden. Since 2006, GynOp has registered prolapse operations in detail on a national scale, including a 1-year follow-up of patients. Today, the register contains complete information on more than 56,000 prolapse procedures [9]. All patients are included in the register when a urogynecological procedure is decided. Yearly comparisons with the Swedish National Patient Register (where all Swedish surgical procedures are registered by law) show that the GynOp coverage of Swedish prolapse operations from 2006 to 2008 was around 75%, and since 2009 it has continuously been > 95%. The data collection process includes both surgeon- and patient-derived data up to 1 year post-operation [10][11][12]. Data completeness regarding the use of mesh reported by the surgeons has been 100%. The GynOp registry provides all Swedish gynecological surgeons with yearly reports that include detailed information about the use of mesh in all types of prolapse surgery. The Swedish hospital system The Swedish hospital system is primarily organized at a county level. Swedish counties are fairly independent political units responsible for all health services within their boundaries [13]. Public hospitals are owned by the counties and financed by county taxes. There are a few private clinics that specialize primarily in elective surgery. These clinics are contracted to county councils and reimbursed by them for operations they perform on Swedish patients, as all Swedes are covered under the national health system.
In an effort to make the hospital system more efficient, some counties have supported specialization in some hospitals, so differences in use of mesh at a hospital level may in some cases have organizational rather than medical reasons. The GynOp register shows that practically all (98.2%) patients who undergo a prolapse operation are operated on in the county they live in. This makes county results useful and robust as parameters for analysis of changes in mesh use. Based on this fact, we hypothesized that the proportion of POP operations with mesh county-wise, stratified by years, expresses the particular mesh policy for a particular year. Data The basic data used in this study include all POP operations registered prospectively and consecutively in GynOp from 1 January 2006 to 29 August 2017; in all, 56,120 operations. Patients with simultaneous operations for incontinence were not included. To minimize confounding, we selected a cohort in which the use of mesh was an accepted alternative and whose patients were so comparable that similar mesh decisions could be expected for them. We included only (1) patients who underwent only operations in the anterior compartment (anterior colporrhaphy). This is the most common operation in prolapse surgery and has a moderate level of difficulty. Patients with concomitant POP or non-POP operations were excluded. Additionally, (2) only healthy patients were included in the study (American Society of Anesthesiologists' classification system for patients' preoperative physical status, group one or two) [14]. Moreover, (3) all selected patients had a normal non-descended uterus. Since it can be argued that primary and recurrent POP operations represent different non-comparable patient groups, we (4) analyzed only patients undergoing their first recurrent operation, where the use of mesh is a generally accepted option; 96.7% of these patients had previously undergone "native tissue repair" operations and 3.3% had received a mesh.
We excluded four small counties and/or counties with low POP surgery activity that each reported fewer than 50 recurrent operations (in total, 103 patients). This rigorous selection process resulted in a study group of 2758 eligible patients with recurrent POP surgery in the anterior compartment, operated on by 467 Swedish gynecological surgeons in 52 gynecological departments in 17 counties over 12 years; surgical mesh was used in 1559 of these patients (56.5%). All statistical analyses were performed using SPSS version 23 (IBM, Armonk, NY, USA).

Ethics

The GynOp register was approved by the Ethics Committee of the University of Umeå, Umeå, Sweden (Dnr 04-107). This study and the use of data from the register were approved by the Ethics Committee of the University of Umeå (Dnr 08-076 M).

Results

The use of mesh in Sweden in operations for recurrent POP in the anterior compartment from 2006 to 2017 is shown in Fig. 1. At the national level, there was an increase in mesh use from 2006 to 2009, followed by a stable period, 2010-2012, at around 66% use. From 2013 (2 years after the FDA warning), there was a significant decrease (p < 0.001) to a new stable level of around 47%. At the county level, however, the use of mesh varied significantly, with a range of 8.6-95.3%. Figure 2 shows the aggregated proportions of mesh use in the 17 counties over the entire period studied. To examine possible concealed changes over time, we performed a yearly ranking of the counties' mesh use, where 1 = the lowest and 17 = the highest yearly county rank. Figure 3 shows the mean rank of the 17 Swedish counties from 2010 to 2017; it indicates that the ranking in mesh use has been fairly stable over time. We performed a logistic regression of the decision to use mesh, with year of procedure, county, and the interaction between year and county as explanatory variables.
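The form of this regression can be sketched in Python with statsmodels (the paper's analysis was done in SPSS; the data below, including the number of counties, sample sizes, and effect sizes, are synthetic and purely illustrative). A likelihood-ratio test compares the models with and without the county-by-year interaction:

```python
# Hypothetical sketch: likelihood-ratio test for a county x year interaction
# in a logistic regression of mesh use. All data here are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
county = np.repeat(np.arange(4), 200)          # 4 invented "counties"
year = rng.integers(0, 12, size=county.size)   # years 0..11 (2006..2017)
# County-specific propensity to use mesh, constant over years (no interaction)
p_mesh = np.array([0.2, 0.4, 0.6, 0.8])[county]
mesh = rng.binomial(1, p_mesh)
df = pd.DataFrame({"mesh": mesh, "county": county, "year": year})

full = smf.logit("mesh ~ C(county) + year + C(county):year", data=df).fit(disp=0)
main = smf.logit("mesh ~ C(county) + year", data=df).fit(disp=0)

lr = 2 * (full.llf - main.llf)          # likelihood-ratio statistic
dfree = full.df_model - main.df_model   # number of interaction parameters
p_interaction = stats.chi2.sf(lr, dfree)
print(f"LR test for interaction: chi2 = {lr:.2f}, df = {dfree:.0f}, p = {p_interaction:.3f}")
```

With no interaction built into the simulated data, the test will usually (though not always) return a non-significant p-value, mirroring the pattern reported here.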
This analysis showed that the interaction effect was not significant (p = 0.732), but the main effects of year (p < 0.001) and county (p < 0.001) were both significant, with a Nagelkerke R-square of 0.207. This substantiates that the differences in mesh use among the individual Swedish counties were independent of the year of surgery. The patient pools were practically identical across the Swedish counties concerning age, body mass index (BMI), and number of births. The size of the prolapse, however, varied substantially among the counties (Table 1); the counties thus differed in deciding to perform the procedure with respect to the size of the prolapse, but this had no bearing on their propensity to use mesh. Differences in mesh use were not attributable to patient characteristics.

Discussion

In this study, we analyzed 2758 consecutive POP operations over 12 years in comparable patients undergoing their first recurrent POP surgery in the anterior compartment. The strength of this material is its size and completeness. A weakness of non-randomized studies like ours is the risk of confounding, because prolapse operations span many levels of difficulty, ranging from simple day-surgery procedures to very advanced operations with unsolved reconstructive problems. We tried to compensate for this by strict selection, resulting in a group of highly comparable patients, thus avoiding confounding by special anatomical or technical/operative necessities and enabling us to evaluate surgeons' decision making. At the national level, the use of mesh for recurrent cystocele has been fairly stable, giving an illusion of a certain consensus: in 2006-2009 (which can be interpreted as the learning period), there was a stepwise increase in mesh use, followed by two stable rates of around 66% from 2009 to 2012 and around 47% from 2013 onwards. Among the Swedish counties, however, the use of mesh differed by a factor of 11 (range 8.6-95.3%) in our observation period.
The decision-making patterns in the individual counties remained the same from 2006 to 2017: counties with low use of mesh continued their low use, and counties with high use continued their high use, through all 12 years. The FDA warning led to a general decrease in mesh application, but the divergent pattern of mesh use prevailed. Evidence-based decision making is one of the core values of any health care organization, and the choice between different treatment options is assumed to be a rational process. Based on this principle, the greater the amount of valid scientific information physicians receive, the more structured their beliefs should become and the more convergence it is reasonable to expect in their decision-making patterns when treating comparable patients [15][16][17]. A decade ago, when decisions regarding mesh use were hampered by limited evidence, different surgeons drew different conclusions from the available information. This led to clear "communities of practice" at the county level regarding interpretation of the existing scientific information about the effectiveness of mesh. In the last decade, the amount of scientific information on the use of mesh in POP has increased enormously. A PubMed search for "(Pelvic organ prolapse AND (mesh OR implant))" in July 2018 yielded more than 2200 articles on the subject. A Cochrane review on transvaginal mesh compared with native tissue repair analyzed 37 randomized controlled trials of the intervention [18]. Since 2006, GynOp has distributed annual quality reports to all Swedish gynecological surgeons. The results are stratified by region, county, and hospital; consequently, the differences among counties are well known to the surgeons. Still, unaltered through 12 years, these groups have made mesh decisions in a clearly biased fashion, highly influenced by geographical factors, with unchanged disparity and with no measurable change toward consensus in the treatment of recurrent cystocele.
It is not within the scope of this article to argue whether the use of mesh should be low or high. However, when the application of mesh ranges from 8.6 to > 95% in the treatment of the same condition in comparable patients, the greater part of the underlying decisions must be suboptimal; the surgeons just cannot agree on which part. The fact that Swedish surgeons' decision-making patterns have remained unchanged, despite mounting information on the conditions under which mesh is useful or not, suggests that their decisions may be attributable to two factors: (1) the available scientific information may not qualify, or be interpreted, as evidence, and/or (2) surgeons may read scientific information selectively. In the case of POP surgery, where surgeons have operated on patients and drawn their own conclusions regarding the conditions under which mesh is useful or not, this may make them susceptible to favoring information that supports their own prior hypotheses. Whether one or a combination of both of the above factors is the underlying reason, the result is disturbing and unsettling. A large disparity in surgeons' decisions can be stimulating: it indicates that there is potential for improvement and can be seen as a challenge to communicate and learn from each other. However, Swedish surgeons have maintained their contradictory positions for more than a decade with unchanged disparity. This indicates that the necessary scientific communication and learning process has stopped, despite the abundance of publications and the steady supply of new types of mesh to replace withdrawn ones [19]. For surgeons, this shows an astonishing mismatch between learning needs and learning readiness. For patients, this represents 12 years of a geographical lottery concerning whether mesh is used or not.
The extraordinary disparity in mesh use between 15 OECD countries, shown in a 2012 survey, indicates that this is by no means a Swedish problem alone, but an international challenge [8].

[Table 1. Characteristics of patients operated on for recurrent cystocele in Sweden in 2006-2017, stratified by county. BMI = body mass index; 95% CI = 95% confidence interval. Size of prolapse: distance from the lowest point of the prolapse to the hymen; negative numbers indicate prolapse inside the introitus and positive numbers refer to prolapse outside the hymen.]

To invigorate the surgical learning process, it seems prudent to question the apparently biased ways we glean evidence from the available information. A sensible way forward would be to focus on increased communication across established consensus groups to enhance awareness of and curiosity about different solutions, increase willingness to learn from each other, and view differences as a possibility to learn, not a chance to dominate. In Sweden, this communication would need to take place between counties; in the OECD, between member countries.

Compliance with ethical standards

Financial disclaimer: The Swedish Association of Local Authorities and Regions supported the collection of data in the Swedish National Quality Register of Gynecological Surgery. The supporters had no role in the conduct of the study; management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
Effects of Airgun Sounds on Bowhead Whale Calling Rates: Evidence for Two Behavioral Thresholds

In proximity to seismic operations, bowhead whales (Balaena mysticetus) decrease their calling rates. Here, we investigate the transition from normal calling behavior to decreased calling and identify two threshold levels of received sound from airgun pulses at which calling behavior changes. Data were collected in August-October 2007-2010, during the westward autumn migration in the Alaskan Beaufort Sea. Up to 40 directional acoustic recorders (DASARs) were deployed at five sites offshore of the Alaskan North Slope. Using triangulation, whale calls localized within 2 km of each DASAR were identified and tallied every 10 minutes each season, so that the detected call rate could be interpreted as the actual call production rate. Moreover, airgun pulses were identified on each DASAR, analyzed, and a cumulative sound exposure level was computed for each 10-min period each season (CSEL10-min). A Poisson regression model was used to examine the relationship between the received CSEL10-min from airguns and the number of detected bowhead calls. Calling rates increased as soon as airgun pulses were detectable, compared to calling rates in the absence of airgun pulses. After the initial increase, calling rates leveled off at a received CSEL10-min of ~94 dB re 1 μPa²-s (the lower threshold). In contrast, once CSEL10-min exceeded ~127 dB re 1 μPa²-s (the upper threshold), whale calling rates began decreasing, and when CSEL10-min values were above ~160 dB re 1 μPa²-s, the whales were virtually silent.

Introduction

Marine mammals rely heavily on both hearing and producing sounds for prey detection, predator avoidance, mate selection, communication, navigation, and other important life-history functions. Worldwide increases in underwater sound levels of anthropogenic origin [1][2][3][4][5] have been changing ocean acoustic environments for decades.
Concern over how marine mammals are affected by and cope with these man-made sounds has motivated research on sound exposure thresholds that trigger biologically significant behavioral responses in various species. Some studies detected no behavioral changes in response to man-made sound [6,7]. Others have shown changes in calling behavior [8][9][10], migratory pathway [11], or diving behavior [12] in response to sound stimuli. Recently, changes in calling behavior in response to low levels of sound received from distant sound sources have also been demonstrated in blue and humpback whales [13,14]. Airgun pulses from seismic surveys are one of the main sounds of concern in the ocean environment because their low frequencies and high amplitudes allow them to travel over large distances when propagation conditions are favorable [15]. They are generally produced at 4-20 s intervals, over periods of days, weeks, or months, albeit not continuously. For example, seafloor recorders in the Atlantic have detected airgun pulses on more than 80% of days over periods of several months [16]. Decreasing summer ice coverage at high latitudes over the past decade has opened up certain areas of the Arctic to increased oil and gas exploration. Some baleen whales, such as bowhead whales, are long-lived and migrate over long distances. These combined factors mean that over their lifetimes they are likely to be subjected to many airgun pulses. There is current interest in assessing cumulative effects of anthropogenic sounds on marine mammals over long time periods [17], in a fashion similar to studies done on humans [18]. For such assessments, information on behavioral reactions to airgun sounds is of vital importance, as they are common sound sources in ocean basins today. For bowhead whales (Balaena mysticetus), the species of interest in this study, Blackwell et al.
[19] showed that calling rates decreased when whales were relatively close (median distance 41-45 km) to an operational airgun array. Median received airgun pulse levels (in terms of rms SPL) at those sites were at least 116 dB re 1 μPa. In contrast, whales that were relatively distant from the same operation (median distance >104 km), and received median airgun pulse levels below 108 dB re 1 μPa, did not change their calling rates. This raised the following question: at what received "dose" of sound did calling behavior change? The present study attempts to pinpoint thresholds of received sound levels from airgun pulses at which whale behavior changes. In 2007, 2008, and 2010, Shell Offshore Inc. (Shell) conducted vessel-based seismic surveys and shallow hazard surveys on or near lease holdings in the Beaufort Sea. As part of these exploration activities, a passive acoustic monitoring program was implemented, with the objective of addressing the interaction of bowhead whales with industrial activities during their fall migration. This study is based on the data collected during passive acoustic monitoring efforts spanning the open-water seasons of 2007, 2008, 2009, and 2010. Two behavioral thresholds were identified in the response of bowhead whales to airgun activity: (1) at low received levels of airgun sound, the animals' calling rates actually increased over baseline levels, but (2) when received levels exceeded a certain threshold, calling rates decreased rapidly.

Equipment

The equipment and field methods are the same as those presented in Blackwell et al. [19]. Recordings were made using Directional Autonomous Seafloor Acoustic Recorders (DASARs, model C, see [20]).
DASARs include an omnidirectional calibrated hydrophone (sensitivity -149 dB re 1 V/μPa at 100 Hz; noise floor, in dB re 1 μPa²/Hz: 62 dB @ 10 Hz, 48 dB @ 50 Hz, 44 dB @ 100 Hz, 37 dB @ 400 Hz), used for sound pressure measurements of the background sound field, including whale calls and airgun pulses. DASARs also include two particle motion sensors mounted orthogonally in the horizontal plane for sensing the direction to sounds of interest, such as whale calls or airgun pulses. A 1 kHz sampling rate was used for each of these three data channels. The recorders included a signal digitizer with 16-bit quantization. Samples were buffered for about 45 min, then written to an internal 60 GB hard drive. Allowing for antialiasing, the 1 kHz sampling rate gave a usable frequency range of 10-450 Hz and allowed for 116 days of continuous recording across the four years. The hydrophone recorder electronics in the DASARs overloaded (saturated and distorted) when the instantaneous sound pressure (0-to-peak) exceeded 151 dB re 1 μPa at 100 Hz. This occurred with some of the received airgun pulses discussed below.

Field Procedures

Each year during 2007-2010, up to 40 DASARs were deployed in the Beaufort Sea offshore of Alaska's North Slope, spread over an alongshore distance of ~280 km. DASARs were deployed in five groups ("sites"), each comprising 7-12 recorders, as shown in Fig 1. DASARs at each site were placed at the vertices of adjacent equilateral triangles with 7 km sides, and were labeled with letters (Fig 1, inset (b)). The southernmost DASARs were 15-33 km due north of the coast. Each DASAR was placed on the seafloor with a ground line of length 110 m connecting it to a small Danforth anchor. During deployment, GPS positions were obtained for the DASAR and its anchor. Deployments in all years took place between August 6 and 26. Table 1 summarizes deployment information for all four years of the study. Water depths at deployment locations were in the range 15-53 m.
The mean water depth of each seven-DASAR array (i.e., black triangles in Fig 1) increased from west to east, from about 21 m at site 1 to 48.1 m at site 5. DASAR deployment coordinates and water depths for all locations are given in S1 Table.

[Fig 1 caption: The five main seven-DASAR arrays (black triangles), labeled 1-5 from west to east, were deployed each year, conditions permitting (ice prevented some deployments in 2010, see Table 1). DASARs were labeled A-G from south to north, as shown in inset (b). Other locations were used only in some years. In 2008, five recorders were deployed south of site 1: DASAR locations 1H, 1I, 1J, 1K, and 1L (red triangles). In 2010, two recorders were deployed west of site 4: DASAR locations 4H and 4I (blue triangles). Inset (b) shows calibration locations with respect to the DASARs' locations at a single array. The same relative calibration locations were used at each site.]

After deployment, each DASAR's orientation on the seafloor with respect to true north was determined in order to estimate bearings to sound sources. In addition, each DASAR clock experiences a small and constant drift, which was corrected over the course of a lengthy deployment period in order to time-align the DASARs [20]. Therefore, immediately following deployments and preceding retrievals, calibration signals (source level ~150 dB re 1 μPa @ 1 m, frequency range 200-400 Hz) were transmitted at known GPS-determined times and locations: six (in 2007-2009) or three (in 2010) locations about ~4 km from each DASAR (see inset (b) in Fig 1; 2010 calibration locations shown with black dots). For more information on calibration methodology, see Greene et al. [20]. Retrieval was accomplished by grappling for the ground line, using the GPS positions obtained during deployment. DASARs were retrieved each year between September 28 and October 12 (Table 1), shortly before the Beaufort Sea begins freezing over.
This deployment period captured much of the bowhead whale autumn migration [21], but not the tail end, which continues into late October or early November, when boat-based operations are no longer possible due to the presence of sea ice.

Permitting

Passive acoustic recording of endangered bowhead whale calls does not typically require a federal permit, as it does not have the potential to "take" the animals as defined by the U.S. Marine Mammal Protection Act or the U.S. Endangered Species Act. The research presented here was, however, part of an approved monitoring program around activities conducted under incidental harassment authorizations (IHAs) issued by the U.S. National Marine Fisheries Service. The research was therefore subject to regulatory review and approval under those authorizations and under the terms of lease agreements under the Outer Continental Shelf Lands Act.

Bowhead Call Detection and Localization

After retrieval, the DASARs' housings were opened and their hard drives removed. Data were transferred to file servers and analyzed on workstations running custom MATLAB-based software. The data collected at each site in each year were analyzed with an automated call detection algorithm [22]. This analysis identified and localized bowhead calls and airgun pulses, as discussed further below. A subset of all data collected (100% in 2007, 12.5% in 2008, 19.3% in 2009, and 14.3% in 2010) was also analyzed manually by trained analysts, as described in Blackwell et al. [23,19]. These manually analyzed data served as a reference to which the automatically detected calls could be compared. The automated algorithm parameters (such as the neural network output threshold) were configured so that up to 20% of legitimate whale calls could be missed (recall of 0.8) in order to minimize false detection rates. An exact calculation of the false detection rate was not possible because the manually analyzed data had significant biases and omissions.
Nevertheless, the precision of the algorithm was estimated to be between 0.8 and 0.9 for a recall of 0.8. As a result, the automated detector always reported fewer bowhead calls than the manual analysts [22]. Further information on false detection rates is given in S1 File. In addition to the whale call analysis described above, bearings to calling whales were determined for all DASARs at each of the five sites. When two or more DASARs at a given site detected the same call, the location of the calling whale was estimated using triangulation (crossfixing), as described in Greene et al. [20]. The Huber robust location estimator [24] was used to compute the location of each call, as well as the associated 90% confidence ellipse, based on the intersection(s) of bearings from all DASARs at a given site that detected the call [20,19]. We define the call localization rate as the number of calls localized within a fixed period of time [19], distinct from both the number of calls detected (some of which are not localized) and the actual number of calls produced by whales (some of which are not detected).

Defining the Analysis Area

The analysis was designed to identify a relationship between received levels of airgun sounds and bowhead whale calling behavior. The following two factors are therefore of critical importance: (1) all whale calls included in the analysis should have approximately the same probability of inclusion (i.e., detection), and (2) our estimate of the received levels (RLs) of airgun pulses must be accurate. To satisfy these two analysis requirements, we restricted our analysis area to a set of 42 "analysis cells". Each analysis cell is a circle of radius 2 km centered on the mean location of each DASAR over the four years (variation in DASAR deployment locations was at most tens of meters between years). Examples of analysis cells are shown in Fig 2. Analysis cells in any given year were included if they represented a usable DASAR record.
(See Table 1.) The suitability of the 2 km size for the analysis cells, from the perspective of the first constraint (equal probability of detection), is presented in S2 File. With respect to the second constraint (accuracy of airgun pulse RLs), a main advantage of restricting the analyses to relatively small circular cells is that airgun pulse levels as measured at the DASAR central to each cell can be used as the "dose" of sound to which concurrent call detection rates are compared. Larger analysis areas would have required interpolation, statistical or numerical propagation modeling, or other corrections of received airgun pulse levels (see for example [19]) in areas away from the recorders. Meanwhile, the analysis cells are small enough that we are confident call localization rates at the DASARs are highly correlated with actual calling rates by the whales. Hereafter, when discussing calls detected inside the analysis cells, we use "calling rate" as a proxy for "call localization rate". Another advantage of restricting the analyses to these cells is that it removes any possibility that distant dispersed airgun signals (which can display similar bandwidth and time-frequency structure to bowhead whale calls) could be mistaken by the automated algorithm for bowhead whale calls. This fact becomes important when interpreting estimated calling rates of whales at low received signal levels.

Time Intervals

This analysis required defining the length of the time interval over which a dose (received level of sound from airgun pulses) and a response (whale calling rate) would be matched. The interval length needed to be long enough that many intervals included one or more calls. Conversely, the interval length needed to be short enough that potentially important variations in the received levels of airgun sounds over a time window would not be obscured. In addition, the interval length ought to be relevant for whale response to received levels of sound, about which little is known.
Based on these considerations, we chose a time interval of 10 min, and most of the results in this paper are presented for the 10-min interval. In addition, to test whether the results were sensitive to the choice of interval length, we conducted analyses for 5- and 20-min intervals. In each of the four years, the entire field season (from DASAR deployment to retrieval) was partitioned into non-overlapping periods with lengths of 5, 10, or 20 min, which always began on the hour. The number of whale calls localized within each analysis cell was tallied for each of the three time periods each year. Hereafter, a particular analysis cell at a particular time interval will be referred to as a "cell-time interval". Over the four years, the following numbers of cell-time intervals were tallied: 1,704,688 (5 min), 852,344 (10 min), and 426,172 (20 min).

Airgun Activity during 2007-2010

There were a number of seismic exploration activities using airgun arrays in the Beaufort Sea in 2007-2010. Some of these activities were within the DASAR arrays, or a few km away, while others were hundreds of km away. We define "nearby" activities as those occurring less than 50 km from the nearest DASAR. Nearby activities, shown in Fig 3, were carried out by Shell in 2007, 2008, and 2010, and by PGS (under contract to Pioneer / Eni) in 2008. Dates of operation, vessels involved, and airgun array sizes of these activities are given in Table 2. "Distant" activities were carried out by various operators and were generally located several hundred km from the DASAR arrays; these included, but were not limited to, the distant operations whose pulses are discussed in the next section.

Received Levels of Sounds from Airgun Pulses

To obtain a quantitative assessment of the number and received levels of airgun pulses detected at DASAR locations, we used an automated airgun pulse detector on every available DASAR record. This automated process utilized three stages [15,22].
In the first stage, a banded energy detector (constant false alarm rate) detected individual transient signals. The second stage used the regular inter-pulse intervals and azimuthal consistency that are characteristic of seismic exploration using airguns to discard pulses that were not produced by airguns. The third stage calculated the following six parameters (see [27,28,29] and Appendix A in [30]) for each detected pulse: (1) "peak pressure", i.e., the maximum of the received instantaneous sound pressures at the 1 ms sampling intervals (in dB re 1 μPa); (2) "duration", defined as the time interval between the arrival of 5% and 95% of the total pulse energy (in s); (3) "sound pressure level" (SPL, rms), averaged over the pulse duration (dB re 1 μPa); (4) "sound exposure level" (SEL), a measure related to the energy in the pulse, defined as the squared instantaneous sound pressure integrated over the pulse duration (dB re 1 μPa²-s); (5) "background level", the SPL measured over 0.5-1 s immediately preceding the pulse; and (6) "bearing" (in °) from the DASAR to the airgun pulse. All metrics were computed after passing each transient time series through a finite-impulse response (FIR) bandpass filter, the details of which are explained in S3 File. The SPL and SEL estimates were obtained for "signal only", i.e., after subtracting an estimate of the background noise level from the integrated measurement. Fig 4 shows the output from the airgun pulse detector for locations 4A and 4G in 2010, in which bearing is plotted as a function of time. Airgun pulses from two known distant operations (CGS / USGS and GXT, see previous section) can be readily identified. For the CGS / USGS operation, positions of the seismic ship were provided [26], allowing us to confirm the bearings obtained by the airgun pulse detector.
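The bearing triangulation (crossfixing) used for both whale calls and airgun pulses can be illustrated with a minimal two-station sketch. This is a plain least-squares intersection in flat x/y coordinates, not the Huber robust estimator of [24], and the station positions and bearings are invented for illustration:

```python
# Minimal crossfixing sketch: intersect bearing lines from several sensors.
# Bearings are in degrees clockwise from north (+y); coordinates are in km.
import numpy as np

def crossfix(positions, bearings_deg):
    """Least-squares intersection of bearing lines (simplified illustration)."""
    A, b = [], []
    for (x0, y0), brg in zip(positions, np.radians(bearings_deg)):
        dx, dy = np.sin(brg), np.cos(brg)      # unit vector along the bearing
        # A point (x, y) on the bearing line satisfies dy*x - dx*y = dy*x0 - dx*y0
        A.append([dy, -dx])
        b.append(dy * x0 - dx * y0)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol

dasars = [(0.0, 0.0), (7.0, 0.0)]              # two stations 7 km apart
print(crossfix(dasars, [45.0, 315.0]))         # source at ~(3.5, 3.5)
```

With more than two bearings the system is overdetermined, which is where a robust estimator such as Huber's pays off by down-weighting outlier bearings.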
Isolated detections in Fig 4, for example in the gray highlighted oval for DASAR 4A, are likely false detections by the detection algorithm and do not correspond to actual airgun pulses. Such false detections were more prevalent in the shallower (southernmost) parts of each array, likely due to the more complex acoustic propagation environment. In addition, at least twice as many airgun pulses were detected by the deeper DASARs at the northern ends of the arrays, because the higher modal propagation cut-off frequencies and relatively larger bottom attenuation at shallower water depths led to lower received levels at the shallowest DASARs. For example, for the shallower DASAR 4A (Fig 4, bottom) ~52,500 airgun pulses were detected in 2010, of which ~0.3% were deemed to be false detections. Concurrently, >128,500 airgun pulses were detected at the deeper DASAR 4G, with 0.07% isolated (noise) detections. Outputs from the airgun pulse detector, such as the ones shown in Fig 4, were examined for each DASAR, each year, and compared to the locations of known operations (Table 2). For site 1 in 2007 and 2010, we filtered the data by removing detections that did not occur at times when airgun arrays were firing at known operations or that did not originate from the general direction (±5° of calculated bearing) of these operations. All other sites yielded consistently high-quality airgun detection estimates. For each 5, 10, and 20-min period at each functional DASAR in each of the four years, a cumulative sound exposure level (CSEL_t) was calculated by summing the sound exposure levels (SELs) of all the airgun pulses detected during the time interval of interest. CSEL_t, where t is 5, 10, or 20 min, in dB re 1 μPa²-s, was calculated as follows:

CSEL_t = 10 log10 [ Σ_(i=1..n) 10^(SEL_i / 10) ],

where SEL_i represents the i-th of n pulses detected in the interval t [30]. With the exceptions for false detections explained above for site 1 in 2007 and 2010, all pulses detected by the airgun pulse detector were included.
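In other words, pulse energies are summed on a linear scale and the total is converted back to decibels. A small sketch of the computation (the function name is ours, not from the paper):

```python
# Cumulative SEL: convert each pulse SEL from dB to linear energy, sum,
# and convert back to dB re 1 uPa^2-s.
import numpy as np

def csel(sel_db):
    sel_db = np.asarray(sel_db, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (sel_db / 10.0)))

# Two equal pulses add 10*log10(2) ~ 3 dB:
print(csel([120.0, 120.0]))   # ~123.01 dB re 1 uPa^2-s
```

This is why CSEL reflects both the number and the amplitude of pulses: fifty pulses at a given SEL yield a higher CSEL than one pulse at the same SEL, whereas a mean or median SPL would not distinguish the two cases.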
If no airgun pulses were detected during a particular period, then a missing value was assigned to CSEL_t (i.e., CSEL_t was undefined for that period, and that cell-time interval was not used in most of the analyses). Note that SEL levels were unweighted in this study: because of the frequency characteristics of the recorders and of the sound source, M-weighting, which is appropriate for low-frequency cetaceans (as defined in [30]), would not have made a meaningful difference in received SEL values [30,31]. The cumulative sound exposure metric was chosen in this study because it allowed us to calculate received sound over time as a "dose" that takes into account both the number of pulses received by the whale and the amplitude of those pulses. A single mean or median sound pressure level extracted from a distribution of such measurements over a 10-min period, for example, could yield the same value for a 10-min period containing one pulse or fifty pulses, and is therefore not a good measure of the total (integrated) airgun signal energy to which a bowhead would have been subjected. The omnidirectional sensor used in DASARs overloads when received levels exceed 151 dB re 1 μPa (0-to-peak, at 100 Hz). When the seismic ship was less than 20-30 km from the DASARs, it was not unusual for close to 100% of detected airgun pulses to be overloaded. Nevertheless, the computed received levels for overloaded pulses still provide an important piece of information in the framework of this study: they represent a minimum level for each received airgun pulse. (Because overloading is defined based on a peak pressure value, SELs for overloaded pulses varied by >20 dB.) In addition, based on previous investigations [19], we expected the thresholds for behavioral change to be well below the levels at which the DASARs overloaded. Therefore, instead of dismissing these pulses, they were flagged in the records.
When CSEL t was calculated for each time interval, the percentage of pulses that were overloaded was also computed.

Call Response Parameterization and Poisson Regression Model

The fundamental goal of this analysis was to determine the relationship between the received level (RL) of sound from airgun pulses (as measured by CSEL t , see above) and whale calling rates. To this end, two statistical analyses were conducted. The most straightforward was a set of simple t-test comparisons to determine whether the mean calling rates during times with and without airgun pulses were statistically different. This approach also gave a first-order perspective on the type of model for which to aim. A more sophisticated approach involved fitting a non-linear Poisson regression model to our call rate vs. RL dataset, with confidence intervals for model parameters estimated via block bootstrapping. Because the t-tests rely on concepts introduced in the Poisson regression modeling, they are presented after the modeling, in the section Comparing Plateau and No-Seismic Calling Rates. To get an idea of how to model call production rate in terms of airgun CSEL t , the mean calling rate per 10-min period was plotted as a function of received CSEL 10-min for the entire dataset, including 10-min periods during which no airgun pulses were detected (Fig 5). (As discussed below, a sensitivity analysis of the 5, 10, and 20-min CSEL integration times found that 10 min was an appropriate integration interval for the analysis that follows.) Fig 5 shows that the mean calling rate for times with no detected airgun pulses (the "no-seismic" category) was about 0.1 calls per cell-time interval. Note that this category has about twice as many samples as all other CSEL categories combined.
To ensure that the no-seismic calling rate was computed from samples spread over the entire season, the number of no-seismic cell-time intervals was tallied every day in all four years and compared to the total number of cell-time intervals (Fig 6). This plot confirms that no-seismic cell-time intervals are not biased toward certain phases of the migration season. For example, there are fewer calls in mid- to late August, when the fall migration begins, but a peak occurs in call detection rates in mid- to late September. Thus, one needs to confirm that the no-seismic category samples all time periods throughout the seasons. If it did not, one could argue that any differences seen between no-seismic call rates and other cell-time categories arise from seasonal effects rather than from received airgun sounds.

As received CSEL 10-min increased from barely detectable to high-amplitude airgun pulses, calling rates initially increased, then stabilized and peaked, and then decreased abruptly towards 0 as received CSEL 10-min increased further. The magnitudes of these responses exceeded the 95% confidence limits and were thereby judged to reflect a real effect. Therefore, we sought a model to capture this fundamental plateau structure in call responses. Here, we refer to the levels at the transitions on either side of the plateau as "thresholds". We define the lower threshold (Δ 1 ) as the point at which calling rates reach the plateau, and the higher threshold (Δ 2 ) as the point at which calling rates begin to drop away from the plateau. The regression model estimated the two thresholds and their variability, with the response variable defined as the number of calls located within a particular analysis cell during a particular time interval t (of duration 5, 10, or 20 min). However, this response variable also depends on factors other than accumulated sound from airgun pulses, such as water depth. Therefore, it was necessary to estimate the thresholds while simultaneously accounting for these other effects.
Preliminary analyses showed that most of these other factors are correlated with the DASAR site. For example, the mean depth of DASARs at a site is strongly associated with site number. These factors could therefore be incorporated into the regression model by simply including site number as a categorical factor in the analysis. We chose to disregard any cell-time intervals with a CSEL t value below 80 dB re 1 μPa 2 -s. Blackwell et al. [32] computed whole-season minimum percentile background levels for two DASARs at each site (generally locations A and G) in each of the four years 2007-2010. The median level of the 5 th percentile at all 40 DASARs (5 sites x 2 DASAR locations x 4 years) was 80 dB re 1 μPa (rms), with ~60% of the median levels between 75 and 85 dB re 1 μPa. Therefore, a cell-time interval with a CSEL t below 80 dB could only occur at the quietest times of the season (less than 5% of the time) with a few barely detectable airgun pulses. For example, two airgun pulses with received SELs of 78 dB re 1 μPa 2 -s yield a CSEL of 81 dB re 1 μPa 2 -s, which exceeds the 80 dB cut-off. Estimates of the thresholds were derived from a non-linear Poisson regression model relating the number of calls in a particular cell-time interval to site (represented as a categorical variable) and CSEL t (represented by two threshold functions). Poisson regressions are a specific class of generalized linear models, a well-established branch of statistical regression theory.
The non-linear Poisson regression took the form of Eq (2), where y ij is the number of calls located in cell i during time interval j, E[y ij ] denotes the expected value (or average) of y ij , and s kij are indicator functions for observations from site k during interval j at cell i (i.e., s kij = 1 if y ij is a count from an analysis cell at site k, s kij = 0 otherwise). Indicator functions (s kij ) for site 3 were absent because it was chosen as the reference site; i.e., the intercept, β 0 , represented the site 3 effect and the coefficients β 1 to β 4 represented the differential effects of the other four sites relative to site 3. x ij is the CSEL t for analysis cell i during interval j, H a (x ij , Δ a ) is a logistic approximation to the Heaviside step function [33,34], and the Greek symbols β 0 , . . ., β 6 , Δ 1 , and Δ 2 are all parameters to be estimated. The logistic approximations are parameterized in Eqs (3) and (4) as

H a (x ij , Δ a ) = [1 + exp(−κ a (x ij − Δ a ))]^(−1), for a = 1, 2.

In Eq (2), β 5 and β 6 are the slopes towards and from the two thresholds, and Δ 1 and Δ 2 are the values of the thresholds in CSEL t . In Eqs (3) and (4), κ 1 and κ 2 represent the "knees", which control the rate of change in the immediate vicinity of the thresholds; these parameters were not estimated but rather were fixed, as they had no discernible effect on the fundamental shape of the function or on estimation of the other parameters. Our primary interests were in the threshold parameters Δ 1 and Δ 2 , the former being the threshold value at which calling rates reach the plateau, and the latter being the threshold value at which calling rates begin to decrease. We conducted bootstrapping to estimate parameter variances without making explicit distributional assumptions, while also accounting for potential serial correlation in whale call counts. Ninety-five percent confidence intervals for the regression model parameters were computed by block bootstrapping [35]. For more details about these methods, see S4 File.
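The plateau shape that this model describes can be illustrated with a small sketch. The parameterization and all coefficient values below (b0, b5, b6, and the knees) are our own illustrative assumptions, not the fitted Eq (2); only the thresholds d1 = 94.5 and d2 = 127.4 dB are taken from the 10-min results reported later:

```python
import math

def heaviside_logistic(x, delta, kappa):
    """Smooth (logistic) approximation to a Heaviside step at `delta`;
    `kappa` is the "knee" controlling the sharpness of the transition."""
    return 1.0 / (1.0 + math.exp(-kappa * (x - delta)))

def mean_calls(csel, b0=-2.3, b5=0.8, b6=-4.0,
               d1=94.5, d2=127.4, k1=0.5, k2=0.5):
    """Expected calls per cell-time interval at a single site:
    a logistic rise toward the lower threshold d1 and a logistic fall
    beyond the upper threshold d2, on the log link of a Poisson model.
    All coefficients here are illustrative, not fitted values."""
    log_mean = (b0
                + b5 * heaviside_logistic(csel, d1, k1)
                + b6 * heaviside_logistic(csel, d2, k2))
    return math.exp(log_mean)

# Below the lower threshold, on the plateau, and above the upper threshold:
for level in (85.0, 110.0, 150.0):
    print(level, round(mean_calls(level), 3))
```

With these illustrative coefficients, the predicted rate roughly doubles from the baseline to the plateau and falls below 0.02 calls per interval well above the upper threshold, mirroring the qualitative behavior described in the text.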
Comparing Plateau and No-Seismic Calling Rates

A simple test was used to determine whether there was a significant difference in calling rates between no-seismic periods (leftmost bar in Fig 5) and the plateau region of the distribution (the region between the two estimates for the thresholds Δ 1 and Δ 2 using CSEL 10-min ). Mean calling rates (calls / 10 min) were calculated for each site, separately for the no-seismic and plateau periods. This reduced all observations (851,456 cell-time intervals, i.e., the sum of all the samples shown in Fig 5) to 10 mean values, two for each of the five sites. The site effect is substantial, that is, the calling rate at the five sites varies due to natural factors, such as the location and spread of the migration corridor or water depth. Therefore, it makes sense to treat site as a blocking factor, and since there are two observations per site, a natural test for the difference in calling rates is the paired t-test. This test is a one-sample test for whether the mean difference (i.e., the mean of the differences between no-seismic and plateau mean calling rates) differs from 0. However, rather than the absolute differences, relative differences were tested, because overall calling rates are quite different between sites. The relative difference at the i th site was calculated as d i = (x 2,i − x 1,i ) / x 1,i , where x 2,i was the mean plateau calling rate (between the two thresholds) and x 1,i was the mean no-seismic calling rate. Because we had prior knowledge that the plateau / seismic rate exceeds the no-seismic rate, a one-sided test was performed to test whether the plateau / seismic rate is indeed significantly greater than the no-seismic rate. The test statistic is t = x d / (s d / √n), where x d is the mean difference, i.e., the mean of the relative differences d i over the n = 5 sites, and s d is their standard deviation.

Whale Call and Airgun Pulse Counts

Over the course of the study, 975,657 bowhead whale calls were localized at the five sites, as shown in section A of Table 3. Of these, 106,324 (~11%) were located inside the 42 analysis cells (Table 3, section B).
The number of calls used in model estimation was 49,297, or about 5% of the total number of localized calls (Table 3, section C). This smaller number includes only calls from cell-time intervals with concurrent airgun pulse detections. Over the course of the study a minimum of ~628,000 separate airgun pulses were detected, representing nearly 11 million detections at all DASARs combined. A summary of the numbers of airgun pulses detected and statistics of the derived pulse parameters are shown in S2 Table. The percentage of cell-time intervals with detectable airgun pulses at each site was quite variable and is shown in Fig 7 (using 10-min intervals). This percentage evidently depended on the location of the airgun array(s), but also on DASAR deployment depth. Since shallow waveguides heavily attenuate low-frequency signals, site 1, the shallowest site, always detected the fewest airgun pulses (as well as the fewest whale calls). The year 2008 had the most seismic exploration within the area of the DASAR arrays and, not surprisingly, the highest percentages of cell-time intervals with detected airgun pulses (Fig 7). Site 5 was the farthest from Shell's "nearby" airgun operations in 2007, 2008, and 2010, but its location in the deepest water (on average 48 m) made it well-suited for detecting distant airgun pulses, including those from the GSC / USGS operation to the north and operations to the east in the Canadian Beaufort. In 2009 and 2010, the nearest seismic exploration was hundreds of km from site 5, yet 45% and 68%, respectively, of 10-min intervals at that site contained airgun pulses from distant operators (Fig 7).

Estimating the Thresholds

The lower threshold Δ 1 , at which bowhead whale calling rates reach a plateau, was estimated at 92.0 dB, 94.5 dB, and 97.1 dB re 1 μPa 2 -s for time intervals of 5 min, 10 min, and 20 min, respectively (Table 4). For each doubling of the time interval the point estimate increased by ~2.5 dB.
The 95% confidence intervals spanned ~35 dB, 13 dB, and 17 dB for 5, 10, and 20-min periods, respectively. The fitted Poisson regression models are summarized in Table 4 (thresholds and corresponding confidence intervals) and S3 Table (all parameters) for the three time intervals. The upper threshold Δ 2 , at which calling rates begin to decline, was estimated at 124.6 dB, 127.4 dB, and 130.5 dB re 1 μPa 2 -s for time intervals of 5 min, 10 min, and 20 min, respectively (Table 4). As expected, and similarly to the lower threshold, each doubling in the length of the time interval led to an increase of ~3 dB in the CSEL t value. For each of the three time intervals, the 95% confidence intervals spanned ~7 dB and overlapped each other; they were therefore smaller and less variable than those for the lower threshold. Predicted calling rates were near 0 (less than 0.02 calls per cell-time interval) when CSEL t levels exceeded 160 dB. Between the two thresholds, i.e., over a range of received CSELs of about 33 dB re 1 μPa 2 -s for all three time intervals (Table 4), calling rates remained high. In the more detailed look at the results presented below, we focus on the analysis done with 10-min intervals. Data from all years and all sites were combined in these analyses, but there was a site effect in that the predicted mean number of calls per cell-time interval at the plateau differed between sites. This is demonstrated in Fig 8, which shows the final model, the threshold estimates, and the 95% confidence intervals for the 10-min time interval. Fig 8 also shows that for cell-time intervals with detected airgun pulses, site 2 had the highest calling rate, followed by sites 5, 4, 3, and 1. On average, the peak calling rate at site 2 was 5-6 times that at site 1.
Each dot on these plots represents an analysis cell at a particular 10-min time window in the fitting data set (i.e., a cell-time interval), and is shown as a function of the received CSEL 10-min and the number of localized calls detected during that time window. For purposes of display only, vertical coordinates of points from 0 to 9 have been "jittered" to show overlapping points. With an increasing percentage of overloaded pulses (per 10-min interval), both colors transition gradually to gray. Not surprisingly, sites 2 and 5, farthest from the seismic exploration, had very few cell-time intervals with overloaded pulses (actual percentages for overloading are given in S2 Table). The numbers of cell-time intervals with 0 (orange) versus 1 or more (purple) call localizations are also given in each plot. For site 1, there were 35.5 times more cell-time intervals without calls than with calls (20,875 / 588 = 35.5). This ratio was intermediate for sites 3 and 4 (15.4 and 11.4) and lowest for sites 2 and 5 (7.8 and 10.6).

Comparing Plateau and No-Seismic Calling Rates

Results from the t-test comparison of calling rates between the plateau / seismic periods and times with no detected airgun pulses (no-seismic calling rate) are shown in Table 5. In the analyses for all years combined and for 2007-2008 (sections A and B in Table 5), the plateau calling rates were consistently greater than the no-seismic calling rates at all five sites, and the differences therefore all have the same sign. Both comparisons were statistically significant (P<0.05), showing that the whales' heightened calling rate in the presence of low levels of received airgun pulses was statistically different from calling rates in the absence of seismic operations. In contrast, when the comparison was limited to 2009 and 2010 (section C in Table 5), the two years with distant seismic operations or low-level shallow hazard surveys using a single airgun (see Table 2), the increase in calling was non-significant (P = 0.183).
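The paired one-sided t-test on relative differences described in the Methods can be sketched as follows; the per-site rates below are invented for illustration and are not the values in Table 5:

```python
import math

def one_sided_paired_t(plateau_rates, noseismic_rates):
    """Paired t statistic on relative differences
    d_i = (x2_i - x1_i) / x1_i, with one pair of mean calling rates
    (plateau x2, no-seismic x1) per site."""
    d = [(x2 - x1) / x1 for x2, x1 in zip(plateau_rates, noseismic_rates)]
    n = len(d)
    mean_d = sum(d) / n
    # Sample variance of the relative differences (n - 1 denominator)
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1  # t statistic and degrees of freedom

# Illustrative (invented) mean rates, one pair per site for five sites:
t, df = one_sided_paired_t([0.10, 0.30, 0.28, 0.30, 0.32],
                           [0.05, 0.16, 0.14, 0.15, 0.16])
print(round(t, 1), df)
```

A one-sided P value would then be obtained from the t distribution with n − 1 = 4 degrees of freedom, testing whether the plateau rate significantly exceeds the no-seismic rate.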
CSEL Thresholds Relative to Distances from the Seismic Ship

It is useful to put these CSEL thresholds in context: for example, at what distance from an airgun array will the upper threshold of 127.4 dB re 1 μPa 2 -s be reached, leading whales to start calling less? The M/V Gilavar was the seismic ship involved in Shell's seismic exploration in 2007 and 2008 (see Fig 3 and Table 2). This vessel used a 24-airgun, 3147 in 3 array during full operations, a single 30 in 3 mitigation airgun, and triggered the airguns every 10 s. In both 2007 and 2008, sound source characterizations (SSCs) were performed on the Gilavar's airgun arrays [36,31]. Both SSCs were performed in our study area, near sites 3 and 4 in 2007, and near site 1 in 2008. The empirical data collected were used to estimate the distances from the seismic ship at which various changes in whale calling behavior are predicted to take place. If we assume 60 airgun pulses per 10-min period and, for the sake of simplicity, a constant received pulse SEL, the resulting CSEL 10-min value is 17.8 dB above the pulse SEL value, e.g., for a pulse SEL of 100 dB: 10 log 10 (60 × 10^(100/10)) = 117.8 dB re 1 μPa 2 -s. In other words, a CSEL 10-min value of ~127.4 dB re 1 μPa 2 -s, the upper threshold at which calling rates start to drop, corresponds to a single-pulse SEL of 109.6 dB re 1 μPa 2 -s. (Note that this is a simplified example: a CSEL 10-min level of 127.4 dB could also be achieved with fewer pulses at higher SELs or more pulses at lower SELs.) Table 6 shows that a received level of ~110 dB SEL was reached about 50 km from the Gilavar using its full array, based on the 2008 SSC [31]. The other CSEL 10-min value shown in Table 6 is 160 dB re 1 μPa 2 -s, above which very few calls were detected in our analysis (see Fig 9).
Based on the 2008 SSC, the predicted distance at which this would occur was 10-20 km from the seismic ship (Table 6). The lower threshold (94.5 dB), at which calling rates in the presence of airgun pulses reached the plateau (highest) level, is not included in Table 6. Cell-time intervals with CSEL 10-min values below 94.5 dB generally included only a few airgun pulses (3 or fewer pulses for ~30%, and 6 or fewer pulses for ~50%, of cell-time intervals) because they occurred far from seismic operations. The lower threshold is therefore not amenable to a calculation like the one performed for the upper threshold in the example above. Nevertheless, the fact that the lower threshold is reached with so few detected airgun pulses means that bowhead whale calling rates will rapidly double, compared to non-seismic calling rates (see Fig 5), once airgun pulses are detectable. The examples shown in Table 6, based on four different SSCs (two array configurations in two different years), are only a few of many possible scenarios. The numbers given in Table 6 should, therefore, be used in a general way, as rough estimates of the radius of a circle around a seismic ship within which changes in calling behavior are likely to take place. Table 6 shows that when the Gilavar used its full array, few or no whale calls would be expected within ~10-40 km of the ship, and whale calling rates would start decreasing 50 km and more from the seismic ship. Based on the data collected during the 2007 SSC, this distance is likely over 80 km, but measurements were not made to that range, so we can only speculate based on the shape of the fitted data (see Fig 3.19 in [36]). When the Gilavar used the mitigation airgun, source levels were much lower, so the expected ranges for changes in calling behavior are much smaller: few or no whale calls would likely be detected within 2-6 km of the ship, and a decrease in calling rates would begin 20-40 km from the ship.
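The back-of-envelope conversion used above, between a constant per-pulse SEL and the CSEL accumulated over a 10-min interval, amounts to adding 10 log10 of the number of pulses (the function names are ours):

```python
import math

def csel_constant_sel(n_pulses, pulse_sel_db):
    """CSEL (dB re 1 uPa^2-s) for n identical pulses: summing n equal
    energies raises the level by 10*log10(n)."""
    return pulse_sel_db + 10.0 * math.log10(n_pulses)

def pulse_sel_for_csel(csel_db, n_pulses):
    """Inverse: the constant per-pulse SEL needed to reach a given CSEL."""
    return csel_db - 10.0 * math.log10(n_pulses)

# 60 pulses (one every 10 s for 10 min) at 100 dB SEL each:
print(round(csel_constant_sel(60, 100.0), 1))   # 117.8
# Per-pulse SEL corresponding to the 127.4 dB upper threshold:
print(round(pulse_sel_for_csel(127.4, 60), 1))  # 109.6
```

As the text cautions, this holds only under the simplifying assumption of a constant pulse SEL; the same CSEL can be reached with fewer, louder pulses or more, quieter ones.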
An alternative way of expressing the upper threshold is as a received dose of airgun sound per minute. A CSEL 1-min exceeding ~118 dB re 1 μPa 2 -s corresponds to a CSEL 5-min , CSEL 10-min , or CSEL 20-min exceeding the estimated upper threshold for those three time intervals. In other words, if the received CSEL 1-min at the whales is above 118 dB, calling rates will begin decreasing. S5 File provides some insight into how these CSEL thresholds translate into SPLs, a more commonly used metric.

Table 6. Distances from the M/V Gilavar at which various threshold levels are reached. This table uses empirical data collected during sound source characterizations (SSCs) in the same study area in 2007 and 2008 [36,31]. Note that regressions for sound exposure levels were not included in the reports, so the distances in this table are estimated visually from data plotted in the listed figures. CSEL = cumulative sound exposure level, SEL = sound exposure level.

Discussion

This analysis has shown two measurable behavioral thresholds in bowhead whales in response to sounds from airgun pulses. At first, as soon as airgun pulses were detectable above ambient levels, bowhead whale calling rates increased over no-seismic calling rates. Calling rates increased with received cumulative sound exposure level (CSEL, in units dB re 1 μPa 2 -s and summed over 10 min) until they were about twice the no-seismic rate. Calling rates remained high over a ~33 dB range of received CSELs (Fig 8). Then, at a received CSEL of ~127 dB (equivalently expressed as ~118 dB re 1 μPa 2 -s when summed over 1 minute), calling rates began to decrease, and were near zero at received CSELs of about 160 dB. The use of alternative time intervals of 5 min and 20 min did not change these results, other than by shifting the thresholds by ~3 dB, as one would expect with a doubling or halving of the integration time in the calculation of a cumulative sound exposure level.
This shows that the results are generally not sensitive to the choice of the time interval, although the variability of the lower threshold increases substantially when the shortest time interval of 5 min is employed. Note that our analysis required an integration time (i.e., 5, 10, or 20 min), but from the perspective of the whale those times are not of any particular importance; they are only a way for us to quantify the dose of airgun sound received by the animal. In most situations, this dose tends to be fairly constant over time, because airgun arrays are normally used for many hours while the seismic ship (and the whales) move relatively slowly. The exception is close to the seismic ship, where received levels will change rapidly over time.

Masking is always a potential concern in passive acoustic studies. It is particularly important that the anthropogenic sound being studied (in this case, airgun pulses) not mask the response of interest (in this case, whale calls), since it would then appear as though animals stop calling when, in fact, calls merely could no longer be detected. Guerra et al. [37] showed that reverberation from seismic surveys can substantially increase background levels, particularly within a few km of the seismic ship (e.g., 10-25 dB re 1 μPa within 15 km of a 3147 in 3 array). In this study, masking of whale calls by airgun pulse reverberation is not believed to be an issue because the received airgun pulse levels at which behavioral changes were detected were low, corresponding to distances from the seismic vessel of tens of km. In addition, by restricting the analysis area for call detection to 2-km circles around each DASAR, whale calls used in the analyses generally had high SNRs (signal-to-noise ratios). At the distances where the thresholds were detected, the received levels of whale calls were usually higher than the received levels of airgun pulses.
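The ~3 dB shift in the thresholds per doubling of the integration time, noted above, follows directly from the CSEL definition when the pulse rate is steady; a quick numerical check (assuming identical pulses at a constant rate):

```python
import math

def scale_csel(csel_1min_db, minutes):
    """Scale a 1-min CSEL to a longer integration window, assuming a
    steady rate of identical pulses (summed energy grows linearly
    with time, i.e., +10*log10(minutes) in dB)."""
    return csel_1min_db + 10.0 * math.log10(minutes)

# A 1-min CSEL of 118 dB meets or exceeds the estimated upper threshold
# for each of the three integration times used in the analysis:
upper_thresholds = {5: 124.6, 10: 127.4, 20: 130.5}
for t_min, threshold in upper_thresholds.items():
    scaled = scale_csel(118.0, t_min)
    print(t_min, round(scaled, 1), scaled >= threshold)
```

This confirms the equivalence stated in the text: a per-minute dose of ~118 dB re 1 μPa 2 -s reaches the upper threshold at all three integration times.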
Comparison of Upper Threshold with Results from Other Whale Species

A review of studies investigating the effects of anthropogenic sounds on the vocal behavior of large whales [8,9,13,14,38,39] quickly revealed that comparisons with the present study are difficult to make. These other studies addressed a variety of species, used sound sources with differing acoustic parameters (e.g., amplitude and frequency), and the subject animals likely experienced different contexts (e.g., feeding versus migrating, see [40]). Not surprisingly, the changes noted in calling behavior did not all follow the same trend. We therefore limit our discussion to the effects of sounds from airguns on the vocal behavior of large whales. McDonald et al. [41] found that blue whales stopped calling when one or more animals were about 10 km from an airgun array, where estimated received sound levels at the whale(s) were 143 dB re 1 μPa peak-to-peak (10-60 Hz band). This corresponds roughly to a pulse sound pressure level 15-20 dB lower [42], or ~123-128 dB re 1 μPa. The corresponding sound exposure level depends on the pulse length at 10 km in the McDonald et al. [41] experiment, which is unknown to us. At a distance of 10 km the pulse length could be close to 1 s, meaning that the single-pulse SEL would also be ~123-128 dB re 1 μPa 2 -s. Nevertheless, even if the pulse duration were much shorter, say 0.2 s, the corresponding single-pulse SEL (from S5 File, 116-121 dB re 1 μPa 2 -s) would still fall in the range associated with some degree of repressed calling in the bowhead data set. In addition, the narrow bandwidth in the McDonald et al. [41] study suggests their estimated level is a minimum. The McDonald et al. [41] findings do not, therefore, contradict our own. For sperm whales foraging in the Gulf of Mexico, Miller et al. [43] found a 19% drop in buzz rates (a proxy for foraging attempts) when a seismic ship was operating nearby, but the effect was not significant, likely because of a small sample size.
A study by Bowles et al. [44] in the southern Indian Ocean suggested that sperm whales may have been silenced by a distant seismic operation. In contrast, in an earlier Gulf of Mexico study [45], no observable avoidance by the whales or changes in vocal patterns during feeding dives were observed when the estimated received (single pulse) sound exposure level at the whales was as high as 124 dB re 1 μPa 2 -s. The seismic ship in the Madsen et al. [45] study used a 10-s repetition rate. Thus, 10 min of airgun pulses with a RL of 124 dB SEL would result in a CSEL 10-min value of 141.8 dB re 1 μPa 2 -s, well above our threshold of ~127 dB. In summary, studies of the effects of airgun pulses on the calling of blue whales and sperm whales have shown either a drop in vocalization rates or no detectable effect. Nevertheless, small sample sizes, dissimilar acoustic units, and differing methodologies make comparisons with the present study difficult.

The Existence of Two Behavioral Thresholds Provides Insight Into Past Bowhead Studies

One of the surprising results of this study is the existence of two behavioral thresholds in bowhead whales' acoustic responses to seismic activity. To our knowledge, this study is the first to suggest that calling behavior can change in two different ways in response to the same anthropogenic activity, depending on the received levels involved. In retrospect, however, the presence of two thresholds provides insight into some previously perplexing research findings. In the early stages of this study we were not aware that calling rates first increase before they start decreasing; we assumed the "plateau" calling rate was the "normal" calling rate. It was, therefore, puzzling that there seemed to be a positive correlation between the number of airgun pulses detected each year and the number of bowhead calls detected.
Even after standardizing across the same dates each year, the highest and lowest whale call counts were obtained in the years with the most and the fewest airgun pulse detections, respectively, with intermediate values for the two intermediate years. Based on the results presented in this paper, this observation can likely be explained by the increased calling which occurs "away" from the general area of the seismic ship, where received levels of airgun pulses result in CSEL 10-min values between the two thresholds. In other words, even though the nearby presence of the seismic ship leads to little or no calling by the whales, the increased calling at more distant sites largely makes up for it. The existence of a double threshold also sheds new light on previous studies with ambiguous results. For example, Richardson et al. [11] showed that bowhead whales exposed to distant seismic pulses exhibited a slight (non-significant) drop in average calling rates. Greene et al. [46] found that call detection rates differed significantly at some locations as a function of whether airguns were detectable or not, but the changes were not always consistent. In view of the results presented here, with calling rates first increasing and then dropping to near-zero, a study could readily obtain a non-significant effect if calling rates for whales receiving a wide range of airgun pulse levels were pooled. In other studies, results that could not be explained by the authors at the time now make perfect sense. For example, Greene et al. [47] compared bowhead whale call detection rates at several recorders as a function of distance from the airgun arrays. They stated: "At the recorder closest to the airguns, call detection rates were lower (P<0.02) at times with pulses than without them. At the recorder farthest [from the airguns], call detection rates were higher when airgun pulses were evident [...] than without pulses." In an analysis preceding this one, Blackwell et al.
[19] showed that for bowhead whales relatively near an airgun source (median distance 41-45 km), calling rates dropped when the array became operational. This result is in agreement with the findings of this analysis: at a median distance of ~40 km, RLs at the whales would yield a CSEL 10-min > ~127 dB re 1 μPa 2 -s and calling by the whales would be repressed (see Table 6). The Blackwell et al. [19] analysis also showed that for distant whales (median distance >104 km) there was no change in average calling rates when the airguns became operational. Sites 1, 2, and 5 were all lumped into the "far" category in that analysis, but examination of Fig 5 in Blackwell et al. [19] shows that when airguns were turned on, calling rates at site 5 fell, whereas calling rates at sites 1 and 2 increased. On average, therefore, they did not change, but it is likely that the "far" category included a wide range of CSELs, leading to disparate calling rates. The BACI-type analysis performed in the Blackwell et al. [19] study did not have the resolution to detect the subtle shifts in calling rates shown by the present analysis.

Interpretation of Between-Site Differences

Bowhead whales are long-lived animals [48] that have been exposed to airgun sounds in the Alaskan Beaufort since the late 1960s [49]. There was no expectation of a seasonal habituation, nor any reason to believe the whales' reactions should differ among the sites. Therefore, data from all years and all sites were pooled, while retaining a site effect in the model, because preliminary analyses indicated differences in calling rates among the sites (see Eq (2), Fig 8, and the paragraph below). No-seismic calling rates varied by factors of three to seven between sites. For example, in 2007-2008 the calling rates at sites 3, 4, and 5 (0.14-0.16 calls / cell-time interval) were 2.6-3 times the calling rate at site 1 (0.05 calls / cell-time interval, section C in Table 5).
(In 2009-2010 calling rates at site 1 were even lower (0.02 calls / cell-time interval), because immediately following its deployment in 2010, site 1 was covered in dense pack ice for much of the season.) The fact that the migration corridor tends to be wider at site 1 than at sites further east could be a contributing factor, as the stream of whales gets "diluted" when traveling over the westernmost site (see Fig 9.17 in [50]). Nevertheless, the paucity of whales near and west of site 1 has also been noted during aerial surveys conducted by BOEM/NMFS: compared to other coastal survey blocks within our study area, the survey block encompassing site 1 yielded the lowest number of bowhead whales per km surveyed in 2007, 2008, and 2010 [51-53]. The differences in plateau heights among sites (Fig 8) are mainly a result of two factors: the intrinsic between-site differences in calling rates mentioned above, and the distribution of CSELs for each site. For example, site 4 has many more cell-time intervals above the upper threshold (29%) than site 2 (9%), which contributes to the disparate heights of the plateaus for these two sites. Nevertheless, irrespective of site differences, the main findings provided by the model are that calling rates roughly double in response to low levels of airgun pulses and then decrease when the received CSEL 10-min exceeds the upper threshold.

Possible Effect of Vessel Range on Behavioral Response

Received sound level is not the only factor that influences the behavior of the whale, and hence, in this case, the value of the threshold. An animal's motivational state has been shown to result in gradations of its response to stimuli. Feeding bowhead whales, for example, are less likely to be disturbed by anthropogenic activities than migrating whales [54,55]. Another factor that could influence behavior, and which is relevant in this study, is the distance between sound source and receiver [40].
There is no reason to assume that a bowhead whale's reaction to airgun pulses with received SELs of 120 dB re 1 μPa²·s will be the same if these pulses come from a large array 100 km from the whale versus a single airgun a few km away. In Fig 9, the purple scatter of points (each representing one or more calls in a cell-time interval) looks somewhat shifted to the right for sites 2 and 5, always distant from the seismic operations, compared to sites 3 and 4, generally much closer to the airguns. Also, the t-test analysis found that the increase in calling in response to low levels of airgun pulses was significant for the two years with nearby seismic operations (2007-2008, Table 5, section B) but not for the two years with distant or low-level operations (2009-2010, Table 5, section C). Could both of these effects be due to a vessel-range response by the animals, in which the threshold for calling cessation is slightly higher and the increase in calling at low received levels is less pronounced when the whales know the sound source is farther away? Such subtleties in the whales' responses to sound were beyond the scope of this modeling exercise. Nonetheless, despite being speculative, they are worth mentioning if only to guide future research on the subject.

Implications for Seismic Exploration

One of the most remarkable aspects of these results is the low levels of sound at which a change in calling behavior was detected. Two recent studies, Risch et al. [13] and Melcón et al. [14], also made that observation regarding their own results. Melcón et al. [14], working on the vocal response of foraging blue whales to an MFA sonar sound source, state "It is remarkable that relatively low intensity sound levels cause a perturbation such that the probability of D calls decreases compared to our reference (non-anthropogenic noise). This suggests that a single MFA sonar source could elicit a response from blue whales over a broad region of the Southern California Bight."
Similarly, the area around a seismic ship within which our results predict a behavioral response by bowhead whales is sizable. Based on the SSC data presented in Table 6, calling by bowhead whales is repressed within a radius of ~50-100 km from the seismic ship (~7850-31,410 km²), assuming the seismic source and propagation conditions are similar to this study. Within ~10-40 km of such a seismic source (~314-5026 km²), calling by bowhead whales would be almost nonexistent. Therefore, under the source and propagation conditions analyzed here, in relatively shallow water (<100 m), we must conclude that monitoring for the presence of migrating bowhead whales within ~40 km of seismic exploration activities cannot be done acoustically. In the Risch et al. [13] study on humpback whales, mentioned above, concurrent visual observations confirmed that male humpback whales were present in the area even when no songs were detected. The authors therefore suggested that male humpbacks may have ceased singing and remained in the area. Similarly, we have no evidence that decreases in calling rates were the result of whales moving away. In a few cases, shut-down of the airguns led to a resumption of calling that was fast enough that the whales must have remained in the area. Cessation of calling is likely one of the first measurable behavioral changes when a whale encounters a sound source such as an airgun array, and calling cessation very likely precedes deflection. It follows that deflection in response to seismic operations would be a challenge to study in bowhead whales using passive acoustics. In a review of the effects of seismic exploration on bowhead whales, Richardson and Malme [56] state that beyond a distance of ~7.5 km from an airgun array, bowheads rarely show deflection, even though avoidance may occasionally occur at distances of 20 km or more [57]. Changes in surfacing-respiration-dive (SRD) cycles have also been observed and described in a number of studies [11,58,12].
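The response areas quoted earlier in this section follow directly from the radii via the circle-area formula, πr². A quick, illustrative check (plain Python; the function name is ours) reproduces the figures given for each radius:

```python
import math

def disc_area_km2(radius_km):
    """Area of a disc of the given radius: pi * r^2, in km^2."""
    return math.pi * radius_km ** 2

# Radii quoted in the text; the text truncates slightly differently
# (~314, ~5026, ~7850, ~31,410 km^2)
for r in (10, 40, 50, 100):
    print(f"r = {r:>3} km -> {disc_area_km2(r):,.0f} km^2")
```

For example, π · 40² ≈ 5026.5 km², matching the ~5026 km² quoted for the ~40 km radius.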
Changes in SRD cycles are known to occur at greater distances than deflection, out to ~70 km in some studies [59]. In conclusion, this study has shown an unexpectedly complex change in bowhead whale calling behavior (first an increase, followed by a plateau, and then a decrease) in response to received levels of airgun sounds. Proximate effects on the animals of such a change in behavior could be minor, but are unknown. Nevertheless, the Bering-Chukchi-Beaufort Sea (BCB) population of bowhead whales has shown a healthy increase in numbers since the late 1970s [60] despite being exposed to airgun pulses since 1968, when MMS issued the first permits for seismic survey activities in the Beaufort Sea [49]. On a global scale, seismic exploration activities are likely to increase in the Arctic, including in areas that are part of the bowhead range but have not been prospected much in the past, such as the North Atlantic east and west of Greenland. The results presented in this paper will be important in understanding the effects of seismic operations on several scales. On a small scale, they will help in interpreting bowhead calling rates recorded by passive acoustic systems. On a larger scale, they will contribute to the current attempts to better understand lifetime sound exposure of these long-lived, highly migratory marine mammals.

Supporting Information

S1

Acknowledgments

Sarah Tennant; and Bob Norman and Alex Conrad (Greeneridge Sciences) for help with data processing. We thank Dr. Burney Le Boeuf for comments on two separate drafts of the manuscript, and Dr. Christopher Clark and an anonymous reviewer for constructive comments and guidance. Drs. Koen Broker and Louis Brzuzy (Shell) also provided helpful comments on the manuscript.

Author Contributions

Conceived and designed the experiments: CRG SBB TLM CSN AMT AMM. Performed the experiments: SBB DM AMT KHK CSN. Analyzed the data: CSN SBB AMT KHK TLM. Wrote the paper: SBB CSN AMT.
Antibacterial and Antifungal Activity of Ashwagandha (Withania somnifera L.): A review

Approaches for studying antimicrobial susceptibility and for discovering new antimicrobial agents from plants and other natural sources have been extensively utilized. Withania somnifera (L.) Dunal, commonly known as Ashwagandha, Indian ginseng or winter cherry, is a popular medicinal plant in Ayurvedic medicine. The principal active compounds include several withanolide-type compounds. Various plant parts, such as the roots and, less often, the leaves and fruits of Ashwagandha, have been used as plant-derived medicines. The plant possesses various pharmacological activities, including antimicrobial activity. Many bacterial and fungal species have been used as test microorganisms for the assessment of the antimicrobial activity of extracts and purified compounds of various plant parts of Ashwagandha. In this article, we compile and discuss the available information on the antimicrobial activity of W. somnifera. This provides a platform for researchers to select plants, plant parts, solvent systems, test microorganisms, methods of evaluation and other related factors affecting the analysis.

INTRODUCTION

At present, microbial infections have become an important clinical threat, with significant associated morbidity and mortality, mainly due to the increase in microbial resistance to existing antimicrobial agents 1,2. Hence, approaches for antimicrobial susceptibility analysis and for discovering new antimicrobial agents have been extensively utilized and continue to be developed 3.
After the revolution of the 'golden era', when nearly all major groups of antibiotics (cephalosporins, tetracyclines, aminoglycosides and macrolides) were discovered and the key difficulties of chemotherapy seemed resolved by the 1960s, history is repeating itself: these established compounds are at risk of losing their effectiveness due to the increase in microbial resistance. Currently, the effect is considerable, with treatment failures related to multidrug-resistant bacteria, and it has become a worldwide public health alarm 4. For this reason, the discovery of new antibiotics is an absolutely essential objective. Natural products, mostly from plants, are still one of the chief sources of new drug molecules today 5,6. Microbial and plant products make up the main part of the antimicrobial compounds identified and discovered to date. Plants and other natural resources can deliver an enormous range of complex and structurally diverse compounds. Recently, many researchers have focused on the investigation of plant and microbial extracts, essential oils, pure secondary metabolites and newly synthesized molecules as probable antimicrobial agents 7-10. The fact that a plant extract shows antimicrobial activity is of interest, but this primary part of the data should be reliable and should allow researchers to compare results, avoiding work in which the antimicrobial activity study serves only as a supplement to a phytochemical analysis 3.
Various methods are used for the assessment of the antimicrobial potential of plant extracts, essential oils and other antimicrobial agents. These include the agar disk-diffusion method, antimicrobial gradient method, agar well diffusion method, agar plug diffusion method, cross-streak method, poisoned food method, thin-layer chromatography (TLC) bioautography (agar diffusion, direct bioautography, agar overlay bioassay), dilution methods (broth dilution and agar dilution), the time-kill test (time-kill curve), the ATP bioluminescence assay and the flow cytofluorometric method 3. A variety of laboratory methods can thus be used to screen or evaluate the in vitro antimicrobial activity of a pure compound or a plant extract. The most widely used and basic methods are the disk/disc diffusion method and the broth/agar dilution methods. Other methods are used especially for testing antifungal activity, such as the poisoned food technique 11. The time-kill test and flow cytofluorometric methods are recommended for more in-depth study of the effect of antimicrobial agents, as they provide details on the nature of the inhibitory effect (bacteriostatic or bactericidal; concentration-dependent or time-dependent) and the cell damage imposed on the test microorganism 3.

ASHWAGANDHA: AN IMPORTANT MEDICINAL PLANT

Withania somnifera (L.) Dunal, commonly known as Ashwagandha, Indian ginseng or winter cherry, is a renowned medicinal plant in Ayurvedic medicine 12. The principal active compounds include several withanolide-type compounds 13,14. Due to its non-hazardous nature and great medicinal value, it is commonly used all over the world. Roots, and less often leaves and fruits, have been used as phytomedicines in the form of decoctions, infusions, ointments, powders, and syrups 13-15. These days, it is cultivated as a crop to meet the high demand for biomass and to ensure a sustainable supply for the requirements of the pharmaceutical industry 16.
Ashwagandha has been an important herb in the Ayurvedic and indigenous medicine systems for over 3000 years. It belongs to the family Solanaceae and possesses a chromosome number of 2n=48. In India, only two species of Withania are found, W. somnifera and W. coagulans 17. This plant has been used as a home remedy for numerous diseases in India and many parts of the world. It is found in the wild in many parts of India and in the Mediterranean region of North Africa. In India, it is grown in Rajasthan, Madhya Pradesh, Himachal Pradesh, Punjab and Uttar Pradesh 17. It is designated as an herbal tonic and health food in the Vedas and is considered 'Indian ginseng' in conventional Indian medicine. It is utilized as a liver tonic, anti-inflammatory, antioxidant, antimicrobial agent and a cure for asthma 18. Withaferin A has been receiving a good deal of attention because of its antibiotic and antitumor activity 19.

In the Unani system of medicine, the roots of W. somnifera, usually known as Asgand, are utilized for their medicinal properties 20. In Ayurveda, Ashwagandha is claimed to have effective aphrodisiac, rejuvenating and life-extending properties. It has overall animating and regenerative abilities and is used, among others, for the treatment of nervous exhaustion, insomnia, memory-related conditions, skin problems, tiredness, potency issues and coughing. It also increases learning capability and memory capacity. The traditional use of Ashwagandha was to increase energy, youthful vigor, strength, endurance and health; to increase vital fluids; and to nurture the tissue elements of the body: muscle, fat, lymph, blood, cell production and semen. It helps counteract chronic fatigue, dehydration, weakness, loose teeth, bone weakness, impotency, thirst, premature aging, emaciation, muscle tension, debility and convalescence. It helps invigorate the body by rejuvenating the reproductive organs, just as a tree is invigorated by feeding the roots 21.
CHEMICAL CONSTITUENTS

Figure 1: Chemical structures of (A) withaferin A and (B) withanolide A

The chemical constituents of W. somnifera have always been of great interest to the scientific community. The biologically active chemical constituents are alkaloids (ashwagandhine, anahygrine, cuscohygrine, tropine, etc.) and steroidal compounds, i.e., withaferin A, withasomniferin A, ergostane-type steroidal lactones, withanolides A-Y, withasomniferols A-C, withasomidienone, withanone, etc. 22. Withaferin A (Figure 1A) and withanolide A (Figure 1B) are the chief withanolidal active components isolated from the plant. These compounds are chemically similar but differ in their substituents 23.

Antifungal activity of Ashwagandha

In the past, antifungal activity has been evaluated for various extracts of different plant parts of Ashwagandha. Detailed information on the antifungal activity of Ashwagandha is summarized below. Various plant parts, viz., calyx, flower, fruits, leaves, root and stem, have been used for antifungal activity assessment. The most commonly used plant part was the root of Ashwagandha. Acetone, benzene, chloroform, ethanol, ethyl acetate, glacial acetic acid, hexane, isopropanol, methanol, petroleum ether, toluene and water (hot and cold) have been used as solvents in the extraction procedures for evaluating the antifungal activity of various parts of Ashwagandha. However, methanol was the most preferred solvent for the extraction of phytochemicals from parts of Ashwagandha. As for antibacterial activity assessment, the most commonly chosen evaluation method was the disc diffusion method; however, the agar well diffusion method and the poisoned food technique have also been used.
CONCLUSION

Ashwagandha (W. somnifera) possesses a tremendous range of medicinal properties, including antimicrobial activity. Many test microorganisms have been used for the assessment of the antimicrobial activity of extracts and purified compounds of various plant parts of Ashwagandha. Still, there remains much scope for research on the identification and isolation of antimicrobial agents from Ashwagandha.

Figure 2: Pharmacological activities of Ashwagandha (W. somnifera)

Some common standards must be established to evaluate the antimicrobial activity of plant extracts, essential oils and the compounds isolated or extracted from them. Of greatest relevance is characterizing and defining common factors, such as the plant parts used, the methods employed, the growth medium and the test microorganisms evaluated (Rios and Recio, 2005). Systematic standards should be used in the selection and collection of the plant parts/materials. Moreover, to avoid unnecessary exercise, the selection of plants and plant parts should be made from an ethnopharmacological perspective. The solvent systems and the extraction procedure may alter the final outcome of the study. The solvents and methods for extraction used in folk medicine should be preferred, as they are most appropriate. The active chemical constituents are more soluble in some solvents than in others, and solvent choice may therefore affect the results. Crude extracts and essential oils give variable results because they contain different phytochemicals, whose presence depends on their solubility in the solvents used. Sometimes, the presence of phenolic or carboxylic compounds or other impurities in the extract may affect the activity of the active phytochemicals. Experiments may be carried out with a collection of strains, but additional experiments with isolated pathogens would be of importance in the case of purified compounds or active extracts to evaluate their actual effects. According to the literature on
the evaluation of medicinal plants as antimicrobial agents, understanding medicinal flora and their real value is important; however, the use of standard techniques for such research is also crucial. As the reports suggest, W. somnifera possesses significant antimicrobial activity; therefore, research on the mechanisms of action, the interactions of microorganisms with plant extracts and the pharmacokinetic profile of the extracts should be given priority (Rios and Recio, 2005).
Tunable doping of graphene by using physisorbed self-assembled networks †

One current key challenge in graphene research is to tune its charge carrier concentration, i.e., p- and n-type doping of graphene. An attractive approach in this respect is offered by controlled doping via well-ordered self-assembled networks physisorbed on the graphene surface. We report on tunable n-type doping of graphene using self-assembled networks of alkyl amines that have varying chain lengths. The doping magnitude is modulated by controlling the density of the strongly n-type doping amine groups on the surface. As revealed by scanning tunneling and atomic force microscopy, this density is governed by the length of the alkyl chain, which acts as a spacer within the self-assembled network. The modulation of the doping magnitude depending on the chain length was demonstrated using Raman spectroscopy and electrical measurements on graphene field-effect devices. This supramolecular functionalization approach offers new possibilities for controlling the properties of graphene and other two-dimensional materials at the nanoscale.

Introduction

Since its first isolation in 2004, graphene, a single-atom-thick layer of sp²-hybridized carbon, has attracted enormous interest from the scientific community and a myriad of applications have been envisioned. 1,2 In particular, its outstanding physical and electrical properties make it a relevant material for integration in electronic applications, e.g., as channels or interconnects in field-effect transistors (FETs) and as transparent electrodes in optoelectronic devices. 3,4 However, one of the major issues that prevents its introduction into technological applications is that, although graphene has a high charge carrier mobility, its intrinsic conductivity is low due to the negligible charge carrier density near the Dirac point.
In addition, many applications require precise engineering of the work function to lower the energy barrier at the interface with semiconducting materials. 5 Both these issues can be overcome through p-type or n-type doping. 6 Currently, two approaches for doping have been presented based on either substitutional or adsorbate-induced protocols. Replacing carbon atoms within the graphene lattice by nitrogen or boron, indeed causes a shift in the work function, but it also disrupts the graphene lattice in an uncontrollable manner, thereby decreasing the charge carrier mobility. 7 Doping using physisorbed adsorbates through charge transfer with graphene, on the other hand, provides an elegant, non-destructive method that preserves the carrier mobility. 8 Gas adsorption, alkali atoms and organic molecules have been shown to cause doping of the underlying graphene, [9][10][11][12][13] however, the distribution of adsorbates is often inhomogeneous resulting in uncontrolled doping. One of the attractive routes towards controlled functionalization of graphene is via the use of two-dimensional (2D) supramolecular self-assembly of molecular building blocks. [14][15][16] Although self-assembly on graphene has been intensively investigated, the focus has mainly been on fundamental aspects. 
[17][18][19][20][21][22] Depending on the chemical nature of the building block and the interplay of external parameters like solute concentration, solvent choice or temperature, precise control over the molecular organization on the surface can be achieved. 23,24 Accordingly, by incorporating dopant moieties within the molecular building block, the underlying graphene can be doped in a uniform and well-controlled manner. P-type doping using self-assembled monolayers of alkylphosphonic acid derivatives and n-type doping using alkylamine derivatives have been demonstrated. 25,26 However, tunable control over the doping effect by rational design of the self-assembling species has not yet been explored. In this contribution, we present a robust and reproducible method for tuning the level of n-type doping by controlling the density of amine groups present on the graphene surface. This method is based on the use of aliphatic amines which differ only in the length of the alkyl chain. Within the 2D self-assembled networks, the length of the alkyl chains determines the density of the amine groups at the interface by acting as a spacer, as revealed by scanning tunneling microscopy (STM) and atomic force microscopy (AFM).

† Electronic supplementary information (ESI) available: Synthesis of NCA; AFM images of ODA and NCA films on HOPG before and after annealing; AFM images before and after cleaning graphene with toluene; methodology for correcting Raman data; Raman data before and after immersion in ethanol and ethanol/toluene; AFM images of device channels before and after mechanical cleaning with AFM; AFM images of device channel after functionalization with ODA and NCA; transfer curves of GFET devices functionalized with ODA and NCA before and after annealing; a table containing electrical parameters of the devices. See DOI: 10.1039/c6nr07912a
Tunable n-type doping via modulation of the alkyl chain length was proven using Raman spectroscopy and electrical measurements performed on field-effect transistor (FET) devices. Graphene grown by chemical vapor deposition (CVD) and transferred to SiO₂ was used, as it is the most promising graphene type from a technological viewpoint due to its quality and large-scale, low-cost production.

Self-assembled film preparation

HOPG and CVD grown graphene (devices) were functionalized with ODA or NCA via dip coating or drop casting. For dip coating, the samples were immersed for 30 min in the dopant solution and subsequently withdrawn using a computer-controlled sample stage which allows for precise control of the speed of withdrawal, which was set at 1 mm min⁻¹. For functionalization with ODA via dip coating, a 10⁻³ M solution in ethanol was used, and for NCA, a 10⁻⁴ M solution in ethanol/toluene (50/50 v/v%) was used. Functionalization with ODA and NCA via drop casting was performed using 10⁻⁴ M solutions (30 µl) in ethanol and ethanol/toluene (50/50 v/v%), respectively. All procedures were carried out under ambient conditions at room temperature.

Scanning probe microscopy

STM experiments were performed using a Molecular Imaging STM (Keysight) system operating in constant-current mode. STM tips were prepared by mechanical cutting of a Pt/Ir wire (80%/20%, diameter 0.25 mm). The bias voltage refers to the substrate. AFM measurements were performed on a PicoSPM (Keysight) machine under ambient conditions with silicon cantilevers (AC160TS or AC240TS; Olympus). Gwyddion and Scanning Probe Image Processor (SPIP) software (Image Metrology ApS) were used for image analysis.

Raman characterization

Raman measurements were performed with an OmegaScope 1000 (AIST-NT). Laser light from a He-Ne laser (632.8 nm) was reflected by a dichroic mirror (Chroma, Z633RDC) and then focused onto the sample surface using an objective (MITUTOYO, BD Plan Apo 100×, N.A. 0.7).
The optical density at the sample surface was about 500 kW cm⁻². Raman scattering was collected with the same objective and directed to a Raman spectrograph (Horiba JY, iHR-320) equipped with a cooled charge-coupled device (CCD) camera operating at −100 °C (Andor, DU920P) through the dichroic mirror, a pinhole and a long-pass filter (Chroma, HQ645LP). The accumulation time for all spectra was 3 s. Measurements were carried out under ambient conditions at room temperature.

Graphene FET fabrication

For device fabrication, CVD grown graphene already transferred to Si++/SiO₂ (90 nm) substrates (Graphenea) was used. After cleaning the samples in acetone and IPA, the graphene was patterned using photolithography. For this purpose, first poly(methyl methacrylate) (3%) in chlorobenzene was spun at 6000 rpm as a protective layer for the graphene, and then an IX845 resist was spun at 4000 rpm for 30 s and baked for 1 min at 120 °C for photolithography patterning. The samples were then developed in OPD 5262 and stripped with O₂ plasma. After the patterning of the graphene, the residual resist was removed by immersion in acetone at 50 °C followed by an isopropyl alcohol (IPA) dip and N₂ blow dry. Then the source and drain were deposited by lift-off using photolithography (IX845 resist). For the metal contact, a 70 nm Pd layer was used. After lift-off, the devices were once more cleaned with acetone, an IPA dip and N₂ blow dry. For the characterization of the devices, a Keithley 4200 parameter analyzer was used with a Cascade probe station in a dark environment and a constant flow of 1 scfm of N₂ over the sample.

Results and discussion

Two alkyl amines, namely octadecylamine (ODA) and nonacosylamine (NCA) (Scheme 1), were chosen for demonstrating tunable doping of graphene using self-assembled monolayers.
Aliphatic compounds form stable self-assembled patterns with their chains orienting parallel to graphitic surfaces due to commensurability between the alkyl chains and the substrate lattice. 27 By varying the length of the alkyl chain (spacer), the periodicity of the self-assembled pattern can be tuned, thereby also tuning the density of the amine groups (dopant moieties), which is inversely proportional to the alkyl chain length. Based on the lengths of the molecules under investigation (18 carbon atoms for ODA and 29 for NCA), the amine group density on the graphene surface, and thus also the doping effect, is expected to be 1.6 times higher for ODA compared to NCA. Before studying the self-assembly of the two molecules on graphene, the self-assembled networks were first characterized using both STM and AFM on highly oriented pyrolytic graphite (HOPG), which served as a model surface. HOPG offers large, atomically flat terraces and thus is a more straightforward platform to study self-assembly. Achieving molecular resolution on CVD grown graphene transferred to SiO₂, on the other hand, is typically more challenging due to its appreciably higher surface roughness caused by the underlying SiO₂ substrate and also due to contamination from the transfer process. 28,29 The choice of HOPG as a model surface for subsequent measurements on graphene is motivated by previous studies, which showed that in general there is almost no difference between self-assembled structures formed on graphene and HOPG. 30,31 To study how the molecules arrange on a graphitic surface, STM was carried out at the 1-phenyloctane/HOPG interface. STM is capable of imaging self-assembled networks with submolecular resolution and is therefore a powerful tool to study the precise arrangement of molecules on surfaces.
STM measurements were performed by depositing a saturated solution of NCA or ODA in 1-phenyloctane on freshly cleaved HOPG and subsequent imaging under ambient conditions at the solution-solid interface. The high-resolution STM image shown in Fig. 1a reveals that ODA is ordered in a lamellar structure characteristic of alkanes. Within the lamella, the molecules are oriented in a head-to-head configuration to facilitate hydrogen bonding between the amine groups, with the alkyl chains perpendicular to the lamella axis. NCA forms self-assembled networks similar to those of ODA; however, the NCA molecules are randomly oriented within the lamella (head-to-head or tail-to-head, see Fig. 1b). This difference can be explained by the stronger intermolecular and molecule-substrate interactions of the longer alkyl chain, resulting in a kinetically trapped, less ordered structure. Based on the periodicity of the lamella, 4.9 ± 0.1 nm for ODA and 4.1 ± 0.1 nm for NCA, and the intermolecular distance within the lamella, 0.47 ± 0.05 nm and 0.43 ± 0.08 nm, respectively, the density of the amine groups on the surface can be estimated to be 0.89 ± 0.03 nm⁻² and 0.56 ± 0.01 nm⁻². Thus, based on the STM measurements, the amine group density is 1.6 ± 0.1 times higher for ODA compared to NCA, which is in agreement with the ratio of the alkyl chain lengths. For doping using self-assembled networks of ODA and NCA, dry, uniform films need to be prepared on graphene. In this regard, a deposition protocol for thin, fully surface-covering films was first developed on HOPG. To deposit these films, dip coating was used as it results in highly uniform films; to characterize molecular ordering, surface coverage and domain sizes, AFM was employed. Fig. 2a and b show AFM images after dip coating HOPG in ODA and NCA in ethanol and ethanol/toluene (50/50 v/v%) solutions, respectively. Different solvents were used for these compounds due to solubility limitations.
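The amine-group densities estimated above from the STM lamella geometry can be checked with a short sketch. Note this is our reading of the reported geometry, not a calculation given in the paper: we assume two molecules per lamella period for the head-to-head ODA pairs and effectively one per (shorter) period for the mixed-orientation NCA lamella, which reproduces both reported densities and their ratio:

```python
def amine_density_per_nm2(period_nm, spacing_nm, molecules_per_period):
    """Amine head-group density: molecules per (lamella period x spacing) area."""
    return molecules_per_period / (period_nm * spacing_nm)

# ODA: head-to-head pairs -> two molecules per 4.9 nm lamella period
oda = amine_density_per_nm2(4.9, 0.47, 2)  # ~0.87 nm^-2 (reported: 0.89 +/- 0.03)
# NCA: mixed orientation -> effectively one molecule per 4.1 nm period
nca = amine_density_per_nm2(4.1, 0.43, 1)  # ~0.57 nm^-2 (reported: 0.56 +/- 0.01)
print(oda, nca, oda / nca)                 # ratio ~1.5, vs 29/18 ~ 1.6 expected
```

The resulting ratio of roughly 1.5-1.6 is consistent with both the quoted 1.6 ± 0.1 and the chain-length ratio 29/18.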
In addition, the samples were annealed for 1 minute at 100 °C on a hot plate to improve the film quality and increase domain sizes (ESI Fig. S1 and S2 †). The deposition protocol results in self-assembled films with full surface coverage. AFM images reveal that both molecules form lamellar structures similar to those found at the 1-phenyloctane/HOPG interface. The periodicity of the ODA lamella is 5.0 ± 0.3 nm, which is in good agreement with the STM measurements on HOPG. For NCA, however, the periodicity is 7.8 ± 0.2 nm, which indicates that, in contrast to the previously described STM measurements, the molecules are predominantly in the thermodynamically favourable head-to-head orientation. This discrepancy between the two measurements can be explained by the different conditions under which they were performed: while STM was performed at the 1-phenyloctane/HOPG interface, AFM was carried out on dry films created by dip coating in toluene/ethanol solution followed by annealing. Nonetheless, this difference in the organization of individual molecules within the lamella does not change the density of the amine groups on the graphene surface.

Scheme 1: Schematic illustration of the concept of tunable doping by controlling the density of dopant moieties on the surface using self-assembled networks.

To functionalize CVD grown graphene on SiO₂, the same deposition conditions were used as on HOPG. Fig. 2c and d show AFM images of graphene after deposition of ODA and NCA and subsequent annealing. Despite its high surface roughness, the molecules form similar self-assembled structures on CVD graphene as on HOPG. The alkyl chains are aligned along the graphene symmetry axes and the periodicities of the lamella are 5.3 ± 0.3 nm for ODA and 8.0 ± 0.2 nm for NCA. However, the domain sizes are in general smaller than those on HOPG. These smaller domains are possibly caused by the corrugation of graphene or polymeric contamination originating from the transfer process.
The bright features in the images likely correspond to this contamination, as they are not present on functionalized HOPG and were present on the graphene surface prior to deposition (ESI Fig. S3†). In order to compare the doping effect of ODA and NCA, Raman spectroscopy was performed on CVD grown graphene before and after functionalization. Raman spectroscopy is a well-known technique to study various properties of graphene, such as defect density, number of layers and doping. Upon n- or p-type doping, the G peak position, Pos(G), shifts from its charge-neutral position (1581.6 cm−1) to higher wavenumbers. In addition, both types of doping result in a decrease of the ratio of the 2D and G peak intensities, I(2D)/I(G), and of the full width at half maximum of the G-peak, FWHM(G). 32,33 The behavior of the 2D peak position, Pos(2D), upon doping is more complex. It increases for p-type doping, while for n-type doping it first increases slightly, followed by a decrease at higher doping levels. 32,33 Fig. 3a shows typical Raman spectra before and after deposition of ODA and NCA. Pos(G) is located at 1599 cm−1 before deposition, indicating that the graphene sample is heavily p-type doped due to the strong interaction with the SiO2 substrate and doping by oxygen and water from the ambient. 34 Upon deposition of ODA or NCA, using the same dip coating protocol as used for the AFM characterization described before, Pos(G) shifts to lower wavenumbers due to electron donation from the molecules to graphene, leading to a reduction in p-type doping. The n-type doping ability of amine groups can be attributed to two different mechanisms. First, it has been demonstrated using first-principles calculations that orbital mixing of graphene states with molecular states of ammonia results in a small amount of charge transfer. 35 However, the large doping effect found experimentally for amine groups cannot be fully attributed to this.
A second mechanism that can explain this observation is that polar molecules, like water or ammonia, result in charge transfer due to the formation of a dipole layer. 36,37 Based on the larger red shift of Pos(G) and a larger increase of I(2D)/I(G) after deposition of ODA compared to NCA, it is clear that the former has a higher doping effect than the latter. For a fully quantitative analysis of the doping effect however, multiple measurements at different positions on each sample need to be conducted. Therefore, a large number of Raman spectra, 3 Raman maps each containing 100 spectra, were collected on each sample before and after functionalization. The data is corrected for defect areas and bilayer spots by removing data points with a high D peak and an abnormal I(2D)/I(G) ratio (see ESI Fig. S4 and S5 † for details). Moreover, the effects of strain induced by the corrugation of the SiO 2 substrate must also be considered. As both Pos(G) and Pos(2D) are sensitive to strain as well as doping, variations in peak positions caused by spatial variation in strain are removed using a method based on work by Ryu et al. 38 In this method, the contributions of doping and strain are separated by correlating the spatial variation in the G-peak position with that of the 2D-peak position. In a plot of Pos(2D) versus Pos(G), points with varying strain but an identical amount of doping fall on a single line with a slope (ΔPos(2D)/ΔPos(G)) of 2.2 ± 0.2, whereas upon p-type doping without any variation in strain, the points shift along a line with a ΔPos(2D)/ΔPos(G) of 0.7 ± 0.05. 38,39 Based on these facts, a line can be drawn in this Pos(G)-Pos(2D) space that represents modulation in hole-doping of strain-free graphene. The point on this line corresponding to charge neutrality can be estimated by the G and 2D peak positions of suspended graphene (1581.6, 2630 cm −1 ), which is essentially strain-free and undoped. 
40 Accordingly, for every point in the Pos(G)-Pos(2D) space corresponding to a certain amount of p-type doping and strain, the peak positions that relate to the same amount of doping in the absence of strain are given by projecting this point onto the line representing p-type doping of strain-free graphene. Fig. 3c and d map the 2D peak positions versus the G peak positions for NCA and ODA before deposition, after deposition via dip coating and after subsequent annealing. Both before and after deposition, the data points are distributed along a line with a slope of ≈2.2 (see lines fitted through data points), indicating that there is a large variation in strain across the samples. By using the aforementioned method, the average peak positions corresponding to the amount of doping without strain effects can be estimated, with the average values detailed in Table 1. After deposition of the compounds, the mean G peak position, corrected for strain effects, red shifts by 15.7 ± 2.1 cm−1 for ODA and 9.9 ± 1.7 cm−1 for NCA (Fig. 3b). Since the G-peak position is approximately linearly dependent on the charge carrier concentration in this spectral range, the ratio of the doping effect can be readily determined from the ratio of the shifts observed in the G peak positions. This ratio is 1.59 based on the observed G peak shifts, and thus is in excellent agreement with what is expected on the basis of the difference in the amine group density on the surface determined using STM. The comparative Raman analysis of ODA and NCA functionalization presented above thus clearly demonstrates that the extent of doping is controlled by the length of the alkyl spacer in the self-assembled network. Although the functionalized graphene samples showed improvement in film quality after annealing, Raman experiments revealed a concomitant decrease in the n-type doping effect for both compounds.
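The strain/doping separation just described amounts to a simple geometric projection in the Pos(G)-Pos(2D) plane. A minimal sketch (the function name and structure are ours, not from the paper), using the slopes and charge-neutral point quoted above:

```python
# Sketch of the Ryu-type strain/doping decomposition: a measured
# (Pos(G), Pos(2D)) point is projected along the strain direction
# (slope 2.2) onto the strain-free p-doping line (slope 0.7) passing
# through the charge-neutral point of suspended graphene.

CNP = (1581.6, 2630.0)   # charge-neutral (G, 2D) peak positions, cm^-1
S_STRAIN = 2.2           # dPos(2D)/dPos(G) for pure strain
S_DOPING = 0.7           # dPos(2D)/dPos(G) for pure p-type doping

def strain_free_g_position(pos_g, pos_2d):
    """Intersect the strain line through the measured point with the
    strain-free doping line; return the strain-corrected Pos(G)."""
    # Solve: CNP_2d + S_DOPING*(g - CNP_g) = pos_2d + S_STRAIN*(g - pos_g)
    return (pos_2d - CNP[1] - S_STRAIN * pos_g + S_DOPING * CNP[0]) / (S_DOPING - S_STRAIN)
```

A point already on the doping line is returned unchanged, while a purely strained point collapses back to the charge-neutral Pos(G) of 1581.6 cm−1; the 1.59 doping ratio then follows directly from the corrected shifts (15.7/9.9).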
This reduction may be caused by a decrease in the molecular film thickness, as it has been reported that long chain alkyl amines can be vaporized at temperatures as low as 100°C. 41 The ratio between the extents of doping, however, remained around 1.6. To exclude the possibility that the difference in doping is caused by a discrepancy in the amount of the material present on the surface, an additional experiment was performed where a large amount of the material, 3 × 10−9 mol cm−2, was drop cast after annealing. From the experimentally obtained unit cell parameters (STM), the surface density required for a full monolayer coverage of ODA can be calculated to be 1.4 × 10−10 mol cm−2, and therefore the amount of drop cast material should give more than 20 monolayers for both ODA and NCA, assuming that the material is homogeneously distributed on the surface. It has previously been reported that mainly the first few layers of molecular dopants significantly contribute to the charge transfer doping effect, 11 and hence 20 monolayers are enough to ensure that the maximum extent of doping is achieved. Upon drop casting NCA, Pos(G) shifts to lower wavenumbers, indicating that the doping amount increased. The final peak position is very close to the position of charge-neutral graphene; hence, the initial amount of p-type doping before functionalization is almost completely counterbalanced by NCA. In contrast to NCA, for ODA there is a shift to higher wavenumbers after addition of more material via drop casting. As a reduction in n-type doping is highly improbable after addition of more material, the upshift after addition of ODA stems from the fact that upon further n-type doping the Fermi level crossed the Dirac point and moved into the conduction band. This explanation is further supported by the slight decrease of Pos(2D) after drop casting ODA.
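The coverage estimate above is straightforward arithmetic; a one-line check under the stated surface densities:

```python
# Back-of-the-envelope check of the drop-cast coverage estimate.
drop_cast = 3e-9        # mol cm^-2 deposited by drop casting
monolayer = 1.4e-10     # mol cm^-2 for one full ODA monolayer (from STM unit cell)

n_layers = drop_cast / monolayer
print(round(n_layers, 1))  # 21.4, i.e. more than 20 monolayers
```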
The total shifts of the G-peak relative to the position before deposition are now 25.2 ± 1.8 cm −1 for ODA and 16.9 ± 1.3 cm −1 for NCA, which results in a ratio of the doping effect of 1.49. Thus, despite the large increase in doping, the ratio of the extent of charge transfer doping by the two molecules did not change significantly, which further demonstrates that the doping magnitude is controlled by the length of the alkyl chains. In addition, the further increase in doping after addition of a large amount of material suggests that a significant part of the observed doping can be attributed to an alignment of molecular dipoles, as charge transfer due to orbital mixing typically takes place between molecules directly in contact with graphene. Note that for the case where the Fermi level is in the conduction band, the assumption is made that the line corresponding to strain-free n-type doping can be approximated to be linear for small doping levels, with a ΔPos(2D)/ΔPos(G) of 0.3. 38,42 Systematic control experiments revealed that neither toluene nor ethanol causes n-type doping of graphene (ESI Fig. S6 †). To confirm that deposition of the molecules does not induce defects in graphene, the intensity ratio of the D and G peaks (I(D)/I(G)) is analyzed. For defect free graphene the D peak is not present, however, it appears in the presence of crystallographic defects, like sp 3 -hybridized carbon, and its intensity grows with an increasing defect density. After deposition of ODA via dip coating, I(D)/I(G) increases and remains the same after subsequent annealing, see Table 1. However, drop casting additional material results in a decrease of this intensity ratio. These observations can be explained by the fact that, similar to I(2D)/I(G), I(D)/I(G) is doping dependent. 
32 The decrease of the intensity ratio after drop casting ODA suggests that the increase after dip coating is not the result of an increase of the defect density, but merely the result of modulation of the intensity by shifts in the Fermi level. This explanation is supported by the behavior of I(D)/I(G) after addition of NCA; there is only an increase after drop casting, where the Fermi level is close to the Dirac point. In line with the physisorbed nature of the doping method, it can be concluded that deposition of the molecules does not result in a significant increase of the defect density. In parallel to the Raman studies, doping with the self-assembled networks was also assessed through electrical measurements of four-probe, back-gated graphene field effect transistors (4P-FET). Two sets of 5 devices were fabricated using CVD grown graphene transferred to 90 nm SiO2-covered highly doped Si substrates. For the device fabrication, graphene was first patterned by photolithography and etched with oxygen plasma. The source, drain and voltage probe electrodes were thereafter patterned using photolithography, followed by metal contact deposition (Pd 50 nm) and subsequent liftoff. A sketch of the device can be seen in Fig. 4a, as well as an optical micrograph in the inset of Fig. 4b. After fabrication, AFM was used to assess the amount of contamination on the devices, where the root mean square (RMS) surface roughness was taken as a measure of the degree of contamination. The high surface roughness of the graphene channel after fabrication in comparison to the pristine graphene (0.82 nm versus 0.20 nm) indicates that residues were introduced during the development of the devices (ESI Fig. S7†). In order to reduce this contamination, one device of each set was cleaned by mechanical scratching using AFM. In this method, the residues are removed by sweeping them away with the AFM tip, while the AFM is operated in contact mode.
43 After mechanical cleaning, the RMS roughness was reduced to 0.19 nm, indicating that most of the contamination was removed (ESI Fig. S7†).

Table 1 Average values of Pos(G), Pos(2D), FWHM(G), I(2D)/I(G) and I(D)/I(G) before and after deposition via dip coating, after subsequent annealing and after deposition of a thick layer using drop casting of ODA and NCA. The peak positions are corrected for strain, i.e., they correspond to the intersection of the lines fitted through the data points and the doping line in Fig. 3c.

For electrical characterization, a fixed potential difference was applied between the source and drain electrodes (V DS), inducing a current to circulate from the source to the drain (I D). I D is then modulated by the potential applied to the highly doped Si back gate (V GS) while measuring the potential drop in the channel through the potential probes P1 and P2 (V CH). This method allows extraction of the exact potential drop in the channel without it being affected by the parasitic resistance of the metal/contact interface. The sheet resistance of the device is then extracted as R SH = (W CH /L CH)(V CH /I D), where W CH is the channel width (5 µm, 20 µm or 50 µm) and L CH is the distance between P1 and P2 (25 µm). Of the two device sets, the first (1) was used to characterize doping by ODA and the second (2) to characterize doping by NCA. In Fig. 4b, R SH as a function of V GS (transfer curve) of two devices from each set is plotted, before (A) and after (B) AFM cleaning. The highest peak of R SH (K point) corresponds to the point where the Fermi level crosses from the conduction band (electron transport) to the valence band (hole transport) or vice versa. Ideally, for a neutral device with an equal amount of electrons and holes, this R SH peak occurs for V GS = 0. If the K point is shifted towards a more positive V GS in the transfer curve, graphene is p-type doped, as more electrons need to be injected by the gate to reach the K point.
Conversely, a shift towards a more negative V GS signifies n-type doping. From the position of the K point (V K) in the transfer curve, the ungated carrier concentration (n 2D) can be estimated as n 2D = (V K C OX)/q, where C OX is the oxide capacitance (3.80 × 10−8 F cm−2) and q is the electron charge (1.60 × 10−19 C). 1 The carrier mobility is calculated from the sheet conductance (G SH = 1/R SH) as µ 4P-FET = (dG SH /dV GS)(1/C OX). The extracted parameters from all the devices can be seen in the ESI (Table S1†). In Fig. 4b, a clear shift of V K from 50 V and 43 V in the as-built devices to 17 V and 18 V in the AFM-cleaned devices can be seen. The high V K of the as-built devices indicates that they are heavily p-type doped after fabrication, and the shift to lower values after cleaning indicates that a considerable part of this p-type doping can be attributed to residues from device processing. After these initial measurements, the two sets were functionalized by dip coating following the same procedure as described previously. The R SH plotted versus V GS for the cleaned devices before and after functionalization can be seen in Fig. 4c. In device 1B (AFM cleaned) a clear shift can be seen from 17 V to −30 V after functionalization with ODA, and from 18 V to −13 V in device 2B (also AFM cleaned) after functionalization with NCA. This shift corresponds to an injection of 11.2 × 1012 cm−2 electrons in device 1B from the ODA self-assembled networks and of 7.4 × 1012 cm−2 for device 2B from NCA. As expected, the amount of electrons injected in the device functionalized with NCA is smaller than that for the ODA device. Interestingly, the ratio of the ODA-injected electrons to that of the NCA-injected electrons is 1.52, which is in good agreement with the density ratio of amine groups observed previously with STM (1.6 ± 0.1) and with the ratio obtained from the Raman analysis (1.59).
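The carrier densities quoted above follow directly from the parallel-plate relation n 2D = ΔV K·C OX/q. A short sketch reproducing the numbers for devices 1B and 2B (function names are illustrative, not from the paper):

```python
Q_E = 1.60e-19      # electron charge, C
C_OX = 3.80e-8      # 90 nm SiO2 oxide capacitance, F cm^-2

def sheet_resistance(v_ch, i_d, w_ch, l_ch):
    """Four-probe sheet resistance R_SH = (W_CH / L_CH) * (V_CH / I_D)."""
    return (w_ch / l_ch) * (v_ch / i_d)

def carrier_density(delta_v_k):
    """Carrier density (cm^-2) from a shift of the K-point gate voltage."""
    return delta_v_k * C_OX / Q_E

# K-point shifts after functionalization:
n_oda = carrier_density(17 - (-30))  # device 1B, ODA: ~11.2e12 cm^-2
n_nca = carrier_density(18 - (-13))  # device 2B, NCA: ~7.4e12 cm^-2
ratio = n_oda / n_nca                # ~1.52, matching the value quoted above
```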
This agreement between different types of experiments supports the hypothesis that the doping magnitude is controlled by the length of the alkylamine molecules, and thus in turn by the density of dopant functional groups in contact with the graphene surface. For devices 1 and 2, the electron (hole) mobility after cleaning with the AFM was 333 (1325) and 496 (1525) cm2 V−1 s−1, respectively. After doping via dip coating, the electron (hole) mobility was 1530 (1455) and 1106 (2322) cm2 V−1 s−1 for the same devices. This increase in mobility after doping is explained by the screening of charged impurities by the increased number of carriers, thereby reducing Coulomb scattering, and further demonstrates the non-destructive nature of the doping method. Furthermore, AFM measurements carried out directly on the devices revealed that the self-assembled networks formed on the devices are similar to those observed on HOPG and CVD graphene (ESI Fig. S9†). The self-assembled networks were then removed by immersion of both samples in toluene for 8 h at room temperature followed by acetone for 5 min at 50°C. Thereafter, the samples were dipped in isopropanol, annealed at 200°C with a continuous flow of Ar for 90 min at a chamber pressure of 3 × 10−3 mbar, and finally cleaned with AFM. It can be seen from Fig. 4 that the doping effect is reversed completely. We note that the devices do not revert to the same doping levels as those measured after cleaning. The higher extent of p-type doping in comparison with the devices after cleaning can be explained by the re-deposition of processing residues from the sample on the devices during immersion in the solvents, or by an increased interaction with the substrate due to the thermal annealing step. Subsequently, the samples were functionalized again using dip coating and annealed on a hot plate for 1 min at 100°C.
In accordance with the Raman measurements, there is a reduction in the doping magnitude after annealing the devices functionalized with ODA and NCA, see ESI Fig. S8.† Next, the devices were doped by drop casting to increase the thickness of the molecular dopant films, the results of which are represented in Fig. 4e. After drop casting, there is a larger shift in V K compared to the functionalization using dip coating (−40 V and −29 V for ODA and NCA). As in the Raman experiments, this lower V K corresponds to a higher amount of doping compared to that after dip coating. However, the doping ratio between the two compounds after this deposition stage was found to be 1.38 (Fig. 4f), which further suggests that even for thicker films the amount of doping is mainly controlled by the density of the amine groups within the self-assembled networks. Additionally, the effect of alkylamine functionalization on the as-built devices (without AFM cleaning) was also studied. The n 2D of the as-built devices was extracted and the results are shown in Fig. 5. Clearly, the amount of electrons injected increases as the length of the molecule decreases. Furthermore, the ratio of the injected n 2D for the ODA- and NCA-functionalized devices yielded an average value of 1.71, close to the expected 1.6 for an ideal device (Fig. 4f), further indicating that the functionalization process is uniform across all different samples. The higher value of the ratio can be linked to the presence of residues before and after functionalization.

Conclusions

A bottom-up strategy for tuning the Fermi level of graphene is presented, based on controlling the density of doping moieties in self-assembled networks via rational design of molecular building blocks. Using a combination of scanning probe microscopy, Raman spectroscopy and electrical measurements on graphene field-effect devices, precise molecular ordering was determined and related to the amount of doping.
The relative amount of doping was in excellent agreement with the difference in dopant density on the graphene surface. This non-destructive strategy for precisely controlling the properties of graphene holds promise not only for doping graphene, but also for opening a tunable bandgap in bilayer graphene or for the functionalization of other 2D materials. In addition, using the rich world of supramolecular chemistry on surfaces, more intricate nanostructures can be envisioned. For example, bicomponent systems consisting of p-type and n-type building blocks offer the possibility to create ordered, spatially varying potentials in graphene. Furthermore, nanoporous networks can be used for the selective adsorption of guest molecules on graphene for sensing. Considering the technological relevance of graphene and other 2D materials, this research direction will pave the way for the transition of self-assembled networks from fundamental science towards technological application.
Contact force exerted on the maxillary incisors by direct laryngoscopy with McGrath video laryngoscope in predicted difficult intubation

Patients with difficult airways who are going to undergo surgery under general anesthesia require special consideration from an anesthesiologist. The most significant risks of morbidity and mortality are often due to difficult airway management. One of the most common complications, and one that often leads to lawsuits in the field of anesthesia, is dental trauma occurring during the intubation process due to contact between the laryngoscope blade and the teeth. This descriptive study presents the force exerted on the maxillary incisors during laryngoscopy using a McGrath video laryngoscope in patients with a potentially difficult intubation (LEMON criteria ≥ 3). The contact force exerted on the maxillary incisors is measured using a special instrument. The contact force exerted on the maxillary incisors in patients with a potentially problematic airway was found to vary.

Introduction

Patients with difficult airways are quite common, and some of them may undergo surgery under general anesthesia requiring intubation for endotracheal tube placement. In addition, endotracheal intubation is also carried out in several conditions other than surgery, such as in patients with airway problems due to accidents in the facial area, decreased consciousness, critical illness, etc. Complications due to intubation are more likely in patients with a difficult airway. Dental trauma is one of the most common complications of intubation. 1,2 The incidence ranges from 0.06% to 12%. Of all these cases, the maxillary incisors were the most frequently traumatized by laryngoscopy.
[2][3][4][5][6][7] Generally, trauma to the upper incisors occurs due to pressure from the laryngoscope blade, because it is used as a fulcrum in the intubation process. 4 Regarding the mechanism of injury, it is justified to assume that the amount of force exerted onto the teeth correlates with the risk of trauma. 4,6,9 There are many scoring systems for predicting difficult airways, especially difficult intubation. The LEMON criterion is one of them and is a good method able to provide risk stratification for difficult intubation. 10

Results

Of the 20 subjects included, 10 were male and 10 female, with the youngest aged 18 years and the oldest 60 years. Of the 20 subjects, 7 were categorized as overweight (BMI > 25 kg/m2), with the highest BMI being 46.8 kg/m2. Based on the LEMON criteria, there were 5 cases with facial trauma, 10 cases with large incisors, 4 cases with a beard, 9 cases with a large tongue, 12 cases with an interincisor distance of less than three fingerbreadths, 11 cases with a mentohyoid distance of less than three fingerbreadths, 14 cases with a thyrohyoid distance of less than two fingerbreadths, 12 patients with Mallampati ≥ 3, 3 cases with airway obstruction, and 9 patients with limited neck mobility. Other risk factors include poor dentition and pre-existing craniofacial abnormalities. 17 The complication (gum bleeding) due to laryngoscopy in this study was found at the maxillary incisors. Because of their anterior placement, the central incisors are subjected to the stresses of oral instrumentation, including direct laryngoscopy. They are anchored to the bone, usually by a single root, and have a small cross-sectional area, rendering them susceptible to damage by external forces. 7 The most common cause of perioperative dental injury is a combination of pre-existing dental pathology and an external force.
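The LEMON tally used for inclusion (score ≥ 3) can be sketched as a simple count of positive findings. This assumes the common 10-point scheme in which each criterion scores one point; the study itself does not spell out its exact weighting, so the names and structure below are illustrative:

```python
# Hedged sketch of a LEMON airway score tally (assumed 10-point scheme:
# each positive finding scores one point; the study flags LEMON >= 3
# as a predicted difficult intubation).

LOOK = ["facial_trauma", "large_incisors", "beard", "large_tongue"]
RULE_332 = ["interincisor_lt_3", "mentohyoid_lt_3", "thyrohyoid_lt_2"]

def lemon_score(findings):
    """findings: set of positive criteria observed in one patient."""
    score = sum(c in findings for c in LOOK)       # L: look externally
    score += sum(c in findings for c in RULE_332)  # E: evaluate 3-3-2 rule
    score += "mallampati_ge_3" in findings         # M: Mallampati >= 3
    score += "obstruction" in findings             # O: airway obstruction
    score += "limited_neck_mobility" in findings   # N: neck mobility
    return score

print(lemon_score({"large_tongue", "mallampati_ge_3", "interincisor_lt_3"}))  # 3
```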
Avulsions, fractures and dislocations occur most frequently during the laryngoscopy manoeuvres described. 2 There are many factors contributing to the force applied on the maxillary incisors during laryngoscopy and/or tracheal intubation. Variables that might influence the measured laryngoscope compression can be related to the patient (e.g., sex, weight, height, narrowness of the palate, and neck thickness), the person intubating the patient (e.g., experience, manipulation techniques, and forces used), and the anesthetic technique (e.g., the degree of muscle relaxation, whether manual in-line stabilization is performed, and the type and size of the blade used).

Conclusion

The contact force exerted on the maxillary incisors due to direct laryngoscopy was found to vary between patients with a potentially difficult airway.

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. In the forms, the patients have given their consent for their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published, and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.

Conflicts of interest

There are no conflicts of interest.
Prognostic value of diabetes and metformin use in a real-life population of head and neck cancer patients

Introduction: Head and neck carcinoma (HNC) is a disease with a poor prognosis despite currently available treatments. The management of patients with this tumor is often complicated by several comorbidities. Among these, diabetes is the second most frequent, and its influence on the prognosis is not known. Methods: In this work, we collected data on progression-free survival (PFS) and overall survival (OS) of one hundred twenty-three patients with HNC who received biweekly cetuximab maintenance treatment after first-line chemotherapy. We then compared the survival of nondiabetic versus diabetic patients. Results: Surprisingly, both PFS (4 vs. 5 months, HR 2.297, p < 0.0001) and OS (7 vs. 10 months, HR 3.138, p < 0.0001) were in favor of diabetic patients, even after excluding other clinical confounding factors. In addition, we also studied survival in patients taking metformin, a widely used oral antidiabetic drug that has demonstrated antitumor efficacy in some cancers. Indeed, diabetic patients taking metformin had better PFS and OS than those not taking it, 7 vs. 5 months (HR 0.56, p = 0.0187) and 11 vs. 8.5 months (HR 0.53, p = 0.017), respectively. Discussion: In conclusion, real-world outcomes of biweekly cetuximab maintenance remain comparable to clinical trials. The prognostic role of diabetes and metformin was confirmed to be significant in our series, but further prospective studies are needed for a definitive evaluation.
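The PFS and OS comparisons reported in the abstract rest on Kaplan-Meier estimates (as described in the Statistical analysis section). A minimal, dependency-free sketch of the estimator, S(t) = Π(1 − d_i/n_i) over the distinct event times:

```python
# Minimal Kaplan-Meier estimator sketch (not the study's GraphPad code).
# times: follow-up times; events: 1 for death/progression, 0 for censoring.

def kaplan_meier(times, events):
    """Return [(event_time, survival_probability), ...]."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)   # events at time t
        leaving = sum(1 for tt, _ in data if tt == t)  # events + censored at t
        if deaths:
            s *= 1 - deaths / at_risk
            curve.append((t, s))
        at_risk -= leaving
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve
```

For example, `kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])` steps the survival down at months 1, 2 and 4, with the censored subject at month 3 simply leaving the risk set.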
Introduction

Head and neck cancer (HNC) includes epithelial tumors that originate from the oral cavity, pharynx, larynx, paranasal sinuses, nasal cavities and salivary glands. HNC is the seventh most common cancer worldwide with a generally poor prognosis (five-year survival ranging from 25 to 61% depending on site) (1). Available treatments include chemotherapy, combined or not with radiotherapy, the use of anti-epidermal growth factor receptor (EGFR) drugs such as cetuximab and, recently, also immune checkpoint inhibitors such as pembrolizumab and nivolumab (2). In patients with recurrent or metastatic HNC, prior to the advent of immunotherapy, the addition of cetuximab to first-line platinum (cisplatin or carboplatin) and 5-fluorouracil chemotherapy, and the continuation of cetuximab as maintenance therapy in the case of tumor response or disease stabilization, significantly improved overall survival (OS), progression-free survival (PFS), and response rate (RR) (3). Maintenance therapy with simplified biweekly (instead of weekly) cetuximab is a safe, effective and feasible alternative in these patients (4,5).
In cancer patients, comorbidities contribute to determining survival and also determine the oncological treatments that can be administered. This is especially true for HNC patients because they are often elderly patients with other comorbidities sharing common risk factors such as alcohol and smoking (1). One of the most common comorbidities is type 2 diabetes mellitus (T2DM), which currently affects approximately 500 million people worldwide (6). Already in preclinical models, the consequences of diabetes, such as inflammation, hyperglycemia, hyperinsulinemia and increased levels of insulin-like growth factor 1 (IGF-1), have been shown to promote tumor growth (7)(8)(9). In particular, the IGF receptor has been shown to activate EGFR and to exert antiapoptotic effects (10)(11)(12). The correlation between diabetes and the incidence of some cancers, such as liver, pancreas, endometrium, colon and rectum, breast and bladder cancer, has been known for many years (7). In contrast, the evidence is conflicting on the correlation between diabetes and the risk of developing HNC and on the prognostic value of diabetes in this tumor (13)(14)(15)(16). A relationship between the two diseases therefore seems to exist, but the mechanisms are complex, and it appears to be mediated or confounded by smoking, alcohol use, and body mass index (BMI)/obesity, and so requires further elucidation. In addition, patients with HNC and T2DM undergoing concurrent chemoradiotherapy, compared with patients without diabetes mellitus, experienced higher rates of infection and hematotoxicity, loss of body weight, and higher treatment-related mortality (17).
Several clinical and preclinical studies have demonstrated the efficacy of metformin, an oral antidiabetic, in improving survival and response rate in some types of tumors, such as breast (18), colorectal (19), pancreatic (20), esophageal (21), and lung cancer (22). The hypotheses on its anticancer action mainly concern the activation of adenosine monophosphate activated protein kinase (AMPK), which inhibits a pathway involved in the proliferation of cancer cells (23). Furthermore, some mainly retrospective studies have found an impact of metformin also on the survival of patients with HNC (24-26). In the present retrospective study, we collected survival data in a population with recurrent or metastatic HNC, and compared patients with and without diabetes. Next, in the subgroup of diabetic patients, we analyzed the differences between patients taking metformin and those not taking it.

Patients and methods

One hundred twenty-three adult patients with recurrent or metastatic HNC were treated with biweekly (q2w) cetuximab in a single institution (Oncology Unit of Hospital "San Giovanni di Dio" in Frattamaggiore, Italy) from December 2016 to May 2019. All patients selected for treatment had histologically verified and evaluable HNC and had received prior platinum-based chemotherapy plus cetuximab, with at least stable disease after the end of this therapy. Patients received cetuximab at 500 mg/m2 over 2 h on days 1 and 15 of each 28-day cycle. Patients continued to receive treatment until disease progression, unacceptable toxicity or patient refusal. Concomitant adverse events were recorded before each course according to the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE) version 3.0. If patients developed grade 3 skin toxicity, the dose of cetuximab was postponed until recovery to grade 2; in those with recurrent episodes of grade 3 skin toxicity, the dose of cetuximab was reduced by 20% in the subsequent treatment cycles. Patients classified
as "diabetics" were initially diagnosed according to the American Diabetes Association criteria (casual plasma glucose concentration ≥200 mg/dL, fasting plasma glucose ≥126 mg/dL, or 2-h glucose ≥200 mg/dL after an Oral Glucose Tolerance Test). BMI was calculated as body mass (kg) divided by the square of body height (m). It was used to differentiate underweight (<18.5 kg/m2), normal (18.5-24.9 kg/m2), overweight (25-29.9 kg/m2), and obese individuals (≥30.0 kg/m2) according to the World Health Organization (WHO).

Data were collected starting from maintenance treatment with cetuximab. The retrospective study protocol was approved by the board at the study site (ASL Napoli 2 Nord). All patient information was recorded in an internal computer database. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice guidelines. All patients signed a written informed consent and agreed with the research use of their anonymized data.

Statistical analysis Survival curves were generated based on the Kaplan-Meier method. OS was defined as the time from cetuximab initiation date to death of any cause, censored at the last follow-up date; PFS was defined as the time from cetuximab initiation date to any failure, censored at the last follow-up date. Statistical significance of survival curves was calculated using the log-rank test. Cox proportional hazards regression models were used for multivariable analysis (MVA). GraphPad v.9.5 was used to generate survival curves and to calculate statistics throughout the entire manuscript. A p value of less than 0.05 was considered statistically significant.

Patients' characteristics 123 patients were enrolled in the analysis. Baseline patient characteristics are summarized in Table 1. The median age was 65 years with a preponderance of males over females (76.4% vs.
23.6%). 48.8% had a BMI in the normal range or higher. 48.8% were smokers and 31.7% had moderate to heavy alcohol consumption. Within the total population, 57 (46.3%) patients had a diagnosis of T2DM, of whom 31 (54.4%) were on metformin at baseline. Furthermore, the most frequent sites of HNC were the larynx (41.5%), oral cavity (16.3%), oropharynx (14.6%) and hypopharynx (13.8%). The majority of patients had distant metastatic disease (69.9%) and all had induction chemotherapy with cisplatin (64.2%) or carboplatin (35.8%) plus fluorouracil.

Efficacy and safety of maintenance treatment with cetuximab The entire study population started biweekly cetuximab as maintenance treatment after chemotherapy induction (Table 2). In patients with T2DM, the objective response rate was higher in patients taking metformin (29% vs. 19.2%). Median PFS in the general population was 4 months, while median OS was 8 months. No survival differences were observed considering Body Mass Index (BMI) for underweight versus normal and overweight individuals (Table 2). Both PFS (Figure 1A) and OS (Figure 1B) were worse in the subgroup of patients without T2DM compared with those with T2DM: 4 vs. 5 months (Hazard Ratio (HR) 2.297, p < 0.0001) and 7 vs. 10 months (HR 3.138, p < 0.0001), respectively. Furthermore, we compared the survival of diabetic patients who took metformin versus those who did not. The metformin group had a survival advantage in terms of both PFS (7 vs. 5 months, HR 0.56, p = 0.0187) (Figure 1C) and OS (11 vs.
8.5 months, HR 0.53, p = 0.0170) (Figure 1D). These data were also confirmed using Cox regression and entering variables such as sex, age, performance status, smoking habits and alcohol habits. Indeed, in the study population, diabetes is a protective factor in patients with HNC, both in terms of PFS and OS (Table 3). Conversely, for PFS, older age has a negative effect on prognosis; smoking, on the other hand, does not have a statistically significant effect. Analyzing the data relating to metformin with Cox regression, it is confirmed as a positive prognostic factor. This is statistically significant in terms of OS, while for PFS there is only a trend in favor of its use (Table 3). Adverse events (AEs) during cetuximab treatment are shown in Table 4. Treatment was generally well tolerated with an acceptable rate of adverse events, rarely grade > 2 (according to CTCAE). The most common were fatigue, hypomagnesemia and typical cutaneous adverse events (acneiform rash, desquamation, nail disorders). No differences in toxicity related to cetuximab treatment were observed in the diabetic population.

Discussion To date, HNCs still represent tumors with a poor prognosis despite available therapies. Due to characteristics such as location and risk factors (especially smoking and alcohol), this type of tumor often affects patients with other comorbidities. The most frequent non-neoplastic comorbidities in these patients are pulmonary disease (17.9%), diabetes mellitus (7.9%), myocardial infarction (6.7%), and peptic ulcer disease (5.2%) (27).
There are several studies that have demonstrated an increased risk of HNC in patients with diabetes (13, 28-32) and others that have also demonstrated a worse prognosis of these patients compared to non-diabetic patients (15, 29). However, there are some studies on populations with HNC that contradict these data, demonstrating that there is no correlation with diabetes, either as a risk factor (33-35) or as a negative prognostic factor (36, 37).

Surprisingly, in our study, patients with diabetes have a better survival, both in terms of PFS and OS. This is confirmed by excluding other confounding factors such as gender, age, performance status, and alcohol and smoking habits. These data, even considering the limitations of the study, could be influenced by two factors. One could relate to the involvement of the insulin-like growth factor (IGF-1) pathway. In fact, it has been demonstrated that IGF-1 influences tumor growth, having a mitogenic and antiapoptotic role (38, 39). IGF-1 performs its role through its receptors (IGF-1R), and the levels of IGF-1 are influenced by the levels of circulating proteins that bind to it (IGF-BP). Since insulin increases IGF-1 levels and decreases IGF-BP levels, and insulin levels are subnormal in diabetics, this may explain the better prognosis of these patients. Indeed, some works have shown that elevated expression of IGF-BP-3 was associated with a shorter time to progression in HNC patients (40) and that levels of IGF factors were found to be maximum in stages with better prognosis in oral cancer (41). The other factor that could positively influence the prognosis of these patients is the use of metformin. In fact, this drug has been studied in numerous types of cancer for its presumed antitumor effects (41); however, the lack of trials with an adequate study design and sample size has not allowed a definitive conclusion regarding its efficacy. For HNCs as well, there have been numerous studies, including meta-analyses (24-26, 42, 43),
that have shown a positive prognostic association between metformin use and cancer (44, 45), particularly in patients with oral cancer (46, 47), hypopharyngeal carcinoma (24), and laryngeal carcinoma (42, 48). In our study as well, we found a better survival in diabetic patients who took metformin compared to those who took other hypoglycemic drugs, even excluding other confounding factors. In addition, metformin has been shown to act as a radiosensitizer in colorectal, pancreatic, and esophageal cancer (24), but there are few data on HNC (11). This may be another factor that influenced the results of our study, as the entire population received radiotherapy for HNC before starting cetuximab. However, there are some studies that have not shown any benefit of metformin in patients with HNC and that should be taken into consideration (26, 49).

Our study has some limitations, including its retrospective nature, the sample size, a selection bias due to the single medical center, and the presence of other confounding factors not considered; however, it represents an important description of real clinical practice. In conclusion, the correlation between the prognosis of patients with HNC and diabetes, as well as that with metformin use, needs further study. In particular, prospective studies evaluating the use of metformin also in non-diabetic HNC patients represent an important goal in the future of oncology.

FIGURE 1 Kaplan-Meier curves of progression-free survival (A) and overall survival (B) in the overall population, and of progression-free survival (C) and overall survival (D) in the diabetic population.

TABLE 2 Efficacy of maintenance treatment with cetuximab.

TABLE 3 Multivariate analysis for progression-free survival (PFS) and overall survival (OS).

TABLE 4 Adverse events in the overall population and in the type 2 diabetes mellitus (T2DM) subgroup.
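The Kaplan-Meier method used for the survival curves in this study multiplies, at each event time, the conditional probability of surviving past that time, with censored patients simply leaving the risk set. A minimal sketch in Python (the toy cohort and the helper names `kaplan_meier` and `median_survival` are ours for illustration, not the study's data or its GraphPad workflow):

```python
# Minimal Kaplan-Meier estimator, as used for the PFS/OS curves.
# Illustrative only: the times/events below are invented, not study data.

def kaplan_meier(times, events):
    """Return (time, survival) pairs. events[i] = 1 for an event
    (death/progression), 0 for censoring at last follow-up."""
    n_at_risk = len(times)
    survival = 1.0
    curve = [(0.0, 1.0)]
    # process events before censorings at tied times (standard convention)
    for t, d in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if d:  # event: survival drops by the conditional factor (1 - 1/n)
            survival *= 1.0 - 1.0 / n_at_risk
            curve.append((t, survival))
        n_at_risk -= 1  # censored subjects just leave the risk set
    return curve

def median_survival(curve):
    """First time at which the survival curve drops to 0.5 or below."""
    for t, s in curve:
        if s <= 0.5:
            return t
    return None  # median not reached

# Hypothetical months to progression (1 = progressed, 0 = censored)
times = [2, 3, 4, 4, 5, 6, 8, 9, 10, 12]
events = [1, 1, 1, 0, 1, 1, 1, 0, 1, 0]
curve = kaplan_meier(times, events)
print(median_survival(curve))  # -> 6 (median of this toy cohort, in months)
```

The "median PFS/OS" figures reported above are exactly this quantity: the first time the estimated survival probability falls to 0.5; the log-rank test and Cox models then compare such curves between groups.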
Colour or shape: examination of neural processes underlying mental flexibility in posttraumatic stress disorder Posttraumatic stress disorder (PTSD) is a mental disorder that stems from exposure to one or more traumatic events. While PTSD is thought to result from a dysregulation of emotional neurocircuitry, neurocognitive difficulties are frequently reported. Mental flexibility is a core executive function that involves the ability to shift and adapt to new information. It is essential for appropriate social-cognitive behaviours. Magnetoencephalography (MEG), a neuroimaging modality with high spatial and temporal resolution, has been used to track the progression of brain activation during tasks of mental flexibility called set-shifting. We hypothesized that the sensitivity of MEG would be able to capture the abnormal neurocircuitry implicated in PTSD and this would negatively impact brain regions involved in set-shifting. Twenty-two soldiers with PTSD and 24 matched control soldiers completed a colour–shape set-shifting task. MEG data were recorded and source localized to identify significant brain regions involved in the task. Activation latencies were obtained by analysing the time course of activation in each region. The control group showed a sequence of activity that involved dorsolateral frontal cortex, insula and posterior parietal cortices. The soldiers with PTSD showed these activations but they were interrupted by activations in paralimbic regions. This is consistent with models of PTSD that suggest dysfunctional neurocircuitry is driven by hyper-reactive limbic areas that are not appropriately modulated by prefrontal cortical control regions. This is the first study identifying the timing and location of atypical neural responses in PTSD with set-shifting and supports the model that hyperactive limbic structures negatively impact cognitive function. 
INTRODUCTION Posttraumatic stress disorder (PTSD) is a trauma-related mental disorder or injury that stems from exposure to one or more events that involved actual or threatened death or serious injury. Although the clinical presentation varies, individuals suffering from this condition experience symptoms that include re-experiencing, avoidance of triggering situations or stimuli, negative mood and cognitions, and elevated levels of arousal and reactivity. Cognitive symptoms are also very commonly reported, and include impaired concentration, affect and increased impulsivity. This condition results in significant distress as well as impairments in functioning 1 (for review, see Pitman et al. 2 ). Structural neuroimaging results have been equivocal. The most robust finding is that the hippocampi are smaller in patients with PTSD; 3,4 however, these differences may have existed before the PTSD-inducing event. 2,5 Further, there have been suggestions that the volumes of the anterior cingulate 6,7 and medial prefrontal cortex (PFC) 8,9 are smaller in PTSD. Using diffusion tensor imaging, white matter structure was noted to be poorer in areas near the anterior cingulate, PFC and posterior angular gyrus. 10 However, there is no clear consensus on the neuroanatomical changes in PTSD in the literature and no definitive structural 'biomarker' for PTSD (for a review, see Dolan et al. 11 ). Functional neuroimaging studies using positron emission tomography, single-photon emission computed tomography or functional magnetic resonance imaging (fMRI) have elaborated a model of atypical neurocircuitry in PTSD. This model proposes that the amygdalae are hyper-reactive and generate a heightened fear response, whereas the medial PFC including the rostral anterior cingulate cortex, is hypo-responsive, and thus fails to appropriately inhibit the amygdala. 
Finally, the bilateral hippocampi are thought to be hyper-reactive within this circuit and responsible for generating intrusive memories (for reviews, see Dolan et al. 11 and Hughes and Shin 12 ). This model is supported by a meta-analysis of fMRI studies; however, these authors raise the caveat that it is difficult to understand PTSD using protocols that activate a large-scale spatially distributed neural network, and there is value in focusing on the function within a specific network. 13 Although PTSD is viewed as a disorder of fear regulation, the impact on neurocognitive functions is well documented, particularly in the domains of intellectual ability and executive functioning. 11,14,15 Interestingly, although the memory and attentional aspects of executive control are clearly impacted by PTSD, the effect of PTSD on the mental flexibility component of executive function is not as clear. Early behavioural studies examining PTSD and mental flexibility did not find group differences in performance. 16,17 However, a more recent fMRI study reported that individuals with PTSD failed to activate the right insula when performing an affective set-shifting task, 18 whereas elite military warriors without PTSD showed increased right anterior insula activation, perhaps reflecting their ability to perform well in highly stressed military situations. 19 The right insula has been identified as a key hub region for monitoring and switching; 20,21 we postulated that by using magnetoencephalography (MEG), a neuroimaging modality with high temporal and high spatial resolution, we could better understand the role of the right insula, as it pertains to set-shifting, within the context of the neurocircuitry of PTSD. The ability to shift sets is an important feature underlying mental flexibility, a key executive function.
22 fMRI studies have identified brain areas in prefrontal, dorsolateral frontal 23,24 and posterior cortical regions 25,26 as important in the successful completion of this task. Furthermore, using MEG, we found activations in bilateral dorsolateral prefrontal cortices as early as 100 ms poststimulus onset, with recruitment of bilateral posterior parietal cortices and these activations were sustained until approximately 500 ms post stimulus. 27 In the current study, we used this spatially and temporally sensitive neuroimaging approach to focus on the specifics of the neural processes underlying set-shifting in soldiers with PTSD compared with matched control soldiers. We employed a protocol with an easy (intra-dimensional) and a more difficult (extradimensional) shift, and applied advanced analysis methods to localize the spatiotemporal progression of activation in the two groups. We hypothesized that the sensitivity of MEG would allow us to determine whether the abnormal neurocircuitry in PTSD negatively impacted brain regions implicated in set-shifting. This is important as this may be a mechanism by which associated cognitive deficits are generated. MATERIALS AND METHODS Participants Participants were active duty service members from the Canadian Armed Forces and included 22 soldiers diagnosed with PTSD (all males; mean age = 33.1 years ± 5.9 s.d.; range 27-45 years) and 24 control soldiers (all males; mean age = 37.6 years ± 6.8 s.d.; range 26-48 years). All participants were veterans having served in Afghanistan and/or Bosnia. For both groups, exclusion criteria included any history of seizures, traumatic brain injury, other neurological disorders and standard neuroimaging (MRI and MEG) safety exclusions. Participants taking anticonvulsant medications, benzodiazepines, or GABA antagonists were also excluded from the study. Control soldiers with an active substance use disorder were also not included. 
Individuals with PTSD were diagnosed (DSM-IV Axis I disorders, American Psychiatric Publishing) with a semi-structured clinical interview and psychometric testing by a psychiatrist or psychologist at a Canadian Armed Forces Operational Trauma and Stress Support Centre. These individuals were provided with information describing the study and asked to self-identify and volunteer if they wished to participate. Care was provided regardless of the participation decision. After expressing interest in volunteering for this study, potential participants were re-screened by a Canadian Armed Forces psychiatrist either in person or by teleconference and their medical records re-reviewed to confirm PTSD. For all participants in the PTSD arm, onset of PTSD was traced to an operationally related traumatic event (criterion A1). All participants were Afghanistan veterans, and >50% had participated in two or more missions. Of the participants with PTSD, 69.5% had a comorbid diagnosis of depression, 27.3% had a substance abuse disorder, and 18.2% had another anxiety disorder. Control soldiers were recruited through flyers and advertisements posted at Canadian Forces bases in Ontario. Before acceptance into the study, controls were screened over the telephone using the Defense and Veterans Brain Injury Center traumatic brain injury screening tool 28 to rule out traumatic brain injury and the PC-PTSD (primary care screen; www.ptsd.va.gov) to rule out PTSD. Control soldiers were matched for years of service, experience in Afghanistan and number of deployments. All testing was conducted in the MEG Lab at the Hospital for Sick Children and received institutional ethics approvals from both the Hospital for Sick Children and Defence Research and Development Canada. All participants gave informed written consent. Neuropsychological and clinical assessments All participants completed a short battery of neuropsychological tests as well as brief clinical assessments.
The tests, their means and standard deviations for each group are contained in Table 1. A between-groups t-test identified significant differences between the control and PTSD soldiers on measures of anxiety (GAD-7) and depression (PHQ-9). Although IQ was significantly different, both groups were in the same upper normal range. Scores on the alcohol use questionnaire (Alcohol Use Disorders Identification Test) did not show any significant differences. The posttraumatic stress disorder checklist (DSM-IV, version S) was only administered in the PTSD group and confirmed significant PTSD symptoms in this group. Stimuli and task Subjects completed a version of the Intra-Extra Dimensional Set Shift Test adapted from the Cambridge Neuropsychological Test Automated Battery (CANTAB, Cambridge Cognition, Cambridge, UK 34 ) and modified for MEG. 27 The stimuli consisted of 36 images, where each image could be described by two dimensions: colour and shape. The subject was required to match along one dimension. After several trials, the match parameter shifted. There were two types of shifts: intra-dimensional and extra-dimensional. Intra-dimensional shifts were easier and within the same dimension, that is, colour-to-colour, whereas extra-dimensional shifts were between dimensions, that is, colour-to-shape. Subjects completed 50 intra- and 50 extra-dimensional shifts occurring randomly within a total of 370 trials. Stimuli were presented using Presentation software (NBS, Berkeley, CA, USA). The task was self-paced with an interstimulus interval that was randomly jittered between 0.8-1.2 s. MEG data acquisition Before entering the MEG shielded room, subjects were trained on the task. Participants were tested supine and MEG data were recorded continuously (600 Hz sampling rate, DC-100 Hz bandpass) on a 151-channel whole-head CTF MEG (MISL, Coquitlam, BC, Canada).
A T1-sagittal MPRAGE structural MR was obtained on a 3 T scanner (Siemens AG, Erlangen, Germany) to allow co-registration of the MEG data to each subject's own brain anatomy. MEG data analysis The continuous data for each participant were epoched into trials by time-locking to stimulus onset, creating a trial length from 200 ms prestimulus to 1100 ms poststimulus onset. The trials were sorted into shift and non-shift trials. Non-shift trials were discarded. Retained trials were sorted into intra- and extra-dimensional shifts, and only correct trials for each shift type were submitted to further analyses. Data were bandpass filtered from 1-40 Hz 27 and global field power (GFP) plots (root mean square across sensors) were used to identify time windows of interest; these windows were submitted to beamforming 35 (4 mm grid reconstructed to encompass the whole brain volume with a head model fit to the inner skull surface) to localize neural sources active during each of these windows. The results for each subject were normalized into stereotaxic space using SPM8 (www.fil.ion.ucl.ac.uk/spm/software/spm8) and averaged across subjects by condition. These were submitted to nonparametric statistical testing (2946 permutations) and corrected for multiple comparisons using a single-threshold maximal statistic (t-max). 36,37 This has been demonstrated to provide strong control for family-wise type I errors. 38 Further, image contrasts for each time window were computed between groups and subjected to statistical testing. Analysis of time course of activation Coordinates of brain locations that showed significant activations were noted, and the time courses for these activations were reconstructed. This was accomplished by unwarping the Talairach coordinates back into each subject's brain space, calculating the time courses at the peak location of interest, then rectifying and averaging the resultant waveforms across the group. The differences between the control and PTSD groups at each time point in the time courses were permuted and tested for significance.
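The single-threshold maximal-statistic (t-max) correction described above builds its null distribution from the maximum statistic over all locations in each permutation, which is what gives it strong family-wise error control. A minimal NumPy sketch of the idea for a generic two-group comparison (Welch t statistic, toy data, our own variable names; this is an illustration of the method, not the study's actual pipeline, and it uses 500 permutations rather than 2946 for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

def tmax_permutation(group_a, group_b, n_perm=500):
    """Two-sample t statistic at each location, corrected for multiple
    comparisons with the single-threshold maximal statistic (t-max)."""
    def tstat(a, b):
        # Welch-type t statistic, one value per location (column)
        va = a.var(axis=0, ddof=1) / len(a)
        vb = b.var(axis=0, ddof=1) / len(b)
        return (a.mean(axis=0) - b.mean(axis=0)) / np.sqrt(va + vb)

    observed = tstat(group_a, group_b)
    pooled = np.concatenate([group_a, group_b])
    na = len(group_a)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        # shuffle group labels, keep the MAX |t| across all locations
        perm = rng.permutation(len(pooled))
        null_max[i] = np.abs(tstat(pooled[perm[:na]], pooled[perm[na:]])).max()
    # corrected p per location: fraction of permutations whose maximal
    # statistic reaches the observed |t| there (family-wise control)
    p_corr = (null_max[None, :] >= np.abs(observed)[:, None]).mean(axis=1)
    return observed, p_corr

# toy data: 24 "controls" vs 22 "PTSD", 5 locations, effect only at index 0
ctrl = rng.normal(0, 1, size=(24, 5))
ctrl[:, 0] += 2.0
ptsd = rng.normal(0, 1, size=(22, 5))
t_obs, p = tmax_permutation(ctrl, ptsd)
```

Because every location's observed statistic is compared against the same distribution of per-permutation maxima, a location is only declared significant if it exceeds what the most extreme location would show by chance, which is the "strong control" property cited in the text.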
Latencies where the difference in field strength remained significant after correction for multiple comparisons were marked on the time course. These indicated the locations and the latencies when brain activations differed significantly between the groups. Behavioural results Reaction times for the PTSD and control groups for the intra- and extra-dimensional shifts were submitted to a 2 × 2 mixed factorial analysis of variance with group as the between-subject variable and condition as the within-subject variable. The extra-dimensional shift (668 ms ± 19.9 s.d.) required significantly longer (F(1,88) = 6.36, P < 0.02) to complete than the intra-dimensional shift (615 ± 19.9), but there was no main effect between groups and no significant interaction. The accuracy data also were submitted to a 2 × 2 mixed factorial analysis of variance. Performance on the extra-dimensional shift (85.6% ± 2.4 s.d.) was significantly less accurate (F(1,88) = 8.11, P < 0.01) than on the intra-dimensional shift (91.8 ± 1.7), and soldiers with PTSD (84.6 ± 2.9) were significantly less accurate (F(1,44) = 14.34, P < 0.001) than the controls (92.8 ± 1.2). Source analyses Talairach coordinates, anatomical labels and Brodmann labels for significantly activated (P < 0.01, corrected) brain locations for each time window in the intra-dimensional shift condition are contained in Table 2 (top half). The cells are colour coded, where yellow indicates activation in dorsolateral frontal cortex, green represents posterior parietal cortex and blue signifies insula. Table 2 shows that for both control and PTSD soldiers, performance of the set-shifting task recruited bilateral insulae and left posterior parietal cortex. Further, the control soldiers recruited left dorsolateral frontal cortex, whereas the soldiers with PTSD required bilateral dorsolateral frontal cortices. Brain regions involved in extra-dimensional shifting are listed in the bottom half of Table 2, using the same colour-coding scheme.
Immediately, it is clear that the harder extra-dimensional shifts required a greater number of brain regions for both groups, although the expected regions are seen. For the control soldiers, activated areas included the left insula, left posterior parietal cortex and bilateral dorsolateral frontal cortices, whereas the individuals with PTSD required bilateral insulae, left posterior parietal regions and bilateral dorsolateral frontal cortices. In Table 2, there are a number of cells coded with the colour brown. These cells indicate neural regions that are not typically seen in a set-shifting task. Specifically, the left posterior cingulate showed prominent and significant activation in both groups for both types of set-shifting. To explore the involvement of the left posterior cingulate, the time course of activation in this location was reconstructed for the two groups and tested for differences. Figure 1 reveals that the posterior cingulate was activated earlier and to a greater extent in the PTSD as compared with control soldiers. Between-groups image contrast To directly compare the differences in neural activation between the control and PTSD groups for the two conditions, we submitted the source localization results to an image contrast where significant differences (P < 0.01, corrected) between the images in each of the time windows were identified. For the intra-dimensional shift, there were three regions with significant group differences: the right insula (BA 13), the left inferior frontal gyrus (BA 47) and the right parahippocampal gyrus (BA 35). For the extra-dimensional shift, only two regions showed significant differences between the PTSD and control soldiers: the left posterior cingulate (BA 30/31) and the left parahippocampal gyrus (BA 19). As mentioned above, the posterior cingulate is not typically seen in a set-shifting task, nor are the parahippocampal gyri.
To explore the involvement of these paralimbic structures, the time courses of activation for the right and left parahippocampal gyri were reconstructed and are shown in Figure 2. It is clear from this figure that the parahippocampal gyrus is more activated in the PTSD compared with the control group. Interestingly, the between-groups differences are more pronounced with intra- rather than extra-dimensional shifting. Exploring the insula with reconstructed time course data To test the hypothesis proposed by Simmons et al. 19 that elite soldiers without PTSD show increased right insula activation, we reconstructed time courses in the right insula and statistically compared them between groups. To ensure that this was not due to an overall greater activation in one group, we also reconstructed time courses for both groups in the left posterior parietal cortex, the other key hub region for set-shifting. These time courses are contained in Figure 3 and show that the right insula is more activated in the control than the PTSD group, although both groups showed similar neural involvement. Further, it is clear that the suppressed involvement of right insula in the PTSD group is specific to right insula and is not seen in the left posterior parietal area, the other canonical set-shifting region. DISCUSSION In this study, using the excellent temporal and spatial resolution of MEG, we determined significant differences in the brain regions recruited during performance of a set-shifting task between soldiers with PTSD and matched military controls. Our behavioural data demonstrated that although soldiers with PTSD had reaction times that were comparable with matched controls, their performance was significantly less accurate. This represents the classic speed-accuracy trade-off, where increased task difficulty forces the participant to choose between maintaining speed or accuracy. In this case, soldiers with PTSD found the extra-dimensional shift sufficiently more difficult that they traded accuracy for maintenance of reaction time.

Figure 1. Reconstructed time courses from the left posterior cingulate, an area that was identified as active in this set-shifting task, although not typically seen on this kind of protocol. For intra-dimensional shifting, the PTSD group shows significantly greater activation in an early time window. For extra-dimensional shifting, the between-groups difference is no longer significant due to the increased activation in this area in the military controls. Possibly, this increased activation reflects the increasing difficulty of the extra-dimensional shift, which is manifest as a stress-related increase in paralimbic regions for the controls. PTSD, posttraumatic stress disorder.

Figure 2. Reconstructed time courses from the right and left parahippocampal gyri, regions that were identified as significantly different between groups on an image contrast. For intra-dimensional shifts, the right parahippocampal gyrus shows a significantly greater response in the PTSD group, whereas for extra-dimensional shifting, both groups show an increased response, although the increase is greater in the controls such that they reach a similar level as the PTSD group. Possibly, this reflects the reaction of the paralimbic structures to the stress of completing this more difficult condition of the task. PTSD, posttraumatic stress disorder.

Figure 3. Reconstructed time courses from the right insula and left supramarginal gyrus to test the hypothesis of whether the insula is specifically recruited in unaffected individuals to maintain task performance. As the supramarginal gyrus does not show significant differences between PTSD and controls, this suggests that the increased activation in the controls is localized to the insula and is not a general global increase. PTSD, posttraumatic stress disorder.
Initial source analysis in the two groups identified similar areas of activation that proceeded from dorsolateral frontal cortex to insula to posterior parietal cortices. These areas are consistent with both fMRI and MEG reports in the literature for set-shifting 23-26 and our work in civilian adults. 27 However, both our source analyses of the spatiotemporal progression of set-shifting and our analysis of the between-groups contrast revealed significant activations in the paralimbic cortex, an area not typically activated for set-shifting. Specifically, we saw activations in posterior cingulate, parahippocampal gyri and regions in the temporal lobes. This finding was both surprising and reassuring. Surprising in that the set-shifting task is not an emotional, affective or traumatic task: the stimuli consisted of simple coloured shapes and are entirely innocuous. Reassuring in that our identification of these regions fits well with proposed models of the neurocircuitry of PTSD. For example, the reviews of Hughes and Shin 12 and Patel et al. 13 concur that the amygdalae and hippocampi are hyper-reactive, whereas the medial PFC is hypo-responsive and fails to inhibit the limbic structures. Further, diffusion tensor imaging studies reported reductions in white matter volume in the cingulum and superior longitudinal fasciculus in PTSD. 39 In our case, we did not find amygdalae responses, which fits with our presentation of nonemotional, innocuous stimuli. However, we found a dissociation of the response in the paralimbic structures: specifically, we found increased activation in the cingulate and parahippocampal cortex in the group with PTSD, whereas the medial PFC, particularly the insula, was significantly less active. As suggested by this model, posterior parietal regions are not substantively different in PTSD.
Although findings in the literature suggest that increased anterior cingulate cortex activity is a biomarker reflecting a familial risk of developing PTSD after trauma, 2 our data do not support this, as we did not see greater anterior cingulate activation in the PTSD group compared with the controls that would have differentiated the groups. In fact, both groups showed posterior cingulate involvement, and this was more pronounced in PTSD. However, the studies used different tasks; thus, it would be important to replicate the findings with similar tasks. Also, these prior studies were conducted with fMRI and thus do not have the same temporal resolution as MEG. Therefore, the reported effects may be very slow or late, whereas MEG recordings can capture fast and early neurophysiological activity. Studies using fMRI, by Simmons et al., 18,19 showed that individuals with PTSD failed to appropriately and adequately activate the right insula when performing a set-shifting task. Further, these authors proposed that the right insula is a key region for resilience against PTSD and for the maintenance of performance during stressful situations. We specifically tested this hypothesis by reconstructing time courses in the right insula as well as in the left supramarginal gyrus. We chose the left supramarginal gyrus because it is another established hub area for set-shifting, but it has not been implicated in PTSD. We found a significant difference, with greater and earlier activation in the right insula in the control soldiers for both intra- and extra-dimensional shifting; we did not find significant differences between groups in the left supramarginal gyrus, suggesting that the control soldiers specifically recruited the insula and that this is not simply due to widespread, greater activation in the controls. There are a few caveats to consider when interpreting our findings. One is with regard to our exclusion of some, but not all, medications.
We excluded participants who were taking anticonvulsants, benzodiazepines and GABA antagonists; however, participants were not free from all medications. We acknowledge that this is not ideal, and we point the reader to a discussion of the benefits and costs of deciding on exclusions around participant medications. 40 On a similar note, all participants in the PTSD arm of our study were in active or maintenance psychotherapy. Again, though it would have been more ideal, testing individuals with PTSD who had not undergone any therapy would not have been feasible. Finally, 70% of our cohort had comorbid major depressive disorder; thus, it could be suggested that our findings captured cognitive dysfunction associated with depressive symptomology and not specifically PTSD. However, a classic study using single-photon emission computed tomography demonstrated significantly reduced regional blood flow to paralimbic structures in clinical depression, with underactivation in this condition. 41 Although blood flow does not correlate directly with the MEG neurophysiological response, it is unlikely that underactivity in single-photon emission computed tomography translates into excessive overactivity in MEG. Thus, we think that we have captured cognitive dysfunction associated with PTSD and not depression; however, this needs to be directly confirmed. Finally, a recent study demonstrated that, to some extent, the neuropsychological impairments induced by PTSD can be improved with treatment. 42 One factor that motivated our study of the neural underpinnings of set-shifting in PTSD is the fact that set-shifting, and mental flexibility, is a core component of executive functions. Individuals who struggle with mental flexibility may find that their cognitive difficulty with shifting interacts with other executive control domains, for example, memory processing and inhibition.
If we can elucidate the brain regions underlying abnormal set-shifting in PTSD, then monitoring the impact of therapy on the neural responses of those regions may serve as a biomarker for tracking therapy efficacy. In conclusion, by capitalizing on the high spatial and temporal resolution of MEG, we have determined that the core brain regions underlying set-shifting are comparable between soldiers with PTSD and controls; however, individuals with PTSD demonstrate a significant and atypical involvement of paralimbic regions. This may be one mechanism that impedes performance on a set-shifting task in PTSD; possibly, it contributes to difficulties with mental flexibility, which may underlie deficits in other cognitive executive functions.
Age effect on the prediction of risk of prolonged length hospital stay in older patients visiting the emergency department: results from a large prospective geriatric cohort study Background With the rapid growth of elderly patients visiting the Emergency Department (ED), it is expected that there will be even more hospitalisations following ED visits in the future. The aim of this study was to examine the age effect on the performance criteria of the 10-item brief geriatric assessment (BGA) for prolonged length of hospital stay (LHS) using artificial neural networks (ANNs) analysis. Methods In an observational prospective cohort study, 1117 older patients (i.e., aged ≥ 65 years) admitted to acute care wards of a University Hospital (France) after an ED visit were recruited. The 10 items of the BGA were recorded during the ED visit, prior to discharge to acute care wards. The top third of LHS (i.e., ≥ 13 days) defined prolonged LHS. Analysis was successively performed on participants categorized into 4 age groups: aged ≥ 70, ≥ 75, ≥ 80 and ≥ 85 years. Performance criteria of the 10-item BGA for prolonged LHS were sensitivity, specificity, positive predictive value [PPV], negative predictive value [NPV], likelihood ratios [LR], and area under the receiver operating characteristic curve [AUROC]. The ANNs analysis was conducted using the modified multilayer perceptron (MLP). Results Values of criteria performance were high (sensitivity > 89%, specificity ≥ 96%, PPV > 87%, NPV > 96%, LR+ > 22, LR− ≤ 0.1 and AUROC > 93), regardless of the age group. Conclusions The age effect on the performance criteria of the 10-item BGA for the prediction of prolonged LHS using MLP was minimal, with a good balance between criteria, suggesting that this tool may be used as a screening tool as well as a predictive tool for prolonged LHS. Background A growing number of older adults (i.e., age 65 and over) visit emergency departments (EDs) [1].
In Europe, they account for around 20% of all ED visitors [1,2]. These older ED visitors, particularly the oldest group (i.e., age 85 and over), generally have a longer length of hospital stay (LHS) after their ED discharge to acute care wards compared to younger ED visitors [1][2][3]. The high morbidity burden and related disabilities expose older patients to an increased risk of non-fatal health outcomes such as a long LHS [2,4,5]. With the rapid growth of the oldest segment of ED visitors, hospitalization after an ED admission is expected to be even greater in the future, and thus hospitals need to confront this new and challenging issue [1][2][3][6]. One way to reduce LHS is the early identification of older ED visitors at greater risk of prolonged LHS after an ED discharge to acute care wards [1,5,6]. This screening is a crucial step for targeting appropriate interventions to prevent or decrease the occurrence of non-fatal health outcomes. The predictive tools designed for this purpose should provide a relevant stratification of risk and give information early, ideally before hospital admission, in order to avoid or plan the admission [4,7]. The use of clinical information collected by a physician has been shown to be the best strategy for developing predictive tools of unplanned hospital admissions, compared to self-reported and administrative data collection [8,9]. A limited number of studies have used tools aimed at identifying older patients at greater risk of prolonged LHS after an ED visit, with low predictive accuracy [2,3,5,6]. Recently, the 10-item Brief Geriatric Assessment (BGA) was reported to have a high specificity (97%) but a lower sensitivity (63%) [10]. This study reported the best criteria performance to date. This result was explained in part by the use of artificial neural networks (ANNs), and in particular the modified multilayer perceptron (MLP) [10].
Indeed, ANNs analysis is particularly adapted to predicting an inherently complex event like prolonged LHS [11,12]. The main limitation of that previous study was the imbalance between sensitivity and specificity, which could be related to the large amount of data required by ANNs [11,12]. In addition, because the risk of hospitalization increases with age, it could be suggested that the best balance with the greatest values of performance criteria would be found specifically in the oldest age group (i.e., age 85 and over) of ED users [1][2][3][4]. The present study aims to examine the effect of age on the predictive abilities (i.e., sensitivity, specificity, positive predictive value [PPV], negative predictive value [NPV], likelihood ratios [LR], area under the receiver operating characteristic curve [AUROC]) of the 10-item BGA for prolonged LHS using MLP in geriatric ED visitors. Participants A total of 1117 older patients (i.e., aged ≥ 65 years) were recruited upon their hospitalization after an ED visit in a University Hospital (France) between January 2013 and December 2013. This is an ongoing study that began in 2011, and its procedure for participant recruitment has been previously described in detail [6,10]. To be included, patients had to be hospitalized on acute care wards after an ED visit, be aged 65 years and over, and be willing to participate in research. Patients who died during hospitalization were excluded. Assessment The 10-item BGA was completed upon admission to the ED and was composed of the following items: age ≥ 85 years, male gender, polypharmacy defined as ≥ 5 drugs per day, use of psychoactive drugs (i.e., benzodiazepines, antidepressants or neuroleptics), history of falls in the previous 6 months, temporal disorientation (i.e., inability to give the month and/or year), presence of acute organ failure, reason for admission, living situation (home versus institution), and non-use of formal and/or informal home-help services.
The nature of the acute organ failure motivating the ED visit was categorized into five groups: cardio-vascular diseases, respiratory diseases, digestive diseases, neuropsychiatric diseases, and other acute diseases (Table 1). Other acute diseases referred to a heterogeneous group of diseases including traumatic injuries, hepatic failure, hematological failure and kidney failure. Outcome measure The LHS was calculated using the administrative registry of the University Hospital and corresponded to the number of days between the first day of the ED visit and the last day of hospitalization on an acute care ward. Prolonged LHS was defined as being in the top third of LHS, which corresponded to more than 13 days in the studied sample. The main issue in identifying this threshold value is that there is no consensus on the definition of a prolonged length of hospital stay in geriatric acute care units. This absence of a definition is due to the fact that a prolonged length of hospital stay depends on an accumulation of, and complex interplay between, several variables. These variables are related to the health status of patients but also to the environment where they are hospitalized (e.g., flux of patients, number of health professionals, type of hospital, organization of care, etc.). Thus, the only solution for determining this threshold is to use the consensus method of splitting into tertiles [6,10]. Standard protocol approvals, registrations, and participant consents Patients recruited in this study provided verbal consent themselves or with the help of a trusted person. The consent to participate was recorded in the patients' digital files. The Ethical Committee of Angers, France, approved the entire procedure. Statistical analysis Participants were split into two subgroups based on the presence or absence of a prolonged LHS. The top third of LHS defined prolonged LHS (i.e., > 13 days).
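The tertile-based threshold described above can be sketched as follows. This is an illustrative reconstruction with made-up data, not the study's actual computation; the function names are ours.

```python
# Hypothetical sketch of defining "prolonged LHS" as the top tertile of
# length-of-stay values, as described in the Methods. The toy cohort below
# is invented for illustration only.
def prolonged_lhs_threshold(lhs_days):
    """Return the LHS value at the lower bound of the top tertile."""
    ordered = sorted(lhs_days)
    # the index at two-thirds of the distribution marks the start of the top third
    return ordered[(2 * len(ordered)) // 3]

def is_prolonged(lhs, threshold):
    # prolonged LHS = being in the top third (more than the cutoff, per the text)
    return lhs > threshold

# toy data: stays in days for a small illustrative cohort
stays = [3, 5, 7, 8, 10, 12, 13, 14, 20]
t = prolonged_lhs_threshold(stays)
flags = [is_prolonged(s, t) for s in stays]
```

With the toy data the cutoff lands at 13 days, so only the two longest stays are flagged as prolonged, mirroring the "top third, > 13 days" definition in the study.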
Univariate logistic regression models were used to examine the association between prolonged LHS (dependent variable) and the 10-item BGA (independent variables). Artificial neural networks (ANNs) are inspired by the animal brain and provide computational processing based on machine learning. ANNs are more appropriate for examining "chaotic" events, such as prolonged LHS, because they are not linear statistical models. These systems are interconnected and composed of multiple layers. Nodes from one layer are connected to all nodes in the following layer, but there are no lateral connections within a layer (Fig. 1). The output layer comprised one neuron, indicating the presence or absence of prolonged LHS. The "neuralnet: Training of neural networks" R package was used for the modified multilayer perceptron (MLP), combined with a specific algorithm [9,10]. To perform the ANNs analysis, the sample of participants was randomized into two subgroups (i.e., a training group and a testing group). There was no significant difference between the training and testing groups (data not shown). Between-group comparisons were performed using the unpaired t-test or Pearson's Chi-squared test with Yates' continuity correction, as appropriate. Four age groups were identified: ≥ 70, ≥ 75, ≥ 80 and ≥ 85 years old. Performance criteria were sensitivity, specificity, PPV, NPV, LR+, LR− and AUROC. All statistics were performed using R 3.1.0 and NetBeans IDE 8.0. Results There was a trend toward a greater mean age (P = 0.0699) and a significantly greater prevalence of temporal disorientation (OR = 2.65, P < 0.001) in participants with prolonged LHS compared to those with short LHS (Table 2). In addition, participants with prolonged LHS visited the ED less often for digestive diseases (OR = 0.48, P = 0.0189) and more often for other diseases (OR = 1.46, P = 0.089) compared to those with short LHS (Table 3). Discussion The findings show that the effect of age on the predictive abilities of the 10-item BGA was minimal.
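The MLP structure described above (fully connected layers, no lateral connections within a layer, a single output neuron signalling presence or absence of prolonged LHS) can be illustrated with a minimal forward pass. This is a sketch only: the weights below are invented, and the study trained its network with the R "neuralnet" package, not with this code.

```python
# Minimal sketch of the MLP structure described in the Methods: 10 input
# nodes (one per BGA item), one hidden layer, and a single sigmoid output
# neuron whose thresholded activation indicates prolonged LHS.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp_predict(bga_items, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: every input feeds every hidden node (fully connected,
    no lateral connections within a layer), then one output neuron."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, bga_items)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    out = sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)
    return out >= 0.5  # True -> predicted prolonged LHS

# toy network: 10 inputs, 3 hidden nodes, 1 output (weights are illustrative)
w_hidden = [[0.5] * 10, [-0.25] * 10, [0.1] * 10]
b_hidden = [0.0, 0.0, 0.0]
w_out = [1.0, -1.0, 0.5]
b_out = -0.5
patient = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # binary 10-item BGA profile
prediction = mlp_predict(patient, w_hidden, b_hidden, w_out, b_out)
```

In the actual study, the weights are learned from the training subgroup and the network is evaluated on the held-out testing subgroup.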
These results suggest that the analysis provided by ANNs may enable the 10-item BGA to be used as a screening tool and also as a predictive tool to identify older patients at higher risk of prolonged LHS, whatever their age.

Fig. 1. General structure of the modified multilayer perceptron in this study.

The best criteria performances for prolonged LHS were found in patients aged 75 years and over. This is an unexpected finding, because it was hypothesized that the greatest values of criteria performance would be found in the oldest segment of ED users. This result is discordant with previous studies, which reported a strong association between age and the risk of prolonged LHS [1][2][3][4]. Age has previously been identified as an important predictor of prolonged LHS [10,13,14]. For instance, in a similarly sized cohort of patients admitted to the ED (993 patients, mean age = 87.04 years), age and gender explained 21.6% of the area under the receiver operating characteristic curve value [10]. In the same way, Campbell et al. reported in a larger cohort of patients admitted to the ED (1626 patients, mean age = 78.7 years) that age over 85 years was strongly associated with prolonged LHS (OR = 7.6, P < 0.001) [14]. The association between increased age and prolonged LHS has been explained by incident disabilities, which exceed 50% in hospitalized patients aged 85 years and over [14][15][16]. This finding is consistent with Sourial et al., who showed that age and gender had the highest contribution (C statistic values from 0.51 to 0.67) to the predictive accuracy of incident disability in a cohort of 6657 patients (mean age = 73.68 years) [17]. A possible explanation for the discordance regarding the age effect on the prediction of prolonged LHS in our study compared to previous studies could be the profile of the population recruited, which consists of the oldest old, with a mean age around 85 years.
Moreover, ANNs provide a different statistical approach that considers the complex interplay between all items [11,12,18]. Indeed, previous results with ANNs showed that using numerous variables increased predictive accuracy (area under curve values lying between 84.1 with 9 items and 90.5 with 10 items) but also modified the contribution of demographic items to the predictive performance (from 12.8% with 9 items to 21.3% with 10 items) [10]. Categorizing age groups provides an additional variable that limits the ANNs analysis to a single age group and may modify the distribution and weight of the 10 items in their contribution to predictive accuracy. Thus, ANNs take into account the variations in the contribution of all types of variables (demographic items, acute or chronic diseases, and environmental items) to increase predictive performance and to learn to recognize patterns of prolonged LHS in each age group. Our findings underscore that, regardless of age, values of criteria performance were high (sensitivity > 88%, specificity ≥ 96%, PPV > 87%, NPV > 96%, LR+ > 22, LR− ≤ 0.1 and AUROC > 93). To the best of our knowledge, the current study demonstrates the best performance and balance between criteria reported for the prediction of LHS after an ED visit. This result differs from a recent previous study, which reported lower values and an imbalance between sensitivity and specificity [10]. The main explanation, as suggested in our hypothesis, could be a difference in the number of participants. In our study, we included 1117 individuals, which likely increased the accuracy of prediction. It has been shown that ANNs may provide accurate information on an event only if there is a sufficient quantity of data points to be analysed [11,12]. For a screening test to be applicable, it requires a high level of sensitivity to limit false-negative results.
With sensitivity above 89% in all age group conditions, the 10-item BGA is a useful screening tool that can be applied for the early identification of older ED users at higher risk of prolonged LHS after their discharge to acute care wards. In addition, our results demonstrated a high specificity, above 95%, which implies a low false positive rate when applying the 10-item BGA to predict length of hospitalization. Thus, the combination of both high sensitivity and specificity indicates that the 10-item BGA is not only a simple screening tool but also a diagnostic tool with excellent predictive accuracy. The ability to analyse our data with ANNs methods is the main explanation for these findings. Indeed, ANNs are data analysis tools developed to overcome the limitations of traditional linear models as a method for predicting health events [18]. ANNs are computational models capable of machine learning and pattern recognition [7][8][9][10][11][12]. Because they apply non-linear statistics to pattern recognition, ANNs are particularly adapted to "chaotic" events like prolonged LHS. Nowadays, the advances generated by ANNs, combined with improvements in computer technology, afford the opportunity to explore new perspectives using ANNs as decision-making diagnostic aids for physicians. Thus, these results can be applied directly to clinical practice because they can be developed as software applications for computers and handheld devices. The 10-item BGA may provide answers that facilitate the clinical decision-making process because this tool provides a risk stratification of patients at risk of non-fatal health outcomes. Such information may be relevant for making the right decision for patients, such as discharge to home or to a medical ward, and for providing the appropriate interventions to the right patients at the right time by the right professionals (i.e., geriatric intervention versus no geriatric intervention).
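The screening and diagnostic criteria discussed above follow directly from a 2x2 confusion matrix. The sketch below shows the standard formulas; the counts are made up for demonstration and are not the study's data.

```python
# Illustrative computation of the performance criteria reported in the text
# (sensitivity, specificity, PPV, NPV, LR+, LR-) from a 2x2 confusion matrix.
# tp/fp/fn/tn counts below are invented for demonstration only.
def screening_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)          # true positive rate
    spec = tn / (tn + fp)          # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    lr_pos = sens / (1 - spec)     # likelihood ratio of a positive test
    lr_neg = (1 - sens) / spec     # likelihood ratio of a negative test
    return {"sensitivity": sens, "specificity": spec,
            "PPV": ppv, "NPV": npv, "LR+": lr_pos, "LR-": lr_neg}

m = screening_metrics(tp=90, fp=4, fn=10, tn=96)
```

With these toy counts, sensitivity is 0.90 and specificity 0.96, which already yields an LR+ of 22.5; this illustrates how the combination of high sensitivity and specificity reported in the study produces the large positive likelihood ratios (LR+ > 22) quoted above.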
Our results also showed that patients with digestive diseases had a shorter LHS compared to patients admitted for other diseases. One explanation could be that admissions for digestive diseases are more often semi-urgent or non-urgent [19]. This lower degree of urgency may explain a shorter LHS. In contrast, "other diseases" as a reason for admission was associated with an increased LHS. This group refers in part to traumatic injuries related to falls. Unlike in young patients, the most common mechanism of traumatic injury in older patients is a fall [20]. Falls have been identified as a major cause of unintentional injury leading to prolonged LHS and death, especially over 80 years of age, which could explain our results [21]. The strengths of this study include the large number of participants, the prospective cohort design, the hard outcome represented by prolonged LHS, and the use of sophisticated new statistical models. However, limitations need to be considered, including the recruitment of participants from a single center and the possibility that important items related to prolonged LHS were omitted. In addition, we included inpatients who died during their hospitalization and those discharged to another hospital. The date of death or of transfer to another hospital was considered the last day of hospitalization. Thus, a bias might exist because of the suspected higher complexity of those patients. Conclusion Age stratification had a minimal effect on the ability of the 10-item BGA to predict prolonged LHS using ANNs. Indeed, ANNs provided homogeneous predictive performance that enables the 10-item BGA to be used as a screening tool and also as a predictive tool to identify older patients at higher risk of prolonged LHS, whatever their age.
An Executable Formal Model of the VHDL in Isabelle/HOL In the hardware design process, hardware components are usually described in a hardware description language. Most hardware description languages, such as Verilog and VHDL, do not have a mathematical foundation and hence are not fit for formal reasoning about the design. To enable formal reasoning in VHDL, one of the most commonly used description languages, we define a formal model of the VHDL language in Isabelle/HOL. Our model targets the functional part of VHDL designs used in industry, specifically the design of the LEON3 processor's integer unit. We cover a wide range of features in the VHDL language that are usually not modelled in the literature and define a novel operational semantics for it. Furthermore, our model can be exported to OCaml code for execution, turning the formal model into a VHDL simulator. We have tested our simulator against simple designs used in the literature, as well as the div32 module in the LEON3 design. The Isabelle/HOL code is publicly available: https://zhehou.github.io/apps/VHDLModel.zip I. INTRODUCTION VHDL is one of the most widely used hardware description languages for hardware specification, verification and documentation. However, VHDL is known to have a partially blurred semantics which is defined in plain English [1], [2]. Formal verification, on the other hand, is usually performed in logic. To close this gap, a formal model for VHDL is needed to verify properties of interest for hardware designs. As a concrete motivation, this research work is a step towards building a verified execution stack ranging from CPU and micro-kernel to libraries and applications. We are interested in verifying correctness and security properties for those components.
Since the complexity of our intended goal is high, we use a multi-layer verification approach: we formalise each layer separately and use a refinement-based approach to show that important properties proved at the top level (applications) are preserved down to the bottom level (CPU). We choose to formalise the XtratuM [3] micro-kernel that runs on top of a multi-core LEON3 processor [4], which is designed in VHDL. A formal model for VHDL is thus vital for the low-level verification in a verified execution stack. We build our formal model in the theorem prover Isabelle/HOL in three layers. In the bottom layer, the syntax of our model is influenced by the model of Umamageswaran et al. [5], except that 1) we focus on a synthesisable subset of VHDL while they model a timed language for simulation; and 2) we model sub-program calls, which are not treated by Umamageswaran et al. We identify key concepts in the VHDL design of LEON3 and give a "core syntax" from which more complicated language constructs can be obtained. This is similar to the dynamic model in [5]; however, they modelled VHDL in denotational semantics, whereas we give a novel operational semantics for VHDL, called the "core semantics", which essentially converts VHDL statements to Isabelle/HOL functions. This idea is similar to the ACL2 model for VHDL [6], but we model many features that are missing in their work, such as sensitivity lists for processes, loops, etc. To support hierarchical designs and compositional verification, the next layer extends the core with the syntax and semantics for components [7]. The top layer further extends the model with the necessary VHDL features used in the LEON3 design and translates the more complex syntax to the core syntax. As everything is modelled in Isabelle/HOL, we do not rely on external tools to perform heavy translation tasks; only a simple, mostly syntactical conversion from VHDL to Isabelle/HOL is required, which is much easier to handle.
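The "statements as functions" idea behind the core semantics can be illustrated with a small sketch. This is our own hedged analogy in Python, not the paper's actual Isabelle/HOL definitions: one way to model VHDL signal assignment is to keep current and next signal values separately, evaluate all assignments against the current values, and commit them at the end of the cycle.

```python
# Hedged sketch (not the paper's Isabelle/HOL code): modelling VHDL
# statements as state-transition functions. Signal assignments read the
# current state and write into a "next" copy, which is committed at the
# end of the cycle, mimicking VHDL's deferred signal-update behaviour.
def make_signal_assign(name, expr):
    """Return a transition function for `name <= expr(state)`."""
    def step(state, nxt):
        nxt[name] = expr(state)  # reads old values, writes the next copy
        return nxt
    return step

def run_cycle(state, stmts):
    nxt = dict(state)
    for stmt in stmts:
        nxt = stmt(state, nxt)
    return nxt  # committed signal values for the next cycle

# toy design: a and b swap each cycle, since both assignments read old values
stmts = [make_signal_assign("a", lambda s: s["b"]),
         make_signal_assign("b", lambda s: s["a"])]
state = {"a": 0, "b": 1}
state2 = run_cycle(state, stmts)
```

The swap works precisely because both assignments read the old state; an in-place update would instead propagate `b` into both signals, which is the classic pitfall that the current/next split avoids.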
This work is a part of a research project called Securify, which aims to verify an execution stack ranging from CPU and micro-kernel to libraries and applications. The project adopts a multi-layer verification approach where we formalise each layer separately and use a refinement-based approach to show that properties proved at the top level are preserved at the lower levels. This work closely connects with the other components of the project, such as the formal modelling and verification of Verilog [8] and of the SPARCv8 instruction set architecture for the LEON3 processor [9], [10], a verification framework for concurrent C-like programs [11], and automated reasoning techniques for separation logic [12]-[14]. For easy integration, these related sub-projects partly determine our software choices, such as Isabelle/HOL, and hardware choices, such as LEON3 and VHDL. The rest of the paper is organized as follows. The core syntax of our language is defined in Section III. The core semantics of the language is defined in Section IV. To simulate designs, a simulation semantics is defined in Section V. Components of the architecture are formalized in Section VI. The more complex syntax of the language is given in Section VII. A detailed experimental analysis is carried out in Section VIII. A literature survey is included in the related work section (Section II), and the paper is concluded in Section IX. II. RELATED WORK There are a number of papers on formalising hardware description languages in theorem provers. Braibant et al. [15] defined a simplified version of the language Bluespec in the theorem prover Coq. Their simplified version of Bluespec, called Fe-Si, is a deeply embedded language. In a recent effort, we defined a domain-specific language dubbed VeriFormal [16]. The language VeriFormal is a formal version of Verilog which is deeply embedded in Isabelle/HOL. It is available with a translator that translates Verilog designs into the syntax of VeriFormal.
As the syntax of this language has been defined in a functional style, an automatically extracted version is executable and hence serves as a simulator with a formal foundation. Similarly, there are formalized versions of VHDL; however, some are less relevant as they focus on timed VHDL models (e.g., [17]) or on the theory behind formal semantics rather than its mechanisation, whereas we are mainly interested in formalising the functional part of the LEON3 design in a theorem prover. We focus on a synthesisable subset of VHDL which does not involve statements such as wait and delayed assignment. Eisenbiegler et al. gave a formal model for a synchronous VHDL subset called ABC-VHDL [18], which divides VHDL statements into three types: A statements, including null, variable assignment, and signal assignment, never reach a wait statement during execution; B statements sometimes reach a wait; while C statements, namely wait statements, always reach a wait. The authors modelled VHDL statements as functions that describe the transition from one clock tick to the next; they also implemented a translation from their model to HOL. Goldschlag surveyed and formalised a few important VHDL concepts, including signal assignments for both timed and untimed models, delta delays, resolution functions, components, and a few extensions [19]. Breuer et al. [20] proposed a refinement calculus for VHDL, effectively reducing the verification of VHDL to a problem in temporal logic. Their model handles signal assignments, wait, null, if, while, and process statements at its core, and they gave a denotational semantics for their language. For some mechanised examples, van Tassel embedded the simulation cycle of a VHDL subset called Femto-VHDL in HOL [21]. The Femto-VHDL subset contains simplified conditional statements and signal assignments (with delay) for the sequential part, and process statements for the concurrent part.
Bawa and Encrenaz gave a VHDL translation to Petri nets [22]. Their model, although it does not support features such as subprogram calls and components, does include most of the features surveyed in related work and has rather strong tool support. Ralf Reetz's deep embedding of VHDL into HOL [2] covers a significant subset of VHDL and includes the elaboration and execution processes. Kloos and Breuer's book gives an excellent review of related work in that era [23]. Two other VHDL models are worth mentioning. Umamageswaran et al.'s book [5] documented their VHDL model in PVS. Their syntax covers a rich subset of VHDL, for which they gave a denotational semantics, as they mainly concern timed models. Their model is divided into two layers: a static model, which covers a complicated syntax; and a dynamic model, which is a much simpler subset. They gave a reduction algebra to convert a static model to a dynamic model. Their model is capable of proving some interesting properties, such as the equivalence of two VHDL designs. Another well-developed work is the ACL2 model for VHDL [6], [7], [24]. The ACL2 model focuses on a synthesisable subset of VHDL, which is very close to our line of work. This model can handle some rather involved examples, such as modules to compute factorial and power. Furthermore, the authors also extended their work to cover components in VHDL. This is an important step, as it enables compositional verification. The above work laid a solid foundation for research in this area. However, it cannot be used directly in projects like a verifiable execution stack, for two major reasons. First, most of the related papers were published in the 1990s and their detailed reports and source code could not be retrieved; some authors confirmed to us that their source code was lost. Second, most of the related work uses a rather "abstract" syntax. Moreover, many models assume that the VHDL code is elaborated.
This is fine when demonstrating a technique, but real industrial designs often contain many features that are not covered by those models, such as assignments with a range specification (rarely supported, except by [24], [25]) or with "others", vector member access, records, types in the std_logic_1164 library, and case statements, among many others. One can argue that these features can be translated to some of the previous formal models, but that would require verifying the translation or the elaboration process, which may not be straightforward. Therefore, while the related work focuses on simplified models and elegant theories, we go in the opposite direction and model VHDL with the complicated features used in industrial designs. III. CORE SYNTAX In this section, we identify a core subset of VHDL as the base of our model. This subset can be extended with many features that are widely used in the LEON3 design. For space reasons, the remainder of this paper only introduces our model at a high level, and not all definitions are expanded and explained. Our core model captures the basic VHDL types (boolean, bit, char, integer, positive, natural, real, time, string, bitstr, boolstr) and operations over these types (logical, relational, shift, and arithmetic operations). Our language also supports other widely used VHDL types such as signed, unsigned, std_logic, std_ulogic, std_logic_vector and std_ulogic_vector. These types are modelled by built-in Isabelle/HOL types, and the operations are modelled as Isabelle/HOL functions. This is similar to the treatment in the ACL2 model [24]. We define expressions in Figure III, where e is a shorthand for expression. In exp_nth, the first expression must be of a vector type and the second expression must have the type natural. In exp_sl, the first expression must be a vector, and the last two must be naturals, respectively indicating the index of the first element of the subvector and the length of the subvector.
We introduce the last two types of expressions because VHDL overloads the vector concatenation operator and the append operator. In Isabelle/HOL, when appending an element of type 'a to a list of type 'a list, we explicitly convert the element to a singleton list and then concatenate it with the other list. The two types of vectors are distinguished: a list is used for big-endian vectors (corresponding to to); a reversely stored list is used for little-endian vectors (corresponding to downto). The last type of expression is for record types of signals, ports and variables. We deal with record types as lists. For example, a signal record instance corresponds to a list of signals or nested signal records as its members. Members of a record can be accessed by their names, which are string identifiers. A similar treatment is implemented for variable records. Record types of signals and ports are inductively defined as follows:

datatype spl =
  spl_s signal
| spl_p port
| spnl "(name × (spl list))"

The above definition of expression does not include functions. Our core model treats functions as a type of statement instead, and we restrict them to the form "variable := function call". The core syntax includes the sequential statements in Figure III:

datatype seq_stmt =
  sst_sa name sp_lhs asmt_rhs                             (* signal assignment *)
| sst_va name v_lhs asmt_rhs                              (* variable assignment *)
| sst_if name condition "seq_stmt list" "seq_stmt list"   (* if statement *)
| sst_l name condition "seq_stmt list"                    (* while loop *)
| sst_fn name v_clhs subprogcall                          (* function call *)
| sst_rt name asmt_rhs                                    (* return statement *)
| sst_pc name subprogcall                                 (* procedure statement *)
| sst_n name name condition                               (* next statement *)
| sst_e name name condition                               (* exit statement *)
| sst_nl                                                  (* null statement *)

Every statement has a name, which is an identifier of type string. The left hand side sp_lhs (resp. v_lhs) of an assignment may be a signal/port (resp. variable), possibly with a discrete_range.
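To make the vector conventions above concrete, here is a minimal illustrative sketch in Python (the model itself is in Isabelle/HOL; the helper names store_to, store_downto, and append_elem are ours, while exp_nth and exp_sl mirror the expression constructors described earlier):

```python
# Illustrative sketch only: the paper's model is in Isabelle/HOL.
# A "to" vector v(0 to n) is stored in source order, while a
# "downto" vector v(n downto 0) is stored reversed, so that list
# position i always holds VHDL element v(i) in both directions.

def store_to(bits):
    """Store a 'to' vector: the leftmost literal bit is v(0)."""
    return list(bits)

def store_downto(bits):
    """Store a 'downto' vector reversed: the leftmost literal bit is v(n)."""
    return list(reversed(bits))

def exp_nth(vec, i):
    """v(i): uniform natural-number indexing for either direction."""
    return vec[i]

def exp_sl(vec, first, length):
    """Subvector given the index of its first element and its length."""
    return vec[first:first + length]

def append_elem(vec, elem):
    """Append one element by wrapping it as a singleton list first."""
    return vec + [elem]
```

For instance, the literal "1011" read as v(3 downto 0) stores as ['1', '1', '0', '1'], so exp_nth(v, 3) recovers the leftmost literal bit '1'; this is why the reversed storage makes indexing and slicing uniform across both directions.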
The right hand side asmt_rhs is either an expression or of the form others => expression. In the if statement, the condition is a boolean expression; the two seq_stmt lists are for the "then" part and the "else" part respectively. The seq_stmt list in the while loop is the loop body. In function calls and procedure calls, subprogcall is defined as subprogcall = "(name × (v_clhs list) × type)", where name is the string identifier of the subprogram, v_clhs list is the list of arguments, which can only be variables in the core model, and type is the return type of function calls (not used in procedure calls). Return statements are only used in functions; they simply return the asmt_rhs part, which is later assigned to the v_clhs part in the function call. In next and exit statements, the first name is the identifier of the statement, and the second name is the identifier of the loop statement the next/exit applies to. As in most previous VHDL models, our core model only considers one type of concurrent statement: the process statement. Other concurrent statements can be translated to this one, as will be shown in Section VII. Process statements are defined as below.

datatype conc_stmt = cst_ps name sensitivity_list "seq_stmt list"

Note that we support a list of signals/ports as the sensitivity_list in the process statement. Models without this feature, e.g., [6], can only activate a process with a single signal/port, which may not be practical. Finally, a VHDL file corresponds to a model of type vhdl_desc, which is a tuple of (environment × res_fn × conc_stmt_list × subprogram list), where environment is a record containing the list of signals/ports, variables, and types; res_fn are the resolution functions; conc_stmt_list is the list of concurrent statements in the code; and subprogram list is the list of subprograms (functions and procedures) in the design. IV.
CORE SEMANTICS After the core syntax of the language is defined, the core semantics is given to interpret programs written in this syntax. A vhdl_state is a record whose fields include, among others, the current, effective, and driving values of signals/ports, the current values of variables, and control flags for loops (we omit the types here). This definition is inspired by the dynamic model in [5]. We distinguish the current value, effective value, and driving value of signals/ports. In the VHDL LRM [1], the effective value is "the value obtainable by evaluating a reference to the signal within an expression". Taking a cue from [5], the effective value of a signal/port is computed using the driving values contributed by every process statement that drives the signal. The driving value of a signal is defined as "the value that the signal provides as a source of other signals" [1]. In [5], the driving value of a signal/port, contributed by every process statement that drives the signal, is computed by passing the initial value of the signal through the list of sequential statements of the process. These will be detailed in the semantics. The operational semantics for if statements is straightforwardly modelled as conditional statements in Isabelle/HOL. While loops, however, require some care to accommodate next and exit in loops. The next_flag and exit_flag in the state are both of type name × bool, where the name component records the identifier of the target loop and the boolean is set to true when a next/exit statement is executed. The execution of a loop, where p is the current process statement, s is the current sequential statement, subps are the subprograms in the VHDL design, and state is the current state, is modelled in Figure IV. The if part means that an exit flag is active for this loop, so we reset the flag to false and change nothing else in the state, i.e., we exit this loop. The first else if indicates that an exit flag is active but not for this loop, that is, we need to exit an outer-level loop. So we exit the current loop by simply returning state.
The second else if means that a next flag is active for the current loop, so we reset the next flag and execute the loop again from the beginning by invoking the function rec_loop. The third else if says that a next flag is active, but it is for an outer-level loop, so we exit the current loop without resetting the next flag. In the else case, we execute the current loop. The function rec_loop first checks the loop condition; if the condition holds, we sequentially execute the statements in s of process p and go back to exec_loop_stmt. Note that a next/exit statement not only sets the flags, but also skips the remaining statements in the loop and calls exec_loop_stmt. As mentioned earlier, in the core model we only support function calls of the form "v := function call". This allows us to model function calls and procedure calls in a similar way. For a function call sst_fn n v spc, where n is the name, v is the variable on the left hand side of the assignment, and spc is the function call, we first match n with the names of the subprograms in vhdl_desc to find the function to be called. Since all variables are globally visible in our model, we can pass arguments and return values via variable assignments. For example, for a function f(x; y), which is called as f(i, j) with arguments i, j, we create the assignments x := i; y := j and execute them before executing the function. We then execute the body of the function and obtain an expression e from the return statement. Lastly, we create an assignment v := e, which is executed after the function execution is finished. Although rarely used in the LEON3 design, it is possible to define recursive functions, in which case the formal parameters (x, y in the above example) would be overwritten in nested function executions in our model.
To solve this, we execute the function body by passing a copy of the current state as a parameter, so that we can retrieve the values of the function's local variables from the original copy of the current state. In the case of procedure calls, where there are no return values, we create a variable assignment for each out-direction parameter and execute these assignments after the procedure execution is finished. For a procedure p(x : in; y : out; z : out) (we omit the types here), which is called as p(i, j, k), we execute the assignment x := i before executing the procedure, and execute the assignments j := y; k := z after executing the procedure. Compared to operational semantics for ordinary programming languages, a major difference in an operational semantics for VHDL is in the signal assignment statement, in which we assign the right hand side to the driving value of the signal. The field state_dr_val in a state has the type sigprt => conc_stmt => val option, where sigprt is either a signal or a port, conc_stmt corresponds to a process, which is the only type of concurrent statement in the core model, and val option is either Some value or None. That is, each driving value of a signal is tied to a process that drives it. Due to the already rather involved syntax, the semantics for signal assignments considers a number of cases.
• Assignments for signals and ports of a record type are translated to assignments for each member of the record.
• If the left hand side is a signal/port sp, which may or may not be a vector, and it does not specify a range, we consider the following cases:
- If the right hand side is an expression e, we first evaluate the expression using a function state_val_exp_t, and then assign the value as the driving value of the signal/port for the current process.
This is realised as follows:

state(|state_dr_val := (state_dr_val state)
        (sp := ((state_dr_val state) sp)
          (p := state_val_exp_t e state))|)

- If the right hand side has the form others => e, then we make a list (using the function mk_list) in which each member is the value of e, and assign this list as the driving value of the left hand side for the current process. In this case, we replace the last line of the above with the following, where vv is the value of e, length vl is the length of the vector on the left hand side, and val_list is simply the constructor for vector values:

(p := Some (val_list (mk_list vv (length vl)))))|)

We compute the effective value of a signal/port based on its driving values. This is implemented in Figure 4, where sp is a signal/port and desc is the VHDL model. If a signal/port has no drivers, then its value is always the initial value, which must be its current value. If it has exactly one driver, then the effective value is the driving value. Otherwise, we resolve the driving values using a resolution function rf. Unresolved signals/ports have the value None. Variable assignments are similar to signal/port assignments, except that variables do not have driving values or effective values; we only record the current value of variables. V. SEMANTICS FOR SIMULATION In a simulation cycle, we execute all active processes in a sequential order. This order should have no effect on the outcome. A process is active if there is a signal/port in its sensitivity list that has changed since the last execution, i.e., its current value differs from its effective value. We then compute new effective values and check for active signals after this round of computation. The function update_sigprt copies the effective values of signals/ports to their current values. After a round of execution, if a process's sensitivity list has an active signal/port, that process is resumed and executed again.
The cycle ends when no process's sensitivity list contains an active signal/port. This is realised below, where sps is a list of active signals/ports.

function resume_processes where
"resume_processes desc sps state = (
   let state1 = exec_proc_all (snd (snd desc)) sps state;
       state2 = comp_eff_val (env_sp (fst desc)) desc state1;
       act_sp1 = active_sigprts desc state2;
       state3 = update_sigprt (env_sp (fst desc)) desc state2
   in if has_active_processes desc act_sp1
      then resume_processes desc act_sp1 state3
      else state3)"

Executing a simulation cycle consists of checking the active signals/ports in each process, and executing a process if it has active signals/ports. This is modelled as follows:

definition exec_sim_cyc where
"exec_sim_cyc desc state ≡
   let act_sp = active_sigprts desc state
   in if has_active_process desc act_sp
      then resume_processes desc act_sp state
      else state"

The semantics for simulation is a straightforward recursive function:

fun simulation where
"simulation 0 desc state = state"
|"simulation n desc state = simulation (n-1) desc (flip_clk (exec_sim_cyc desc state))"

Since most designs use a clock signal to synchronise certain processes, we simulate the flipping of the clock by the function flip_clk. Example: The VHDL code below demonstrates the difference between VHDL semantics and the usual semantics of programming languages. VII. COMPLEX SYNTAX Most formal models for VHDL apply to elaborated code or some simplified syntax, and use external software to convert more complicated syntax into the syntax accepted by the model. However, this route may lead to low confidence in the correctness of the formal method, because the external software may not be formalised and may contain errors. To partially overcome this, we provide a layer that extends our model with more complicated syntax. As this layer is formalised as part of our formal model, we can verify the correctness of the translation in the future.
Having this layer also improves the extensibility of our model: we only show a few treatments here, but more can be added if one wants to adopt our model in other situations. In addition to the core sequential statements, we further support more complicated if statements with optional elsif parts; case statements; and for loops. The new sequential statement type is called seq_stmt_complex. The new language constructs are defined as follows:

|ssc_if name condition "seq_stmt_complex list" "elseif_complex list" "seq_stmt_complex list"
|ssc_case name expression "when_complex list" "seq_stmt_complex list"
|ssc_for name expression discrete_range "seq_stmt_complex list"

where

elseif_complex = ssc_elseif condition "seq_stmt_complex list"
when_complex = ssc_when choices "seq_stmt_complex list"

In the extended if syntax, the first and last seq_stmt_complex list are the "if" part and the "else" part respectively, and the elseif_complex list is the "else if" part. The case statement matches the expression against the choices (which is a list of expressions) in the when_complex list, and executes the corresponding list of sequential statements when a match is found. If no match is found, the "others" part, which is the last part of the syntax, is executed. The for statement executes the list of sequential statements repeatedly while incrementing the expression within the discrete_range. The translation from the above syntax to the core syntax is straightforward and is not discussed here. We add two types of concurrent statements which are widely used in the LEON3 design: concurrent signal assignments and generate statements. They have the following forms respectively, where conc_stmt_complex is the type of the new concurrent statement syntax:

|csc_ca name sp_clhs "casmt_rhs list" asmt_rhs
|csc_gen name gen_type "conc_stmt_complex list"

The left hand side sp_clhs of the assignment can be either a sp_lhs or a spl.
The right hand side asmt_rhs is the same as the right hand side of sequential assignments. In the middle part, each casmt_rhs has the form as_when asmt_rhs condition, which corresponds to "asmt_rhs when condition else" in the VHDL syntax. As in [5], a concurrent signal assignment is translated to a process with signal assignments nested in if statements. Consider the following concurrent signal assignment:

s <= x when i > 0 else y when j = 5 else z

We translate this assignment to a process statement as below, where we put the signals i, j in the sensitivity list of the process:

thisproc: process (i, j)
begin
  if (i > 0) then
    s <= x;
  elsif (j = 5) then
    s <= y;
  else
    s <= z;
  end if;
end process thisproc;

We consider two types (of gen_type) of generate statements, for generate and if generate:

for_gen expression discrete_range
|if_gen expression

Unlike the usual elaboration process, we translate a generate statement to a list of process statements. For an if generate, we evaluate the expression and, if it evaluates to true, create a list of process statements corresponding to the conc_stmt_complex list. The translation of for generates is trickier. For example, if the expression is e and the discrete_range is 1 to 10, then we need to create 10 process statements for each member of the conc_stmt_complex list, and globally replace e with the corresponding iteration number in each process statement. For example, consider the generate statement below, where p1, p2 are two process statements:

thisgen: for i in (0 to 9) generate
begin
  p1;
  p2;
end generate thisgen;

We need to generate 10 process statements based on p1: p1[0/i], · · · , p1[9/i], where [y/x] means that x is globally replaced by y. Similarly, we generate 10 process statements based on p2. We also provide abbreviations for our syntax to ease the translation process. The following is a small portion of the div32 unit in the LEON3/GRLIB source code [26].
divcomb : process (r, rst, divi, addout)
· · ·
begin
· · ·
  case r.state is
    when "000" =>
      v.cnt := "00000";
      if (divi.start = '1') then
        v.x(64) := divi.y(32);
        v.state := "001";
      end if;
    · · ·
end process;

In our Isabelle/HOL model, this piece of code is given in Figure VII. It is easy to observe the resemblance between our model and the actual VHDL code. In the above case, most of the conversion is purely syntactic, except that we use v.x(64 downto 64) to access the 64th element of the vector v.x, as opposed to the original code v.x(64). VIII. EXPERIMENT AND TESTING We use the Isabelle/HOL code export feature to automatically extract executable OCaml code from our model. This enables us to run our model as a VHDL simulator for testing purposes. For small-scale examples, we have tested the VHDL code for the factorial function in [6]. This design consists of two processes: mult models a multiplier, and doit controls the computation. The next tested design is the power function given in [7]. Similar to the factorial design, the power function design has a process for multiplication and a process which models a finite state machine to control the computation using the multiplier process. We have also tested a variant of the power function design in [7] that contains two entities: one computes multiplication; the other uses the multiplication entity as a component and computes the power of its inputs. A larger tested example is the div32 unit in the LEON3/GRLIB source code [26]. This unit implements a SPARC V8-compliant 64-bit by 32-bit division, which leaves no remainder and uses the non-restoring algorithm. The VHDL code features most of the concepts captured in our model, including (operations on) records, concurrent assignments, signal and variable assignments, vectors, arithmetic and logical operations, if and case statements, process and generate statements, etc.
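As a rough illustration of the non-restoring algorithm that the div32 unit implements, the following Python sketch (our own simplification: unbounded integers instead of the unit's registers, unsigned operands only, and assuming the quotient fits in n bits) computes an n-bit quotient and remainder:

```python
def nonrestoring_divide(dividend, divisor, n=32):
    """Unsigned non-restoring division producing an n-bit quotient.

    Each iteration shifts the partial remainder left and then subtracts or
    adds the aligned divisor depending on the remainder's sign; the quotient
    bit is 1 exactly when the new remainder is non-negative. A single final
    correction step repairs a negative remainder (instead of restoring after
    every subtraction, which is what distinguishes this from restoring
    division).
    """
    assert divisor > 0
    r = dividend           # signed partial remainder
    d = divisor << n       # divisor aligned with the high half
    q = 0
    for _ in range(n):
        r = (r << 1) - d if r >= 0 else (r << 1) + d
        q = (q << 1) | (1 if r >= 0 else 0)
    if r < 0:              # final fix-up of a negative remainder
        r += d
    return q, r >> n       # quotient, remainder
```

For example, nonrestoring_divide(7, 3, 2) yields (2, 1). The real div32 unit additionally handles signed operands, overflow detection, and SPARC condition codes, none of which are modelled in this sketch.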
For all the above tested examples, our Isabelle/HOL VHDL model successfully processes the VHDL code and generates executable code in OCaml. We have then performed extensive testing on the generated OCaml program using a large number of input parameters, including corner cases. In all the tested cases, the executable program yields the correct outcomes of the arithmetic functions. IX. CONCLUSION This paper describes a formal model of the VHDL language in Isabelle/HOL. Our model is composed of a core of the most important syntax and semantics, and various extensions of the core model that handle subprogram calls, components, etc., for large and modular designs. The formalisation is coded in Isabelle/HOL, which means that this model can be used to formally prove properties (such as correctness) of VHDL designs. Our model is carefully crafted to support the code export feature of Isabelle/HOL. This leads to an executable OCaml program generated from the formal model. The program can be seen as a VHDL simulator that strictly complies with the syntax and semantics defined in the model. We have tested our model through this program by running design components of the LEON3 processor and checking the results.
Systematic Review of Dementia Support Programs with Multicultural and Multilingual Populations Background: Dementia care programs have become more common due to the growing number of persons living with dementia and the lack of substantial benefit from pharmacologic therapies. Cultural and language differences may present barriers to the access and efficacy of these programs. In this article, we aimed to systematically review the current literature regarding outcomes of dementia care programs that included multicultural and non-English speaking populations. Methods: A systematic review was conducted using four scientific search engines. All studies included in the review are English-language randomized controlled trials evaluating various care coordination models. The initial search strategy, focusing on studies specifically targeting multicultural and non-English speaking populations, resulted in too few articles. We therefore expanded our search to articles that included these populations, even when these populations were not the focus of the study. Results: Seven articles met the inclusion criteria for final review. Measured outcomes included emergency room use, hospitalizations, provider visits, quality of life indicators, depression scores, and caregiver burden. Conclusions: Dementia care programs demonstrate a significant ability to provide support and improve outcomes for those living with dementia and their caregivers. There is limited research in this field and thus opportunity for further study in underserved and safety-net populations, including more high-quality randomized controlled trials with larger sample sizes. Introduction According to the World Alzheimer Report, it is estimated that the global number of persons living with dementia will be 74.7 million by 2030 and will increase to 131.5 million by 2050 [1].
Those living with dementia may suffer devastating outcomes, including inappropriate and potentially harmful medication use, frequent hospitalizations, and aggressive end-of-life care inconsistent with their goals of care [2-6]. Dementia has become a public health issue that affects not only those with dementia but also those who love and care for them. Caregivers experience high levels of stress and burden, which can negatively impact their physical and emotional health [4,5,7,8]. Navigating the healthcare system for persons with dementia is a challenge that usually falls on the caregiver. To mitigate these challenges, dementia care programs and dementia care coordinators are increasingly used to address the interdisciplinary needs of those with dementia and their caregivers [6,9]. Studies show that caregivers who can access support and resources experience benefits, including improved understanding of dementia and care plans, and reduced caregiver depression, fatigue, and feelings of isolation [8,10,11]. Whereas dementia medications have not led to substantial improvements in clinically meaningful outcomes, dementia care programs have shown benefit [12]. However, the structure, components, and efficacy of these programs vary. Furthermore, barriers to access to these programs exist, including language, cultural, and geographic disparities [6-8]. To better understand the components and effectiveness of these types of programs, we conducted a systematic review and evaluation of the current published literature pertaining to dementia care programs that included multicultural and multilingual populations, and their outcomes. Data Sources A systematic review was conducted to investigate outcomes of dementia support programs for persons with dementia and their caregivers. While our initial search was for studies that targeted multicultural or non-English speaking populations, no articles specifically met this criterion.
Therefore, we broadened our search to articles that included multicultural and non-English speaking populations. The literature search was conducted using PubMed, MEDLINE, CINAHL, and PsycInfo. Key terms included dementia and care coordinator. For PubMed, the "similar articles" feature was used to expand the search. Searches were then limited to peer-reviewed journal articles written in English and published after 2005. Inclusion and Exclusion Criteria Articles were included if the article was a randomized controlled trial and investigated an intervention that targeted support for persons with dementia and/or their caregivers. The trial also had to include multicultural or multilingual populations. Observational studies, reviews, editorials, commentaries, and case studies were excluded. Articles published outside the United States were included. Study Selection The primary author reviewed the titles and abstracts of all retrieved articles to assess relevance prior to reviewing the article in full. Articles were included for full review if it was unclear from their title or abstract whether a specific intervention was used in an interventional design. During full review, articles were eliminated if they investigated the same dementia support program; only the article with the highest quality score based on the modified Downs and Black checklist was included. A total of seven articles were included in the final review (Figure 1). Due to the wide range of interventions and outcomes examined, it was not feasible to pool results for a quantitative meta-analysis. Data Abstraction Data abstraction was performed on the seven articles by the primary author and included: population, clinical setting, sample size, intervention and comparison group, measured outcome, and major findings (Table 1). The number of multicultural or non-English speaking participants included in each study was also noted.
Quality Appraisal The seven studies included in this review were systematically appraised using the modified Downs and Black checklist. This tool may be used to evaluate both randomized and non-randomized controlled trials by scoring quality of reporting, external validity, bias, confounding variables, and power, although this review only includes randomized controlled trials [13]. The maximum score for this checklist is 28. Modified Downs and Black score ranges mirrored those reported in previous studies: ≥20 very good; 15-19 good; 11-14 fair; ≤10 poor [14]. Geriatrics 2022, 6, x FOR PEER REVIEW
[Table 1 excerpt: in one included study, the comparison group received no assistance from a care coordinator; in-person, self-reported interviews at baseline, 9 months, and 18 months assessed utilization of acute care/inpatient, outpatient, and home- and community-based services. There were no significant group differences in acute care/inpatient or total outpatient service use, but the intervention group had significantly more outpatient dementia/mental health visits from 9 to 18 months and more home- and community-based support service use from baseline to 18 months; rates of nursing home placement did not differ between groups.]
Article Selection Utilizing the search strategy described above, a total of 1404 articles were initially identified. Articles were excluded if they were published prior to 2005 or not written in English. After applying these filters, 1193 articles remained. After removing articles that were not peer reviewed and not randomized controlled trials, 67 articles remained.
The titles and abstracts of these 67 articles were screened, and 31 articles were retained for full screening. After removing duplicates and reviewing the articles for relevance and inclusion of multicultural or non-English speaking populations, seven articles remained for full review. Study Design The final review included studies published between 2006 and 2019. Study durations ranged from one month [8] to two years [6]. Although all studies were randomized, one study did not provide details surrounding randomization [8]; one study was randomized at the level of the provider [15]; another was randomized at the level of the study site [16]; and all other studies were randomized at the level of the participant [6,17-19]. While most studies were conducted in the US, two were conducted outside of the US, in Australia [17] and Mexico [8]. Setting All interventions occurred at the person's home except in the study conducted by Callahan et al., which included a mixture of home-based support and office visits. Support interventions included care coordination, needs assessments, linkage to resources, education, emotional support, or a combination of these. Additionally, these interventions were provided in varied ways, including use of a culturally sensitive educational website [8], use of a therapist or certified interventionist to teach problem-solving techniques [19], or, most commonly, use of a care manager to provide care coordination with needs assessments, screenings, education, and linkage to resources [6,15-18]. Outcomes Study outcomes fell into two groups: health care utilization and clinical outcomes. Health care utilization was evaluated in four studies, specifically emergency room use, hospitalizations, or provider visits [6,15,16,18]. Clinical outcomes, including quality of life indicators, depression scores, and caregiver burden scores, were evaluated in five studies [6,8,15,17,19].
Care Team Members

Three of the five studies that used a care manager to provide care coordination used licensed clinical persons in the role of the care manager, such as social workers, registered nurses, and nurse practitioners. Only two studies used nonclinical persons as the care manager. In one study, the nonclinical person was supported by an interdisciplinary clinical team consisting of an RN and a geriatric psychiatrist [18]. In the other, Possin et al. utilized an unlicensed care team manager who was provided with 40 h of training and had access to higher-level clinical providers (e.g., RNs, pharmacists, social workers) if needed. While care coordination had mixed results for health care utilization, it demonstrated positive clinical outcomes regardless of whether the care coordinator was a licensed clinical person.

Health Care Utilization

Five studies investigated care recipients' health care utilization, which included number of ED visits and utilization of acute care, inpatient, outpatient, and home- and community-based services. Bass

Clinical Outcomes

Studies evaluating clinical outcomes showed consistently positive results. Two studies resulted in less caregiver depression after participation in a dementia care program [6,8]. Three studies also found a reduction in caregiver burden [6,8,19]. Callahan et al. showed significant improvement in behavioral Neuropsychiatric Inventory (NPI) scores and caregiver stress, and Xiao et al. reported improved quality of life measures. Of note, Czaja et al. found that almost three times as many participants in the intervention group reported significant improvements in positive aspects of caregiving after participating in an at-home, technology-based education platform for dementia care. These studies varied in their interventions from face-to-face visits [17] to virtual methods, including telephone-based [6,17], video-based [19], and web-based [6,8] visits.
Type of Educational Materials

All studies involved providing caregivers with education on how best to care for their family member or loved one living with dementia. However, the type of education differed. Possin et al. educated caregivers about dementia [6], whereas four studies focused on strategies for managing challenging behaviors exhibited by persons with dementia [15,17-19]. Pagan-Ortiz et al. provided both types of education.

Multicultural or Non-English Speaking Participants

Although all studies included populations known to have barriers in accessing care, including racial minorities or non-English speaking participants, only two studies investigated outcomes specific to these populations [8,19].

Quality of Studies

The Downs and Black scores of the seven studies included in this review ranged from 10 to 23, with a median score of 19. Based on this method of appraisal, four studies received a "very good" quality rating [6,15,17,18], two studies were "good" [16,19], none were "fair", and one study was "poor" [8].

Discussion

With the increasing population of older adults living in the U.S. and the corresponding rise in the number of adults living with dementia, there has been a growing interest in and need for care interventions for persons with dementia (PWD) and their caregivers. Collaborative care models and multicomponent interventions have been shown to improve caregiver burden and depression and PWD quality of life, and to decrease resource utilization, such as ED visits, hospitalizations, and nursing home placement. Currently there is little information to determine whether these interventions are equally effective for multicultural populations and for rural or low-resource communities. This review originally intended to review the efficacy of these interventions in multicultural or non-English speaking populations. However, this strategy was too limiting, and we expanded the review to evaluate studies that included these populations.
This systematic review highlights the value of dementia care programs in a variety of domains, ranging from psychosocial and quality of life measures for persons with dementia and caregivers alike [6,8,17,19] to health care utilization [6,16]. A limitation identified in this review is that, although most studies included for review are randomized controlled trials, not all studies were randomized at the level of the participant. It is well established that caring for persons with dementia results in physical and psychological strain on caregivers. Challenges include helping with activities of daily living, managing psychological and behavioral symptoms of those with dementia, and perceived changes in the relationship between caregivers and the person with dementia [9,10,17]. As disease severity progresses over time, caregivers require ongoing assistance to help address challenges regarding education, daily care practices, and other care services, as well as their own emotional and psychological well-being [8,11,17]. These needs may be addressed by dementia care programs. In addition to the stressors of caring for a loved one with dementia, underrepresented multicultural populations and non-English speaking caregivers face added barriers to care. Communication difficulties have been identified as a barrier that keeps non-English speaking caregivers and families from seeking supportive services [14]. Furthermore, many resources for caregivers are designed to target the predominant culture and those who speak English [17]. Mixed-race populations are understudied in trials regarding dementia care programs [15,20]. In conducting this review, it was apparent that there is limited research targeting underserved and safety net populations in this area. Although a number of studies mentioned minority populations, only two specifically targeted minority populations [8,19], further highlighting the need for more research in this area.
In the two studies that included these populations, Pagan-Ortiz et al. utilized a website to provide culturally sensitive dementia education and support for Hispanic families, and Ceja et al. used certified interventionalists to teach problem-solving strategies to Hispanic and African American caregivers. The Pagan-Ortiz et al. study showed no statistically significant outcomes in self-mastery, social support, caregiver burden, or depression. Of note, study participants in Pagan-Ortiz et al. were mostly located in Mexico or Puerto Rico; only five participants were recruited in Massachusetts. Hispanic populations in the United States face different barriers to care than in Mexico or Puerto Rico, where Hispanic culture is predominant. Of the interventions listed in Table 1, use of a culturally sensitive website is the least intensive and requires more initiative on the part of the caregiver to engage with the program. Ceja et al. showed decreased caregiver burden and increased appreciation for the positive aspects of caregiving and satisfaction with social support. This more intensive intervention showed positive outcomes in underrepresented populations and is the only article we found that showed positive outcomes specific to multicultural populations. This highlights a need for further randomized trials in populations that face barriers to accessing care in the U.S. The other studies that included multicultural or non-English speaking populations did not report outcomes specific to these specialized populations, as the sample sizes for these populations were not large enough. Another study, carried out by Chodosh et al., was not included because it was not a randomized controlled trial; it offered support for low-income Hispanic and Black communities in Los Angeles, partnered with the Alzheimer's Association, and conducted either in-person or phone visits for care coordination.
This study showed improved caregiver burden and problem behaviors [21], which is promising, but again highlights the need for further randomized controlled trials of dementia support programs in these populations. Despite the lack of diversity in the trials presented here, several dementia care programs, whether through face-to-face clinical coordinators or by virtual means, have shown substantial benefit in quality of life measures. In one study conducted by Callahan et al., participants received one year of care management by an interdisciplinary team led by an advanced nurse practitioner integrated within the primary care setting. This study demonstrated that a comprehensive care approach resulted in clinically significant improvements in behavioral and psychological symptoms of dementia and a reduction in caregiver stress. Ensuring that caregivers are properly supported has demonstrated positive outcomes for both persons with dementia and their caregivers [6]. Care coordinators may also help reduce the fragmentation of medical care, provide resources, and potentially reduce healthcare costs [18]. In general, those living with dementia have higher rates of ED visits and hospitalizations, which may yield undesired consequences, including delirium, falls, medical complications, functional decline, and nursing home placement [16,22-28]. In a study by Bass et al., a program called Partners in Dementia, a collaboration between Veterans Affairs medical centers and the Alzheimer's Association created to address the needs of persons with dementia and their caregivers, showed a reduction in hospital admissions and ED visits, with corresponding reductions in healthcare costs [6,16]. As the number of individuals with dementia increases and caregivers are recognized as a precious resource, using all available tools to assist caregivers may mitigate the challenges they face [29,30].
Technology provides the opportunity to deliver tailored support and evidence-based interventions to caregivers [29,31]. Online communities are a feasible way for geographically dispersed groups to meet for education, support, and social connection [8]. Possin et al. created a telephone-based collaborative dementia care program called Ecosystem to provide education, support, and care coordination to caregivers and persons with dementia. This study found that dementia care management delivered over the telephone and internet may reduce the growing societal and economic burdens of dementia. In a study by Xiao et al., coaching and support provided to caregivers over the phone improved caregivers' sense of competence in managing dementia and their mental wellbeing. Additionally, programs that utilize technology provide the opportunity to reach individuals in rural areas who may otherwise not have the opportunity to participate in a dementia care program [29]. Technology is more easily accessible for persons with dementia and caregivers who otherwise would be unable to access these resources due to barriers in transportation and proximity to physical resources.

Conclusions

In summary, dementia care programs provide significant benefit to those living with dementia and their caregivers. Dementia care programs, whether through face-to-face coordination or virtual means, show significant promise in improving access to resources and quality of life measures for persons with dementia and caregivers alike. Furthermore, virtual programs may be particularly helpful in underserved or safety net populations, as they may improve access to dementia care programs. As research in this field is limited, more high-quality studies using larger sample sizes are needed. Additionally, there is a particular need for further research into the development and efficacy of dementia care programs in multicultural and multilingual populations.
Using Context-Aware Ubiquitous Learning to Support Students' Understanding of Geometry

Helen Crompton

Abstract: In this study, context-aware ubiquitous learning was used to support 4th grade students as they learn angle concepts. Context-aware ubiquitous learning was provided to students primarily through the use of iPads to access real-world connections and a Dynamic Geometry Environment. Gravemeijer and van Eerde's (2009) design-based research (DBR) methodology was used in this study. As a systematic yet flexible methodology, DBR utilizes an iterative cyclical process of design, implementation, analysis, and revision. Using this particular DBR methodology, a local instruction theory was developed that includes a set of exemplar curriculum activities and design guidelines for the development of context-aware ubiquitous learning activities. Data collection included semi-structured clinical interviews, observations, student artefacts, video recordings, and lesson reflections. This study of technology is grounded in a subject content area (mathematics) so the researchers could clearly state the advantages of using this approach in an educational context. A review of the findings indicates that context-aware ubiquitous learning proved useful in avoiding many common errors and misconceptions that students have in learning these concepts, and students demonstrated growth in their understanding of angle and angle measure beyond what is typically expected. From this study, the researchers present four design guidelines and a full set of context-aware ubiquitous activities.

Keywords: context-aware; ubiquitous; authentic; situated; angle; geometry; mathematics

Introduction

The use of technology is becoming ubiquitous throughout today's society. As philosophies and practice move toward learner-centred pedagogies, technology, in a parallel move, is now able to provide new affordances to the learner, such as mobile learning (m-learning), that can be used to provide learning that is personalized, contextualized, and unrestricted by temporal and spatial constraints (Crompton 2013a). These affordances of m-learning are being explored by researchers and practitioners as a pedagogical approach for teaching and learning difficult concepts. Geometry, the mathematical concept chosen for this study, is a complex subject incorporating many challenging mathematical concepts. Angle concepts are particularly difficult for students to grasp (Battista 2007; Clements 2004). Empirical evidence has led scholars to suggest that real-world connections can provide a way to make abstract mathematical concepts comprehensible to students by contextualizing typically decontextualized learning. Recent technological advancements have led to context-aware ubiquitous learning (context-aware u-learning; Hwang, Wu, & Chen 2007; Yang 2006), a form of mobile learning that provides a means by which users of mobile devices can study real-world phenomena, while using the mobile devices to provide timely and effective computer support (Lonsdale, Baber, Sharples, & Arvanitis 2004). There is a paucity of research exploring how mobile devices can be used in this way to support students' understanding of angle. The purpose of this study is to ameliorate this gap in scholarly understanding, to develop an empirically based instruction theory of how context-aware u-learning can be used to support the teaching and learning of angle, and to develop design guidelines for context-aware u-learning activities.

Mobile learning extending traditional pedagogies

Mobile Learning (m-learning) offers many new opportunities in the evolution of technology-enhanced learning (Looi et al. 2010). The mobile market continues to provide a torrent of new or revised devices and applications. These technologies are seeping into educational settings as their affordances become recognized for the way in which they extend pedagogical boundaries. From a review of the research surrounding m-learning pedagogies, Traxler (2011) found five distinct trends in how mobile devices can be used to offer learning that provides unique affordances to the learner. He found that it could offer: 1) contingent learning, allowing learners to respond and react to the environment and changing experiences; 2) situated learning, in which learning takes place in the surroundings applicable to the learning; 3) authentic learning, with the tasks directly related to the immediate learning goals; 4) context-aware learning, in which learning is informed by history and the environment; and 5) personalized learning, customized for each unique learner in terms of abilities, interests, and preferences. From these five categories, a clear trend towards real-world connections is evident. M-learning can provide a shift from the abstract to the contextualized. In other words, difficult subjects can be made more understandable to students by connecting these concepts to the world in which the students live, rather than the traditional textbook examples often used to teach students.

Context-Aware Ubiquitous Learning

Context-aware u-learning is an emerging subcategory of mobile learning. Hwang et al. (2008) described context-aware u-learning as: "The learner's situation or the situation of the real-world environment in which the learner is located can be sensed, implying that the system is able to conduct the learning activities in the real world . . . context-aware u-learning can actively provide supports and hints to the learners in the right way, in the right place, and at the right time, based on the environmental contexts in the real world" (p. 84). This is the way context-aware u-learning is being identified in this study.
To further explicate context-aware u-learning, Hwang et al. provided a table of context-aware u-learning example activities, included below as Table 1.

Table 1: Context-Aware Ubiquitous Learning Examples

• Learning in the real world with online guidance: The students are learning in the real world and are guided by the system, based on the real-world data collected by the sensors. For example, for the students who take a chemistry course, hints are provided automatically based on his or her real-world actions during the chemistry procedures.
• Learning in the real world with online support: The students learn in the real world, and support is automatically provided by the system based on the real-world data collected by the sensors. For example, for the student who is learning to identify the types of plants on campus, relevant information concerning the features of each type of plant is provided automatically based on his or her location and the plants around him or her.
• Collect data in the real world via observations: The students are asked to collect data by observing objects in the real world and to transfer the data to the server via wireless communications. For example, observe the plants in this area and transfer the data (including the photos you take and your own descriptions of the features of each plant) to the server.
• Identification of a real-world object: Students are asked to answer the questions concerning the identification of the real-world objects. For example, what is the name of the insects shown by the teacher?
• Observations of the learning environment: Students are asked to answer the questions concerning the observation of the learning environment around them. For example, observe the school garden, and upload the names of all the insects you find.
• Co-operative data collecting: A group of students are asked to co-operatively collect data in the real world and discuss their findings with others via mobile devices. For example, co-operatively draw a map of the school by measuring each area and integrate the collected data.
• Co-operative problem solving: The students are asked to co-operatively solve problems in the real world by discussion using mobile devices. For example, search each corner of the school and find the evidence that can be used to determine the degree of air pollution.

In the examples provided, the students are interacting with the device and the environment to learn particular concepts. The environments described in these examples are atypical classroom environments, although they could also take place in the classroom. The premise of context-aware u-learning is that students use portable devices to learn by physically exploring and interacting with the real world (Colella 2000; Squire & Klopfer 2007).

Technologies to support the teaching and learning of angle

Geometry forms the foundations of learning in mathematics and other academic subjects (Clements 2004). However, school geometry is a complicated network of concepts, ways of thinking, and axiomatic representational systems that young students can find difficult to grasp. Angle and angle measurement in particular have many unique challenges. Prototype diagrams can lead students to consider non-relevant angle attributes (Yerushalmy & Chazan 1993), such as the length of the rays (the lines that make up the angle) and the orientation of the angle (Battista 2009). For example, textbook right angles typically are shown facing one way. If students come across right angles in different orientations, they do not recognize them as right angles. As students move on to angle measurement, many students believe that the size of the angle is determined by measuring the length of the line segments that are the rays of the angle (Clements 2004; Berthelot & Salin 1998).
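The misconception described above is easy to test numerically: the measure of an angle depends only on the directions of its rays, not on their lengths. A small illustrative sketch (our own example, not part of the study's materials):

```python
import math

def angle_deg(vertex, p1, p2):
    """Angle at `vertex` between the rays toward p1 and p2, in degrees."""
    ax, ay = p1[0] - vertex[0], p1[1] - vertex[1]
    bx, by = p2[0] - vertex[0], p2[1] - vertex[1]
    dot = ax * bx + ay * by          # measures alignment of the rays
    cross = ax * by - ay * bx        # measures their perpendicular component
    return math.degrees(math.atan2(abs(cross), dot))

v = (0.0, 0.0)
short = angle_deg(v, (1.0, 0.0), (1.0, 1.0))     # short rays
long_ = angle_deg(v, (10.0, 0.0), (10.0, 10.0))  # same rays, drawn 10x longer
print(round(short, 1), round(long_, 1))  # 45.0 45.0
```

Scaling both rays by any factor leaves the computed angle unchanged, which is precisely the property that students who judge angles by ray length fail to internalize.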
In a review of the literature, Crompton (2013b) found five problems as students studied angle: (a) understanding that angles have an abstract nature, (b) understanding the angle as a turn, (c) understanding what the angle is measuring, (d) struggling to see the different angles in different contexts, and (e) determining salient criteria for judging angles. For centuries, scholars have advocated the importance of connecting mathematics to the real world (e.g., Clairaut 1741/2006). Using real-world connections in mathematics has many recorded benefits, such as enhancing students' understanding of mathematical concepts (De Lange 1996), amplifying students' ability to think mathematically outside the classroom (National Research Council 1998), and motivating students to learn about mathematics (National Academy of Sciences 2003). Technology has also been used to support students' understanding of concepts. Dynamic Geometry Environments provide students with figures and basic tools to create composite figures. A review of the literature revealed that real-world contexts and Dynamic Geometry Environments are two pedagogical approaches to supporting students' learning of geometry. Some researchers have used context-aware u-learning to make the real-world connection to mathematics. For example, Eliasson and Ramberg (2012) used DBR to conduct a study where students were asked to relocate imaginary species from the local zoo to a field close to the school. Students had to use a mobile software application that measures the distance between two mobile devices via the Global Positioning System. Students measured and placed cones to demarcate where certain species would live in the field, based on the size of habitat required. Bray and Tangney (2014) have also used technology to transform mathematics by creating contextualized activities.
In this particular DBR study, they had year 10 students (age 15/16) complete activities such as the Human Catapult activity, which involved students using foam balls, cameras, and GeoGebra to investigate concepts such as rates of change and velocity. Spikol and Eliasson (2010) also used a DBR approach to work with middle school students to explore geometry both inside and outside. The students used mobile devices with DGE and AR visualizations to explore and understand geometrical concepts.

Purpose of this study

The purpose of this study is to use a context-aware u-learning approach to support students as they learn about angle and angle measure. The research questions guiding this research are:

1. How can context-aware u-learning be used to extend and enhance students' understanding of angle?
2. What design guidelines will inform the development of context-aware u-learning activities?

To this end, the researchers employed Gravemeijer and van Eerde's (2009) design-based research (DBR) methodology. DBR is a systematic yet flexible methodology utilizing an iterative cyclical process of design, implementation, analysis, and revision.

Method

Participants

Two fourth grade teachers chose to participate in the study. This determined the classes from which students participated. There were 30 fourth grade (9-10 years of age) students in each class, for a total of 60 student participants in the study. The study took place in the south-eastern United States. Following Gravemeijer and van Eerde's (2009) DBR approach, two teaching experiments were carried out, one with each class of fourth grade students. Eight of the 60 students completed the pre- and post-instruction clinical interviews; these eight were made up of four randomly selected students from each class. As each interview lasted approximately one hour and took multiple hours to analyse qualitatively, eight students were deemed a sufficient number by an external research review team.
Research team

The researcher acted as the teacher in both of the teaching experiments. In the DBR process it is not uncommon for one researcher to serve as the teacher implementing the instructional intervention (e.g., Markworth 2010). For both teaching experiments, the class teacher and mathematics and technology specialists served as witnesses to the teaching episodes, and a technology and mathematics educator acted as co-researcher.

Design-based research protocol for this study

There are various types of DBR, including those developed by Bannan-Ritland (2003), Cobb et al. (2003), and McKenney and Reeves (2012). Gravemeijer and van Eerde's (2009) DBR was selected as it employs methods that enable the research team to develop a local instruction theory and instructional materials to be used to explore the process by which students learn a particular concept in mathematics. The study involved two macro cycles, with one teaching experiment occurring in each macro cycle. The teaching experiments consisted of seven days of mini cycles of thought and instructional experiments to serve the development of the local instruction theory. The local instruction theory in this study involved two components: design guidelines for informing the development of context-aware u-learning activities, and a set of exemplary context-aware u-learning activities for extending and enhancing students' understanding of angle concepts. These activities are an embodiment of the design guidelines. For the context-aware u-learning components of this study, each student was given an iPad 2 with Sketchpad Explorer (a Dynamic Geometry Environment) loaded onto the device, with the add-on sketch titled Measure a Picture (Steketee & Crompton 2012). Using the iPads and the Measure a Picture add-on, students interacted with the real world as they found angles in the environment outside their school grounds.
Measure a Picture enabled the students to take photographs of the angles while in the program and to use the dynamic protractor and other dynamic tools to measure the angles in the pictures. See Figure 1 for a screenshot of the program. In addition, students were asked to work with Quick Response (QR) codes for other activities within the instructional sequence. The two macro cycles for this study are illustrated in Figure 2. Note in the figure the occurrence of the three phases within each macro cycle: (a) the design of instructional materials, (b) classroom-based teaching experiments and mini cycle analysis, and (c) the retrospective analysis of the teaching experiments, which informed the next macro cycle.

Cycle One

Design of Instructional Materials. From a thorough review of the literature, a set of instructional materials was designed. The day before the commencement of the teaching experiment, a clinical interview on angle concepts was administered to the four students from the first class.

Classroom Teaching Experiments and Mini Cycle Analysis. Next, using the instructional materials, the first teaching experiment was conducted for seven consecutive school days with the entire class of 30 students. During the teaching experiments, the co-researcher and witness observed and took notes on the classroom instruction, and the instruction was videotaped. Students' work, such as screenshots and worksheets, was collected at the end of each day. Also, at the end of the day's instruction, the researcher, co-researcher, and witness met to discuss the lesson. The conversations were audio recorded. Following this meeting, the researcher completed a daily reflection journal. During each daily mini cycle of the teaching experiment, the researcher utilized the collected data to modify the next day's instruction when necessary.

The Retrospective Analysis.
At the end of the teaching experiments, the entire data set (video, observation notes, interview responses and scores, artefact collection, and reflection meeting audio recordings) was analysed collectively. Detailed notes were made of the design implications, and the initial instructional materials were revised based on the findings of the retrospective analysis.

Cycle Two

This second cycle was a repeat of the first with a new set of students. The second teaching experiment took place two weeks after the conclusion of the first teaching experiment. There were two retrospective analyses conducted, one at the conclusion of each macro cycle. The local instruction theory came from the final retrospective analysis. At the bottom of Figure 2 is a list of when each of these data sources was collected in conjunction with each part of the macro cycle.

Data sources

One of the distinct characteristics of DBR methodology is that the researchers develop a deeper understanding of the phenomenon while the research is in progress. Therefore, it is crucial that the research team generated a comprehensive record of the entire process (Cobb et al. 2003). The sources of data used in this DBR process were:

• pre- and post-instruction clinical interviews
• co-researcher and witness classroom observations
• whole class video recording
• daily mini cycle reflection audio-recording with the research team
• artefact collection of student classwork
• researcher's daily reflection journal
• retrospective analysis at the end of a macro cycle

Clinical Interview

Scally's (1990) clinical interviews were used in this study. The interview included a set of six angle activities with a script and scoring guide to determine students' understanding of angle concepts in relation to the van Hiele levels of geometrical thinking.
Note that this instrument does not measure knowledge of geometry facts (memorization) but the students' actual understanding of these concepts. Scally's (1990) clinical interview allowed the investigator to react responsively to data, asking new questions in order to clarify and extend student thinking. The interview design enabled the researcher to gain insight into the depth of student understanding with a collection of both oral and graphical explanations from the students. The credibility of Scally's clinical interview has been determined with 83% reliability, and the content validity of the instrument is established. Furthermore, Scally's (1990) study provided evidence for her claim that the instruments and scoring procedures could be used effectively by other researchers and in other settings.
Classroom Observations and Whole Class Video Recording
During the teaching experiment, observation notes were collected from the research team, which included the classroom teachers, mathematics and technology specialists, and one other researcher. The video recordings were also transcribed, and additional observation notes were developed from the recordings.
Daily Mini Cycle Reflection
Immediately after each instructional episode, the research team met to discuss their observations of the lesson and changes that needed to be made to the instruction for the following day. These meetings were audio recorded and used in the retrospective analysis.
Crompton: Using Context-Aware Ubiquitous Learning to Support Students' Understanding of Geometry. Art. 13, page 6 of 11.
Artefact Collection
Student work artefacts from the teaching experiment were collected for analysis. This included screenshots of the students' angle findings and measurements as well as worksheets and any rough notes or jottings the students created.
Researcher's Daily Reflection Journal
The primary researcher completed a personal reflection journal for each of the teaching episodes during each mini cycle. The journal was an instrument that allowed the researcher to step back from the action to record impressions, feelings, and thoughts (Holly 2002); within the context of DBR, future plans were also recorded. This form of data collection provided a medium for thinking aloud and was a reflective tool for "trying out ideas for action and assessing their implication, and evaluating the effectiveness of attempts to introduce changes" (Holly 2002, p. v). The researcher reflection journal completed during each mini cycle was a catalyst for change during the teaching experiment and the retrospective analysis. All of these data sources were used during both the daily mini cycle analysis and the retrospective analysis phases at the end of each macro cycle. Data gathered from the final retrospective analysis were used to create a more robust local instruction theory.
Coding the Data for Design Guidelines
To develop a set of design guidelines, data from all of the sources, other than the clinical interviews, were coded. The interviews were not included as they underwent a separate analysis following Scally's (1990) protocol described earlier and were primarily used to provide an empirical understanding of students' pre- and post-instruction angle understandings. The rest of the data (video, audio, and text) were entered into NVivo 10 and coded using a grounded theory design with a constant comparative method (Strauss & Corbin 1998). The data were open coded to identify and label important themes regarding the design of activities. The study of these data was an iterative and inductive process. The initial codes led to intermediate coding and the constant comparison of information to information, information to codes, and codes to codes.
Results and Discussion
Using DBR, the researchers developed a local instruction theory involving two components: design guidelines for informing the development of context-aware u-learning activities, and a set of exemplary context-aware u-learning activities for extending and enhancing students' understanding of angle concepts.
Extending and enhancing students' understanding of angle - Interview Data
Using Scally's (1990) clinical interview, students were required to demonstrate understanding of angle concepts, specifically apperception of the physical attributes of angle; this included the static (configurational) and dynamic (moving) aspects (Kieran 1986). Students were asked to provide both oral and graphical explanations to show understanding that angles can be represented in multiple contexts, embody generalizable attributes, and demonstrate correct procedures for measuring angle. Scally's interview methodology used the van Hiele levels of geometric thinking (1957/1984) to determine how well context-aware u-learning supported students' growth in understanding angle and angle measures. Table 2 and Table 3 show the pre- and post-instruction angle understanding scores for macro cycle one and macro cycle two. The students in macro cycle one began working between the visual and the analysis level for drawing, identifying, and sorting angles. For angle measure and relations the students were working within the visual level. In the post-instruction interviews, the four students improved and moved from the visual to the analysis level. The majority of the students were working fully within the analysis level (level two) at the end of the macro cycle. Students in macro cycle two predominantly scored within the visual level in the pre-instruction interview, with some students working partially between the visual and analysis level.
In the post-instruction interview, the majority of the students moved into the analysis level of geometric thinking; however, for drawing angles and angle relations three of the four students were working between the analysis level of thinking and the informal deduction level.
Note. V indicates that those students are working at the visual level, A indicates the analysis level, and I indicates the informal deduction level. Two letters indicate that those students are working between two levels. Dominance in one level is not denoted on this table. The numbers represent the students working at that level. Table adapted from "The impact of experience in a Logo learning environment on adolescents' understanding of angle: a van Hiele-based clinical assessment," by S. P. Scally, 1990, unpublished doctoral dissertation, Emory University, Atlanta, Georgia.
Following the teaching experiment, the students from both macro cycles showed improvement. However, students in macro cycle two demonstrated the greatest increase from pre- to post-interview scores. Arguably, this improvement is due to the revision of the activities following macro cycle one.
Extending and enhancing students' understanding of angle - Data from the Teaching Experiment and Mini Cycle Analysis
In the review of the literature, a number of problem areas were described concerning how students can develop misconceptions and errors as they come to understand angle concepts. Context-aware u-learning was proposed as a pedagogical approach that may ameliorate those problems. As the data were analysed during the mini cycle analysis and retrospective analysis, it appeared that context-aware u-learning did support the students in these ways (see Table 4).
Set of activities
The results of this study provide a set of activities involving context-aware u-learning.
Due to space constraints, the full set of activities developed from this study cannot be provided within this paper, but they are included as Appendix A and also within this Dropbox file: https://www.dropbox.com/s/n9xyeflfpuy4jl3/DBR%20Lessons.pdf?dl=0
Design Guidelines
Data collected from this study provided a vast amount of information. These data were coded and four design guidelines emerged. The problem areas identified in the literature, and the ways context-aware u-learning addressed them (Table 4), were:
Recognizing angles in different contexts. Students lack this ability, as indicated by Crompton (2013b). By using the mobile devices to take photographs of the angles, the students were able to first see the 3D angles, which helped them connect with the real-world angles. In addition, the camera view reduced the amount of external information the student was receiving, making the angles easier to find.
Determining plausible answers. The students could look back from the device to see the physical angles, which helped them determine if the final measurement was plausible.
Angles are based on a dynamic rotation. Students lack an understanding of this concept, as indicated by Crompton (2013b). Students were able to understand that an angle is the rotation from a point, as the dynamic protractor demonstrated this movement.
The length of the angle rays (lines) is an irrelevant attribute of angles. A misconception indicated by Clements (2004), Berthelot and Salin (1998), and Yerushalmy and Chazan (1993). Students were supported in understanding that the length of the rays does not change the size of the angle, as the rays in the app were changeable in length. In Figure 3, the student demonstrates this understanding by fitting the length of the dynamic rays in the app to a coat pattern.
Orientation is an irrelevant angle attribute. A misconception indicated by Battista (2009). As students became familiar with looking for angles in the real world, they realized that angle orientation did not matter. For example, the typical textbook right angle always faced one way; in the real world, as the students found right angles and measured them using the dynamic protractor, they realized that orientation did not matter. Using the app (Measure a Picture), the student in Figure 3 demonstrated that he/she no longer considered orientation a salient angle attribute and that the length of the angle rays did not constitute the measure of the angles.
Design guideline 1. Ensure students do not rely on the technology to do the talking
Discussion is an effective way of promoting learning. "Reflective thought and, hence, learning is enhanced when the learner is engaged with others working on the same ideas" (Van de Walle & Lovin 2006, p. 4). Computers can be used to foster mathematical discourse, augmenting communication from teacher-to-student, or computer-to-student, to richer student-to-student communication. During the instructional experiment it was found that students engaged their partners in very little discussion when they were asked to share the angles they had identified. Instead, the students used the features of the iPad to share the angles and provided very little verbal explanation. For example, one student was asked by their partner what angles he/she had found, and the student responded by pointing to the iPad screen and using the pinch feature to zoom in and out of the image, again pointing each time they did this. The student did not make any verbal connection to the other student during this time. During the design of these activities it is important to include a specific requirement that the students verbally interact as well as use the features of the technology to get the information across to another student or educator.
Design guideline 2.
Reduce cognitive load by not introducing the educational concept and the new technology at the same time
Cognitive load is a detailed field of study that is too great to review or analyse in depth in this paper. However, data from this research show that students struggled to learn two new independent concepts at the same time. At the beginning of the teaching experiment, students were first coming to explore the meaning of the term angle, and having them learn the use of a new technological device and program (Measure a Picture) at the same time was too much information for the students to process. This was changed so that students first focused on the educational concept of study; then, on the second day, the students were introduced to the mobile devices and the program.
Design guideline 3. Reduce the amount of real-world information that the student is processing
Context-aware u-learning activities will typically have the students interacting with the real-world environment. Although the students may easily connect with a familiar environment, e.g. school grounds, there is a lot of visual information connected with that place when students are asked to explore it for a particular concept. For example, in this study, students were required to find angles in a real-world context. In a 360-degree view of the environment next to a school building there is a large amount of information to review to identify angles. In addition, the students are new to understanding what an angle is. This information should be reduced, and a photo viewer is a good way of reducing the information the student is receiving. This should be included in a context-aware u-learning program to allow students to interact with the real world in a manageable way.
As the students are preparing to use the mobile technology, to reduce the load of information students can first be required to use a non-digital technology, such as a cardboard tube to look through. The students can then move from the cardboard tube to the photo viewer. Figure 4 shows students preparing to use the tubes for viewing angles.
Design guideline 4. Mix context-aware u-learning activities with decontextualized learning
It is important to have students working with context-aware u-learning activities to gain an in-depth understanding of concepts with connections to the real world. However, the context-aware u-learning activities must also be mixed with decontextualized learning to ensure the students can transfer that information. In other words, the students may recognize angles on a building in the real world, but they should also be able to recognize an angle drawn on a piece of paper and make the connection that they are both angles.
Conclusion
This study resulted in an empirically based instruction theory of how context-aware u-learning can be used to support students' understanding of angle and angle measurement, and a set of design principles for developing context-aware u-learning activities. Using a cyclical iterative process of anticipation, enactment, evaluation, and revision (Gravemeijer & van Eerde 2009), the final set of activities was developed; these activities are an embodiment of the design principles.
Context-Aware U-learning Activities that Extend and Enhance Students' Understanding of Angle
Using Scally's interview, which matched students' angle understanding to the van Hiele levels of geometric thinking (1957/1984), and using the other data from the mini cycle and retrospective analysis, the findings indicate that the context-aware u-learning activities did extend and enhance students' understanding of angle concepts. In addition, changes made to the instructional activities improved students' understanding in macro cycle two to a higher level than in macro cycle one.
Furthermore, evidence from the multiple data sources was triangulated, and it would appear that context-aware u-learning supported learning about angle concepts in these ways: (a) by using the mobile devices to take photographs of the angles, the students were able to first see the 3D angles, which helped them connect with the real-world angles; (b) as students became familiar with looking for angles in the real world, they realized that angle orientation did not matter (for example, the typical textbook right angle always faced one way, but in the real world, as the students found right angles and measured them using the dynamic protractor, they realized that orientation did not matter); (c) the students could look back from the device to see the physical angles, which helped them determine if the final measure was plausible; (d) students were able to understand that an angle is the rotation from a point, as the dynamic protractor demonstrated this movement; and (e) students were supported in understanding that the length of the rays does not change the size of the angle, as the rays in the app were changeable in length. These points connected with the misconceptions and errors that students have with angle concepts that were initially identified in the literature review. The final set of context-aware u-learning activities can be found in full in Appendix A.
Figure 4: Cardboard viewing tubes to reduce the amount of real-world information being reviewed.
Intermetallics: past, present and future
Intermetallics have seen extensive world-wide attention over the past decades. For the most part these studies have examined multi-phase aluminide-based alloys, because of their high stiffness combined with reasonable strength and ductility, good structural stability and oxidation resistance, and have attempted to improve on current Ni-base superalloys, Ti-base alloys, or Fe-base stainless steels for structural aerospace applications. The current status of development and application of such materials is briefly reviewed. Future developments are taking intermetallics from the realm of "improved high-temperature but low-ductility metallic alloys" into the realm of "improved aggressive-environment, high-toughness ceramic-like alloys". Such evolution will be outlined.
INTRODUCTION
Intermetallics are materials with an ordered arrangement of mixed atom species of metal-metal or metal-semimetal types, generally in a near-stoichiometric composition, for example Ni3Al, FeAl, TiAl, MoSi2, etc. Here nickel (Ni), iron (Fe), titanium (Ti) and molybdenum (Mo) play the role of metal and aluminium (Al) or silicon (Si) the role of metal/semimetal. In such cases the metal-metal or metal-semimetal bonding takes on a partially metallic and partially covalent (or ionic) nature. There are also important intermetallic compounds of metal-metal combinations where atomic size differences are responsible for the ordered arrangement, notably the Laves phases. The existence of strong interatomic bonding leads to higher elastic moduli - stiffer materials. The presence of a reactive species, aluminium or silicon here, leads to the formation of a protective surface layer which endows good oxidation and corrosion resistance. The ordered superlattice structure means that larger shear displacements are required to plastically deform the lattices, and these may not be found in atomically-smooth slip planes, leading generally to stronger, less ductile materials. These
essential characteristics have led to the interest in intermetallic compounds as stiff and strong, oxidation-resisting materials for aerospace structural applications.
Based on such promise, considerable effort has been devoted to the development of intermetallics as aerospace materials. While showing good strength and environmental stability, other aspects such as low ductility and toughness and mediocre creep strength, as well as fabrication difficulties, have greatly hindered the introduction of intermetallics as industrial structural materials. The present document gives a very short account of the major research and development activities that have taken place over the past few decades, to the present time, and also gives some indications of where future activities may lead. The document considers in turn the evolution and activities corresponding to each of the most important families of intermetallic compounds.
Ni3Al ALLOYS
Basic studies over the 1980-1990 decade offered the hope that the brittle stoichiometric intermetallic could be transformed into a useful, ductile material by micro-alloying (additions of B to enhance grain boundary strength) or by macro-alloying (excess Ni to avoid the stoichiometric composition; Cr addition to avoid intermediate temperature embrittlement). These materials, with a higher Al content than conventional superalloys, would have lower density and higher melting point, and would lead to improved versions of such superalloys. This research came to an abrupt halt when it was realised that the creep strength was poor, and possibilities of use as an aerospace material were limited.
Despite this "loss of limelight", Ni3Al-base alloys have been developed into commercial materials of some considerable value - notably as new high temperature alloys where shorter-term strength, rather than creep strength, is the important property (e.g. forging or extrusion dies; superalloy-competing materials for aerospace components), and also as materials resisting aggressive chemical and mechanical environments (material-processing rolls; sulphur-rich oil well piping).
Fe3Al ALLOYS
Basic alloy studies over the 1980-1990 decade showed that neither Fe3Al-based alloys nor FeAl-based alloys possessed good high temperature strength, and aerospace interest quickly ceased. Work continued, especially at Oak Ridge National Laboratory, on the development of more ductile and creep-resisting Fe3Al-base alloys as substitute alloys for conventional high-temperature and stainless steels - by micro-alloying additions (such as B, Cr, Nb, Mo, etc.) and by microstructure control (grain size and aspect ratio...). Difficulties of processing these Fe3Al-based materials, and their minimal property advantages over already existing steels, led to a halt to this research, and a subsequent attention to FeAl-base alloys of higher oxidation-resisting capacity.
While most development work has now stopped, one US company has recently announced the commercialisation of FeAl strip, prepared by powder metallurgy methods, and used for high-temperature heating elements. Additionally, a current EU project is evaluating the industrial-scale processing of high strength, ductile FeAl, also using powder metallurgical methods.
TiAl ALLOYS
This is the intermetallic family that has received the most attention, continuing to the present. Some of the major advances are presented in volumes such as those of the references. A very brief outline of developments over the past decades is given below:
• 1980-1990: Identification of TiAl as the most important potential aerospace intermetallic. Realization that a two-phase mixture composed of α2 and γ phases (Ti3Al and TiAl phases) is the best structural material. Examination of the role of chemical composition modifications (amount of Al; additions of Nb, Cr, Mn, V, etc.) evaluated to balance mechanical and chemical (oxidation) behaviour.
• 1990-1995: Development of "first-generation" alloys based on 48-2-2 (48 % Al with 2 % Nb and 2 % Cr or Mn, etc.) with a good balance of mechanical properties and oxidation resistance. Identification of turbocharger rotors and valves in automobile engines, as well as aero- and land-based-turbine blades and valves, as important components where these materials should be used. Cast structures being coarse, the examination of B additions to the melt to obtain refined as-solidified structures - the development of the German TAB alloys. Development of casting techniques for producing aerospace and land-based (power-generating) turbine blades and vanes. Development of techniques for rolling thin sheets of TiAl (based on "pack-rolling" methods), superplastic forming techniques, and joining processes.
• 1995-2000: Development of casting technologies for clean and contaminant-free ingot production - no easy task for such high melting point and highly reactive alloys.
Development of more complex alloys, for example Ti-Al-M1-M2-SM, where the Al content may be reduced slightly towards 44 % to allow more of the strong α2 phase to be retained, M1 is a transition metal such as Cr, Mn or V introduced to control ductility, M2 is an early bcc metal such as Nb, Ta, Mo, W, etc., used to improve creep and oxidation characteristics, and SM is a semi-metal such as Si, B, C, etc., used to improve creep behaviour. Examination of segregation problems during casting - which can be very severe, with as much as ± 2-3 % Al variation as macrosegregation in a Ti-48 % Al alloy base. Development of processing procedures for the production of automobile valves - based on clean ingot casting, bar extrusion, and heavy working to the valve head form. A comparative evaluation of powder metallurgy techniques for the same components. Development of cast-HIP-extrusion-forging processing for the preparation of turbine blades. Comparative development of powder processing, either directly to HIP-forged blades, or as precursors to sheet rolling and then diffusion bonding to hollow blades. Upscaling of casting techniques to pilot-scale production quantities. Development of friction welding/laser welding techniques and the introduction of cast, bonded-shaft turbocharger rotors to industrial production. Pilot plant-scale casting of TiAl for automobile applications. Test introduction of cast-forged TNB blades/vanes for high-pressure compressors in aeroturbines (Germany-UK). Low temperature turbine blades by powder metallurgy-HIP processing (France).
Over the past decades there has been a clear evolution from basic research into structure, microstructure and property relationships, combined with suggestions for "first-generation" alloy compositions, towards a balance of process examination, development and scale-up while more sophisticated alloy compositions and structures are produced. As the first, relatively small-scale applications begin, it should be remembered that these gamma-aluminide alloys are still relative "youngsters" in their materials field, having been around for only 15 years or so, while their serious competitor, the Ni-base superalloy, is well installed and has been optimised by some 60 years of research, development and production.
FUTURE DEVELOPMENTS
As outlined above, extensive development of Ni3Al, FeAl and, especially, of TiAl is almost completed, and these materials are now slowly gaining acceptance as they start to be used as commercial materials.
Research is now underway on several new versions of intermetallics, where it appears that two families may receive attention - FeAl-base alloys and silicides of bcc transition metals such as Mo and Nb. In both cases, it is the important industrial demand for materials capable of supporting higher temperatures in very aggressive environments that is important. Iron aluminide alloys - read as Al-rich Fe-base steels - have outstanding environmental resistance in a wide range of industrial environments, including sulphidising, carburising, and oxidising conditions. This industrial demand is leading to significant research and development activity, including via mechanical alloying, alloy composition control, and processing studies. Molybdenum silicides (based essentially on the MoSi2 compound) are widely used in the heat treatment industries for their good oxidation resistance, but have too low a toughness and suffer low-temperature "PEST" degradation to find wider use as structural materials. Significant research activity on alloy composition
and microstructural modification, and on processing methods, is taking place, including extensive activity on multi-phase alloys based on the MoSi2-Mo-Mo3Si-Mo5SiB2 system. The window being addressed here is one of very high-temperature and very aggressive (oxidising) environments, with improved fabricability and toughness/fatigue behaviour over that found in the competing ceramic-base materials.
Reviewing ALOS PALSAR Backscatter Observations for Stem Volume Retrieval in Swedish Forest
Between 2006 and 2011, the Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) instrument acquired multi-temporal datasets under several environmental conditions and multiple configurations of look angle and polarization. The extensive archive of SAR backscatter observations over the forest test sites of Krycklan (boreal) and Remningstorp (hemi-boreal), Sweden, was used to assess the retrieval of stem volume at stand level. The retrieval was based on the inversion of a simple Water Cloud Model with gaps; single estimates of stem volume are then combined to obtain the final multi-temporal estimate. The model matched the relationship between the SAR backscatter and the stem volume under all configurations. The retrieval relative Root Mean Square Error (RMSE) differed depending upon environmental conditions, polarization and look angle. Stem volume was best retrieved in Krycklan using only HV-polarized data acquired under unfrozen conditions with a look angle of 34.3° (relative RMSE: 44.0%). In Remningstorp, the smallest error was obtained using only HH-polarized data acquired under predominantly frozen conditions with a look angle of 34.3° (relative RMSE: 35.1%). The relative RMSE was below 30% for stands >20 ha, suggesting high accuracy of ALOS PALSAR estimates of stem volumes aggregated at moderate resolution. (Remote Sens. 2015, 7, 4291)
Introduction
Throughout its lifetime (2006-2011), the Phased Array type L-band Synthetic Aperture Radar (PALSAR) instrument onboard the Advanced Land Observing Satellite (ALOS) acquired multiple images in several operating modes according to a predefined observation scenario [1]. Given the repeatedly acknowledged sensitivity of L-band data to forest variables, in particular in the cross-polarized backscatter [2-4] and under unfrozen conditions [5,6], the image acquisitions of ALOS PALSAR were tailored to provide repeated dual-polarized (Horizontal-Horizontal, HH, and Horizontal-Vertical, HV) data in the Fine Beam Dual (FBD) mode during the summer and fall of each year. In addition, HH-polarized images were acquired during the winter season in Fine Beam Single (FBS) mode. During each spring and late fall, a single dataset was acquired in the polarimetric (PLR) mode to obtain a full scattering matrix. These modes acquired images with a resolution of approximately 20-30 m and were operated along ascending orbits, i.e., at nighttime. Along descending orbits during daytime, PALSAR operated in the Wide Beam (WB) mode with a spatial resolution of approximately 70 m, to allow sharing of resources with two optical instruments [1]. The acquisition strategy was refined towards the end of 2006 by changing the look angle of the Fine Beam mode from 41.5° to 34.3° to reduce range ambiguities [1]. For the PLR mode, images were acquired with a look angle of 21.5°; since 2009, images were also acquired at 23.1°. In the remainder of this paper, we will refer to a specific acquisition configuration in terms of mode and integer part of the look angle (e.g., FBD34 stands for Fine Beam Dual mode with a look angle of 34.3°).
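The retrieval scheme summarized in the abstract - a simple Water Cloud Model inverted per image, with single-image stem volume estimates combined into one multi-temporal estimate, and accuracy reported as RMSE relative to the in situ mean - can be sketched as follows. This is a minimal illustration only: the coefficient values, the clipping bound and the sensitivity-based weighting are assumptions for the example, not parameters fitted in this or the cited studies.

```python
import numpy as np

def wcm_forward(v, sigma_gr, sigma_veg, beta):
    # Water Cloud Model: backscatter (linear power units) as a mix of a
    # ground term, attenuated by the canopy, and a vegetation term.
    att = np.exp(-beta * v)
    return sigma_gr * att + sigma_veg * (1.0 - att)

def wcm_invert(sigma0, sigma_gr, sigma_veg, beta, v_max=1000.0):
    # Invert the model for stem volume (m3/ha); clip the ratio to keep the
    # logarithm defined and to bound non-physical estimates.
    ratio = (sigma_veg - sigma0) / (sigma_veg - sigma_gr)
    ratio = np.clip(ratio, np.exp(-beta * v_max), 1.0)
    return -np.log(ratio) / beta

def multitemporal_estimate(estimates, weights):
    # Weighted combination of single-image estimates; the weights could, for
    # instance, reflect the sensitivity (sigma_veg - sigma_gr) of each image.
    w = np.asarray(weights, dtype=float)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

def relative_rmse(predicted, observed):
    # RMSE expressed relative to the in situ mean, as in the accuracy
    # figures quoted throughout this review.
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)) / np.mean(observed))

# Illustrative round trip with assumed coefficients:
sigma_gr, sigma_veg, beta = 0.01, 0.05, 0.008
obs = wcm_forward(150.0, sigma_gr, sigma_veg, beta)
v_hat = wcm_invert(obs, sigma_gr, sigma_veg, beta)  # recovers 150 m3/ha
```

In practice the model coefficients would be estimated separately for each acquisition from reference stands before inversion, so that each image carries its own calibrated model into the multi-temporal combination.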
Over Sweden, the amount of ALOS PALSAR observations from different acquisition modes is superior to most areas of the globe thanks to the involvement in the calibration and validation phase of the sensor [7] and in JAXA's Kyoto & Carbon Initiative [8], aimed at demonstrating the capability of ALOS data to support environmental conventions [1]. The advantage of multi-temporal observations with respect to a single observation lies either in the possibility to reduce speckle noise, thus reducing the error component in the estimation of forest variables from a single average image [9], or to reduce the error in single-image estimates with a combination of these [10,11]. The latter approach is in our understanding more powerful because the prediction capability of each observation is kept in the multi-temporal combination. Using ALOS PALSAR observations (FBD mode only), the retrieval of forest above-ground biomass improved by approximately 20% in terms of Root Mean Squared Difference (RMSD) with respect to the best single-image retrieval [12]. This confirmed previous results obtained for L-band HH-polarized backscatter [5,13], C-band repeat-pass interferometric SAR coherence and backscatter [10] and C-band Envisat Advanced SAR (ASAR) backscatter [14], all in boreal forest. Yet, the multi-temporal aspect of ALOS PALSAR data was only partially exploited in studies dealing with the retrieval of forest variables. Either a single image (e.g., a mosaic product) was used to derive an estimate of above-ground biomass [15-17], or retrieval from multiple images (multi-temporal, multi-polarization) was undertaken with a regression and the results compared [18-21]. Multiple regression or machine learning approaches combining SAR input datasets have been reported as highly promising to estimate forest variables [22,23]. To the best of our knowledge, an approach involving the inversion of a forest backscatter model to estimate stem volume from multiple ALOS PALSAR images through
combining the single-image estimates has not been pursued yet.

This study set out to exploit the extensive ALOS PALSAR dataset acquired over Sweden in order to provide a comprehensive review of the stem volume retrieval achieved with multi-temporal observations of the SAR backscatter from several acquisition modes of the PALSAR instrument. Given the simple relationship between stem volume and above-ground biomass in boreal forest, expressed by means of a biomass conversion and expansion factor of approximately 0.5 [24], the terms stem volume and above-ground biomass are used interchangeably here. With respect to [12], we address the benefits of multi-temporal observations for other modes besides FBD, also having a larger number of observations available. Another objective was to assess a Water Cloud-based modeling approach to retrieve stem volume in view of an operational retrieval scheme such as the one used for hyper-temporal C-band backscatter data [14].

This study was undertaken at the boreal forest test site of Krycklan and the hemi-boreal forest test site of Remningstorp. Both test sites have been used in several studies to relate airborne and spaceborne SAR backscatter and interferometric data to forest variables (see [25] for a recent overview). At Krycklan, a model-based retrieval from P-band airborne backscatter data resulted in a Root Mean Square Error (RMSE) relative to the in situ mean value of stem volume of 28%–42% [26]. Using linear regression and several polarimetric indicators, an RMSE of 17%–25% was obtained from L-band airborne data (supported by polarimetric interferometric data) [27]. The same approach applied to P-band data returned an error between 5% and 27%. The errors further decreased when using non-parametric methods; nonetheless, the error span was also larger [27]. At Remningstorp, retrieval of above-ground biomass based on single images of the SAR backscatter was evaluated with backscatter from low frequency data (L-, P- and VHF-band),
repeat-pass interferometric C-band coherence, and single-pass interferometric X-band data. The RMSE decreased for decreasing frequency, being between 31% and 46% at L-band [28], between 18% and 27% at P-band [28] and below 25% at VHF-band [29] using SAR backscatter data. With repeat-pass interferometric C-band coherence, an RMSE of 27% was obtained [30]. Single-pass interferometric X-band data yielded a relative RMSE of 23% (average over RMSEs from 18 image pairs); a multi-temporal combination of single-image estimates improved the retrieval error to 16% [25]. Multi-temporal retrieval of stem volume using L-band backscatter in Swedish boreal forest was investigated at the test site of Kättböle with nine backscatter images acquired by JERS-1 in 1997–1998 in single polarization (HH) and with a fixed look angle of 34.3° [5]. The retrieval was most accurate under unfrozen conditions, did not present systematic errors due to backscatter saturation in high stem volume forest, and the RMSE was 25%.

Test Sites

The Remningstorp test site (Figure 1) is located in the south of Sweden (58°30′N, 13°40′E). The topography is fairly flat with a ground elevation between 120 m and 145 m above sea level. The test site covers about 1200 ha of productive forest land managed by Skogssällskapet and owned by the Hildur and Sven Wingquist's Foundation for Forest Research. Prevailing tree species are Norway spruce (Picea abies), Scots pine (Pinus sylvestris) and birch (Betula spp.). The forest is divided into stands mostly smaller than 10 ha with a range of stem volume conditions up to a maximum value of about 600 m³/ha at stand level. The forest stands are even-aged and consist mainly of coniferous species (i.e., either spruce or pine, or mixed); only a few stands are dominated by deciduous species (i.e., birch). The Krycklan test site (Figure 1) is located in the north of Sweden (64°14′N, 19°50′E) and is a watershed managed and owned by both Swedish forest companies and private
owners. Topography is hilly with several gorges, and the ground elevation ranges between 125 m and 350 m above sea level. The forest land covers about 6800 ha of mainly coniferous forests. The prevailing tree species are Norway spruce and Scots pine, but some deciduous tree species, e.g., birch (Betula pubescens), are also present. The forest is divided into stands of different sizes, occasionally larger than 50 ha. Stem volume conditions range up to a maximum value of about 400 m³/ha. The forest stands are even-aged and consist mainly of either spruce or pine, or mixed species. In situ data consisted of digital stand boundary maps in vector format and stand-wise measurements of stem volume derived from forest field inventory data.

For Remningstorp, 340 subjectively inventoried stands were stratified into 100 m³/ha classes up to 700 m³/ha. Altogether, 56 forest stands were randomly selected for field inventory, ensuring representation of the entire stem volume range. Nonetheless, only for a few stands was the stem volume smaller than 150 m³/ha; most stands included forest with a stem volume above 200 m³/ha (Table 1 and Figure 2). Stand size was on average 3 ha, the largest stand being 11 ha (Table 1). Topography was flat, with a local slope angle of less than 2°. The inventory was undertaken in 2004 according to prescriptions in the forest management planning package (FMPP) developed by the Swedish University of Agricultural Sciences [31]. The FMPP includes an objective and unbiased method for estimation of forest variables such as stem volume, tree height, and tree species composition at stand level from measurements of individual trees. Given the high yearly growth rate of stem volume in the region (7.5 m³/ha/year [32]), the stem volumes were updated each year with stand-wise yearly growth factors available with the forest field inventory data. Stands where forest was felled at some time between 2006 and 2010 [33,34] were excluded from the
analysis of images acquired after the felling. For Krycklan, stem volume was available for 1131 forest stands; the inventory was undertaken during 2007 and 2008 with the same approach as for Remningstorp. Stem volumes were mostly below 300 m³/ha; forest stands included all growth stages ranging from young regrowth to mature forest (Table 1 and Figure 2). Forest stands were on average larger than in Remningstorp (Table 1); several stands covered an area larger than 10 ha. No major felling activity was reported to have occurred during the period of image acquisition. The average slope angle at stand level was between 0° and 20°; for approximately 90% of the stands, the slope angle was smaller than 10°.

ALOS PALSAR Dataset

The ALOS PALSAR dataset available for this study is summarized in Table 2 with respect to operating acquisition modes and in Table 3 with respect to polarization/look angle. During 2006, PALSAR datasets were acquired in Fine Beam mode using several look angle configurations, primarily at 41.5° (Table 2). After the optimization of the look angle with respect to image quality, the Fine Beam mode was operated from 2007 onwards with a look angle of 34.3°. The large number of FBS34 and FBD34 datasets is explained by the repeated observations (at least two in FBS and three in FBD per year) over Sweden (Table 2). PLR images were acquired throughout the ALOS mission in spring and late fall with a look angle of 21.5°, except during the fall of 2009 when the 23.1° look angle was also used (Table 2). The largest multi-temporal datasets were acquired with a look angle of 34.3°, primarily at HH-polarization (Table 3). Repeated acquisitions were also available in PLR mode with a look angle of 21.5° and at HH-polarization with a look angle of 41.5° (Table 3). Unfortunately, five of the six acquisitions in PLR21 mode over Krycklan covered the test site only partially and were therefore discarded, thus not allowing any multi-temporal analysis in that mode. For both test sites, we also
had available a multi-temporal dataset of images acquired in WB mode. However, these were not considered here because of the moderate resolution (approximately 70 m) and the small size of the forest stands (Table 1), which caused the stand-wise averages of the backscatter to be affected by significant residual speckle noise.

Part of this dataset was already utilized to analyze the signature of the PALSAR backscatter as a function of look angle, polarization and environmental conditions [35]. It is extended here with images acquired after April 2008 until January 2011, shortly before the end of data acquisition in March 2011 and the end of the ALOS mission in May 2011. The additional acquisitions increased the multi-temporal dataset in the modes FBS34, FBD34 and PLR, whereas no additional datasets with a look angle of 21.5° (FBS mode), 41.5° or 50.8° were acquired.

Environmental Conditions at Image Acquisition

The weather data consisted of daily observations of temperature (min/max), total precipitation and snow depth from meteorological stations near each test site, as reported in the Global Historical Climatology Network (GHCN) database by the National Climatic Data Center (NCDC), National Oceanic and Atmospheric Administration (NOAA). Since L-band backscatter data in Swedish boreal forest were found to be mostly sensitive to seasonal conditions (e.g., frozen or unfrozen conditions), we have grouped images according to the major environmental condition at the time of image acquisition (frozen, unfrozen, and freeze or thaw transition) (Table 4). For simplicity, we did not add information here about whether the images were acquired under dry or wet conditions; adequate reference is, however, given when presenting the results of this study (Section 5). Most images over Remningstorp were acquired under unfrozen conditions. At Krycklan, the majority of the observations were acquired under frozen conditions because of the colder climate compared to Remningstorp. As a result of
the PALSAR observation scenario timing the FBS mode during the winter season and the FBD mode between spring and fall, no dual-polarized images were acquired under frozen conditions. The only cross-polarized dataset acquired under frozen conditions belonged to a PLR21 dataset. At both test sites, several images were acquired during transition periods related to freeze and thaw conditions. The few datasets acquired with look angles of 41.5° and 50.8° were all acquired under unfrozen conditions, except for one HH-polarized image over Krycklan.

Processing of the PALSAR Images

PALSAR images were processed as described in [35] from Single Look Complex (SLC) Level 1.1 format to form a stack of calibrated, terrain-geocoded and topography-compensated images of the SAR backscatter. At first, all SAR images for a given acquisition mode (e.g., FBD34) were co-registered with respect to a master image using a cross-correlation algorithm [36]. Each SLC was then calibrated with factors published in [37] and multi-looked (i.e., spatially averaged) using mode-specific factors, aiming at roughly square pixels of approximately 20 m × 20 m in range and azimuth. The SAR backscatter images were finally geocoded using a Digital Elevation Model (DEM) from the Swedish National Land Survey (Lantmäteriet) with 50 m posting and orbital information provided by JAXA along with the image data. To maintain the high resolution of the PALSAR data (20 m), the DEM was resampled to this pixel size with a bilinear interpolation. Terrain geocoding was based upon a geocoding lookup table that described the link between pixels in the radar (input) and map (output) geometry [36]. Taking into account that SAR images for a given mode had been co-registered, just one lookup table per mode was required. Imperfect orbital information implies geocoding offsets with respect to the true output geometry. To compensate for such offsets, each lookup table was refined by estimating these offsets with a cross-correlation
algorithm between the mode-specific master SAR image and a SAR image simulated from the DEM, representing the output map geometry [36]. The geocoding accuracy following the refinement of the lookup table was below one third of the pixel size, i.e., less than 10 m in northing and easting. The geocoded SAR backscatter images were finally compensated for distortions of the backscatter due to sloped terrain by correcting for the effective pixel area (in radar geometry) and the local incidence angle [35]. The backscatter component due to object-specific scattering mechanisms and terrain slope [38] was not accounted for because it required additional information, which was not available for this study.

Working at stand level implied computing the average SAR backscatter and the standard deviation for each stand. We also computed the average and the standard deviation of the local incidence angle derived from the DEM for each stand in Krycklan. Here the local incidence angle spanned an interval of approximately 15°. The correlation coefficient between local incidence angle and backscatter for stands with similar stem volume (±5 m³/ha) was always below 0.3, justifying why we did not consider the local incidence angle as an additional explanatory variable in our investigation. The availability of a DEM with a pixel size of 50 m, i.e., well above the spatial resolution of the PALSAR data, was, however, sub-optimal for drawing conclusions on the real effect of the local incidence angle. This analysis was not necessary at Remningstorp because of the predominantly flat terrain.
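As a minimal sketch (not the authors' processing code), stand-wise averaging of backscatter is typically performed in linear power units before converting back to decibels, since averaging dB values directly underestimates the mean backscattered power:

```python
import numpy as np

def stand_mean_backscatter_db(sigma0_db):
    """Average per-pixel backscatter (given in dB) over a forest stand.

    The mean is computed in linear power units and then converted
    back to dB; a plain arithmetic mean of dB values would be biased low.
    """
    sigma0_lin = 10.0 ** (np.asarray(sigma0_db, dtype=float) / 10.0)
    return 10.0 * np.log10(sigma0_lin.mean())
```

For example, pixels at −10 dB and −20 dB average to about −12.6 dB in the linear domain, noticeably above the −15 dB a naive dB-domain mean would give.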
Stem Volume Retrieval Methodology

To retrieve stem volume from the ALOS PALSAR backscatter data, we used a model-based approach exploiting a Water Cloud Model with gaps [5,39]. The individual estimates of stem volume obtained from each SAR backscatter image by inverting the trained model were then combined with a linear weighted function, referred to as multi-temporal combination [5]. The modeling and retrieval approach was presented and discussed extensively for L-band in previous research papers [5,12,39]. An assessment of the performance of this approach to retrieve forest variables from L-band backscatter was recently presented with respect to other existing parametric and non-parametric approaches [40] and did not reveal significant shortcomings of the Water Cloud Model with gaps.

The Water Cloud Model with gaps assumes that the forest backscatter consists of a component coming from the canopy and a component originating at the ground surface, reaching the sensor either through the canopy gaps or, attenuated, through the canopy. Double-bounce and multiple interactions are not considered because in managed boreal forest these terms were found in previous studies to be negligible with respect to direct scattering (see [5] and the references cited therein). Polarimetric decomposition of the PLR data [41] confirmed that the total forest backscatter could be explained as a contribution of a surface and a volume component. In a more general context, a double-bounce term should not be discarded a priori [42,43].
The original forest backscatter model expressed the total forest backscatter as a function of a parameter of canopy closure from a microwave perspective, named the area-fill factor. In [10], it was shown that an equivalent expression could be obtained by replacing the area-fill factor and the related tree attenuation with stem volume and a factor expressing the two-way transmissivity of a forest. Equation (1) shows the semi-empirical model:

σ°for = σ°gr e^(−βV) + σ°veg (1 − e^(−βV))    (1)

In Equation (1), the forest backscatter, σ°for, is related to stem volume, V, in terms of a ground component and a vegetation component, where σ°gr and σ°veg express the backscattering coefficient of the ground and the vegetation, respectively. Both coefficients are unknown a priori and need to be estimated to allow an inversion of the model to retrieve stem volume. Each term is weighted by the fraction of ground seen through gaps and foliage (attenuated), expressed in the form of a two-way forest transmissivity e^(−βV). The coefficient β is empirical [10,44] and depends on forest structure and the dielectric properties of the canopy. Realistic values at L-band were found to be between 0.003 and 0.007 [5]. A reasonable approximation in boreal and temperate forest under unfrozen conditions was found to be β equal to 0.008 when relating the two-way forest transmissivity to above-ground biomass [12], which scales to 0.004 when using stem volume.

In this study, every second stand sorted by increasing stem volume was included in a training set; the rest of the stands formed the test set. Herewith, we tried to ensure that the training and the test set would represent the same range of stem volumes. Estimates of σ°gr, σ°veg and β were obtained by least squares regression using the measurements of the SAR backscatter and stem volume for the stands in the training set. Model training was also performed assuming a constant β set a priori equal to 0.006. Results will be compared in Section 5.
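The training step described above can be sketched as follows. This is illustrative only (function names, candidate β grid, and the synthetic training data are our own); it exploits the fact that for a fixed β, Equation (1) is linear in σ°gr and σ°veg, so a grid search over β combined with linear least squares stands in for whatever nonlinear least-squares routine the authors used:

```python
import numpy as np

def water_cloud(V, s_gr, s_veg, beta):
    """Water Cloud Model with gaps (Equation (1)), in linear power units."""
    T = np.exp(-beta * V)          # two-way forest transmissivity
    return s_gr * T + s_veg * (1.0 - T)

def train(V, sigma0, betas=np.linspace(0.001, 0.02, 191)):
    """Least-squares estimates of (s_gr, s_veg, beta) from training stands.

    For each candidate beta, solve the linear least-squares problem in
    (s_gr, s_veg) and keep the beta with the smallest residual sum of squares.
    Passing a single-element `betas` array reproduces the two-unknown
    training with beta fixed a priori (e.g., betas=np.array([0.006])).
    """
    best = None
    for beta in betas:
        T = np.exp(-beta * V)
        A = np.column_stack([T, 1.0 - T])
        coef, *_ = np.linalg.lstsq(A, sigma0, rcond=None)
        rss = np.sum((A @ coef - sigma0) ** 2)
        if best is None or rss < best[0]:
            best = (rss, coef[0], coef[1], beta)
    return best[1], best[2], best[3]

# Synthetic example: recover known parameters from noise-free "observations"
V_train = np.array([10, 50, 100, 150, 200, 250, 300, 350], dtype=float)
sigma0_train = water_cloud(V_train, 0.02, 0.06, 0.006)
s_gr, s_veg, beta = train(V_train, sigma0_train)
```

On noise-free synthetic data the three parameters are recovered exactly; with real stand-level measurements the fit is of course only approximate.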
Given a measurement of the forest backscatter, σ°for,meas, and the corresponding estimates of the model parameters σ°gr, σ°veg and β for the given SAR image, the inversion of the model in Equation (1) is straightforward and allows the estimation of the stem volume, Vest (Equation (2)):

Vest = −(1/β) ln[(σ°for,meas − σ°veg) / (σ°gr − σ°veg)]    (2)

Assuming that N measurements of the SAR backscatter are available for a given forest stand, the corresponding estimates of stem volume, Vest,i, can be combined to obtain a new estimate referred to as the multi-temporal stem volume, Vmt (Equation (3)):

Vmt = Σi (wi/wmax) Vest,i / Σi (wi/wmax)    (3)

The weights, wi, are here assumed to correspond to the difference of the backscattering coefficients for vegetation and ground, i.e., σ°veg,i − σ°gr,i. The coefficient wmax was equal to the largest of these differences.

The accuracy of the retrieval was quantified with (i) the RMSE with respect to the in situ stem volume in the test set; (ii) the relative RMSE, equal to the RMSE divided by the average stem volume derived from the in situ data forming the test set; (iii) the coefficient of determination R²; and (iv) the bias, equal to the difference between the average retrieved and in situ stem volume.
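A minimal sketch of the inversion and the multi-temporal combination (function names and the clipping behavior for out-of-range measurements are our own assumptions, not the authors' implementation):

```python
import math

def invert_wcm(sigma0_meas, s_gr, s_veg, beta, v_max=1000.0):
    """Invert the Water Cloud Model (Equation (2)) for stem volume.

    Measurements outside the model's dynamic range would produce an
    invalid logarithm or a negative volume; here they are clipped to
    [0, v_max] as one plausible handling choice.
    """
    ratio = (sigma0_meas - s_veg) / (s_gr - s_veg)
    if ratio <= 0.0:
        return v_max  # backscatter at or above the asymptotic vegetation level
    v = -math.log(ratio) / beta
    return min(max(v, 0.0), v_max)

def multi_temporal(v_est, weights):
    """Weighted combination of single-image estimates (Equation (3)).

    weights[i] is sigma0_veg,i - sigma0_gr,i; the normalization by the
    largest weight cancels in the ratio but is kept for clarity.
    """
    w_max = max(weights)
    num = sum((w / w_max) * v for w, v in zip(weights, v_est))
    den = sum(w / w_max for w in weights)
    return num / den
```

Images with a larger vegetation-to-ground backscatter difference, i.e., stronger sensitivity to stem volume, thus contribute more to the combined estimate.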
Relationship between SAR Backscatter and Stem Volume

The SAR backscatter increased with increasing stem volume, with a rapid ascent at low stem volumes (below approximately 100 m³/ha) followed by a significant decrease of sensitivity. The relationship between the SAR backscatter and stem volume depended upon environmental conditions and polarization (see examples in Figure 3), as well as on look angle. At Krycklan, we observed the strongest sensitivity of the backscatter to stem volume under unfrozen conditions and at HV-polarization (Figure 3a and Table 5). The backscatter increased rapidly with increasing stem volume; the sensitivity of the backscatter to stem volume became extremely weak in the densest forests. Observations taken during winter-time (frozen conditions or freeze/thaw events) were much less correlated with stem volume than under unfrozen conditions (Figure 3a,b and Table 5). At Remningstorp, we observed a slightly different trend, with SAR backscatter from data acquired under frozen conditions being better correlated with stem volume than in the case of data acquired under unfrozen conditions (Figure 3c,d and Table 5).
An almost linear trend between SAR backscatter and stem volume was observed in several cases when images were acquired under frozen conditions. There did not seem to be any apparent difference between statistics for images acquired under unfrozen moist (i.e., <2 mm of recorded precipitation) and unfrozen wet (i.e., >2 mm of recorded precipitation) conditions. Overall, the observations at the two test sites and the temporal consistency for a given environmental condition agreed with trends of the SAR backscatter at L-band with respect to stem volume and above-ground biomass in boreal as well as in other forest environments [5,6,15–20,22,23,28,39,45,46]. At both test sites, the spread of the SAR backscatter for a given stem volume was considerable, thus confirming that L-band backscatter captures only part of the information on the structural properties of a forest and that the signal recorded by the radar contains additional contributions [28].

Forest Backscatter Modeling

To illustrate the performance of the modeling approach with respect to the measurements of SAR backscatter and stem volume, we focus on the Krycklan test site because of the availability of stem volumes throughout all growth stages. The lack of stands with low stem volumes in Remningstorp hindered the assessment of the performance of the backscatter model in Equation (1) (as in [5]). Figure 4 shows one example of modeled and measured backscatter with respect to stem volume for each type of PALSAR dataset available at Krycklan. Taking into account a previous investigation where it was shown that the backscatter is highly consistent over time for a given polarization, look angle and environmental condition [35], the examples in Figure 4 can be considered general enough to represent the behavior of the backscatter for all images available in this study. The modeled backscatter followed well the trend in the measurements (Figure 4). The strongest sensitivity of the backscatter to
stem volume was found for look angles of 34.3° and 41.5° under unfrozen conditions, with an increase of approximately 3 dB at HH-polarization and 4 dB at HV-polarization. Under frozen conditions, the HH-polarized backscatter increased by only 1 dB. For the 21.5° look angle, the co-polarized backscatter increased by less than 2 dB with an almost linear trend both in the FBS mode (left panel in Figure 4) and in the PLR mode (right panel in Figure 4). The HV-backscatter of the PLR mode increased by slightly less than 3 dB, thus less than the observations at shallower look angles. The modeled backscatter in Figure 4 was obtained by considering three unknowns in Equation (1) and showed a large range of slopes, i.e., a wide range of values estimated for the parameter β. To gain further understanding of the behavior of the coefficient β, we looked at the statistical distribution of the estimates of β with respect to environmental conditions and look angle, which was possible only for HH-polarized data. Polarization did not seem to have an effect on the estimates of the coefficient β.
To understand the dependency of β upon environmental conditions, we selected the largest dataset for a given look angle and polarization, covering all seasons (34.3° look angle and HH-polarization). The estimates of β were more consistent under unfrozen conditions than under frozen conditions or during periods of freeze/thaw transitions (Figure 5). Under unfrozen conditions, the estimates of the coefficient β were mostly between 0.005 and 0.009 (Figure 5), in line with a previous investigation in Swedish boreal forest using JERS-1 data [5]. The estimates did not show any significant difference between dry, moist and wet environmental conditions, except for one observation acquired when 12 mm of precipitation were recorded during the day. On that date, the backscatter did not show any sensitivity to the stem volume and the model curve flattened after a rapid increase at the lowest stem volumes. As a consequence, the estimate of β should not be interpreted as having a physical meaning. The same explanation applies to the observations for the case with the highest overall β estimate (0.0189), corresponding to an acquisition under thawing conditions (temperature around the freezing point, diminishing snow cover, and precipitation). The environmental conditions affected the relationship between backscatter and stem volume to such an extent that they masked the true dependency between the two variables. Frozen conditions were characterized by the largest range of β estimates. From the weather records we could not infer dependencies between the estimates of β and weather parameters (e.g., temperature, snow depth, precipitation). We interpret the results as a consequence of the limited sensitivity of the backscatter to stem volume under frozen conditions; given the non-negligible spread of the backscatter measurements with respect to stem volume, the confidence interval of the β estimate was rather large.

(Figure 5 caption fragment: frozen conditions refer to images acquired under dry conditions as well as cases with snowfall; unfrozen conditions were labeled moist if <2 mm of precipitation was recorded, otherwise wet.)

The dependency of the estimates of β upon the look angle was limited (Figure 6). There did not seem to be any relevant difference between estimates corresponding to look angles of 34.3° and 41.5°. For both look angles, the histogram had a peak between 0.006 and 0.007. For the very few acquisitions at 21.5°, the estimates were somewhat lower (<0.004), which agrees with the understanding that at steeper look angles the forest transmissivity is higher because of larger gaps and less vegetation along the path travelled by the microwaves.

Based on the outcome of these analyses, we compared the modeled backscatter assuming β unknown a priori (i.e., model with three unknowns) and for a predefined β value (i.e., model with two unknowns). In the latter case, β was set equal to 0.006, which was considered the most reasonable value over all acquisition geometries, polarizations and environmental conditions. Figure 7 shows the modeled backscatter assuming two and three unknowns in Equation (1) for three extreme β values. The black curves correspond to the dataset for which the highest estimate of β was obtained (see Figure 7). The red and blue curves correspond to the acquisitions with the highest and lowest β estimate for dry conditions, respectively; both images were acquired under frozen conditions. The modeled backscatter for such extreme cases differed only for the lowest (below 50 m³/ha) and highest (above 250 m³/ha) stem volumes. However, except for the dataset acquired under thawing conditions, the difference between the model realizations based on three and two unknowns was minimal, suggesting that a retrieval based on a modeling solution where the β coefficient is set a priori equal to a constant would perform equally well compared to a more rigorous approach where β is unknown.
Retrieval of Stem Volume

To verify that the retrieval of stem volume based on a model containing only two unknowns would perform similarly to the case of three unknowns, the RMSEs for each image acquired over Krycklan were compared (Figure 8). The scatter plot shows that the error was lower when using β = 0.006 in most cases. Only for some images acquired under frozen conditions did the model training with three unknowns perform better. Nonetheless, these images were characterized by low correlation, a consequence of the weak sensitivity of the backscatter to stem volume. At Remningstorp, the model training with a constant β = 0.006 performed in general worse than when β was assumed unknown a priori (Figure 9). This is a consequence of the distribution of stem volumes in the dataset available to this study. The dataset included mostly mature forest, characterized by weaker sensitivity of the backscatter to stem volume than at low stem volumes (Figure 3). The lack of stands with low stem volumes caused the estimate of the ground backscatter in the model training with two unknowns to be more imprecise, and the modeled backscatter only partially fitted the observations. This effect was most prominent in the images showing the highest correlation between stem volume and backscatter. It was indeed negligible for all other images, where the stem volume and the backscatter were almost uncorrelated. The evaluation of the retrieval at both test sites is therefore based on the model with two unknowns and constant β = 0.006.

The RMSE for single images differed depending on polarization and environmental conditions. We evaluate the error at Krycklan in Figure 10 and Table 6; the RMSE was smaller for HV- than for HH-polarized data under similar environmental conditions. For a given polarization, slightly lower errors were obtained under unfrozen dry conditions compared to unfrozen wet and thaw conditions; much larger errors were obtained under frozen conditions
(HH-polarization only) because of the much weaker sensitivity of the backscatter to stem volume (Figures 3 and 4). At Remningstorp, the retrieval error was smallest under frozen conditions at HH-polarization (see the "Single image" retrieval column in Table 6 for 34.3°, HH and FBS). Under unfrozen conditions, the retrieval performed poorly because of the frequent wet and moist ground conditions, which almost entirely suppressed the sensitivity of the backscatter to stem volume and caused large variability of the backscatter for similar stem volumes.

The extensive dataset of PALSAR images acquired under different look angles, polarizations and environmental conditions allowed different groupings to assess the role of each on the multi-temporal combination of stem volume estimates. All retrieval statistics from the multi-temporal combination are reported in Table 6; results are grouped according to look angle and then for different combinations of polarizations. To appreciate the performance of the multi-temporal combination, the best and the worst relative RMSE for the retrieval based on a single image are also included in Table 6. Yearly retrievals have been considered to allow the multi-temporal dataset to include a fairly large number of stem volume estimates per stand while avoiding that growth and/or disturbances would distort the values of the in situ stem volumes used as reference. The retrieval error was never below 35%, being mostly between 40% and 70% and occasionally even in the 90% range. With respect to single-image retrieval, the stem volume estimates from the multi-temporal combination were closer to the in situ stem volumes. Table 6 shows substantial differences between the two test sites. At Krycklan, the agreement between the stem volumes retrieved with the multi-temporal combination and the in situ stem volumes was strongest for the 34.3°, HV-polarized dataset (only unfrozen conditions). The smallest relative RMSE was 44.0%, from data acquired during 2008
(Table 6). Figure 11 shows that the estimated stem volume agreed well with the in situ data; nonetheless, the scatter plot did not match the 1:1 line, indicating some deficiencies in either the modeling approach or the model training. The loose agreement between retrieved and in situ stem volumes is attributed to the large scatter of the SAR backscatter for a given stem volume (see Figure 3). At Remningstorp, the best agreement between retrieved and in situ stem volumes was obtained with the 34.3°, HH-polarized dataset; the contribution of stem volumes estimated from winter-time data was predominant. The smallest relative RMSE was 35.1%, from data acquired during 2009 (Table 6). As in Krycklan, the levels of retrieved and in situ stem volumes agreed well; nonetheless, the scatter between the two datasets was large (Figure 12). Remarkably, stem volume could be retrieved for the entire range of values represented at each test site (Figures 11 and 12). At both test sites, the multi-temporal combination of estimates from the two polarizations of the Fine Beam modes did not perform better than the best result obtained with a single polarization (i.e., HV for Krycklan and HH for Remningstorp) (Table 6). At Remningstorp, in some cases the multi-temporal retrieval using all observations performed worse than the best single-image retrieval or a combination based on the pair of images characterized by the lowest RMSEs (Table 6). The multi-temporal combination performed similarly across the different years, except when the RMSE was high for each of the images being combined (Table 6). In such cases, the retrieval statistics presented fluctuations, which are however of minor importance given that the retrieval performed poorly. The multi-temporal retrieval did not seem to be affected by the look angle, nor could we notice an advantage of using full polarimetric data with respect to single- or dual-polarized data (Table 6). In PLR mode, the best retrieval (in a
multi-temporal sense) was obtained with HV-polarized data only (Table 6); the contribution of stem volume estimates from other polarizations to the multi-temporal retrieval using all polarizations was minimal.

The retrieval error was finally investigated with respect to stand size. This investigation was possible at Krycklan only, because of the large range of stand sizes and the number of stands (Table 1). The relative RMSE for the multi-temporal combinations of stem volumes estimated from the 34.3°, HH- and HV-polarized datasets decreased with increasing minimum stand size (Figure 13), thus confirming results obtained in the Northeast U.S. [12]. For the retrieval based only on HV-polarized backscatter, the relative RMSE was below 30% for a minimum stand size of approximately 20 ha. Too few forest stands larger than 20 ha were available to compute a reliable relative RMSE, so it remains unclear whether the retrieval error would improve further or saturate, as in the case of HH-polarized data, where the relative RMSE was consistently between 37% and 42% for minimum stand sizes between 10 ha and 20 ha.
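The stand-size analysis above (relative RMSE versus minimum stand size, Figure 13) can be sketched as follows; variable names and the per-stand data layout are illustrative assumptions, with relative RMSE taken as the RMSE of retrieved versus in situ stem volume divided by the mean in situ stem volume.

```python
import math

# Hedged sketch: relative RMSE (%) over stands whose area exceeds a
# minimum size threshold, as in the stand-size analysis of Figure 13.
def relative_rmse(retrieved, in_situ, areas_ha, min_size_ha):
    pairs = [(r, s) for r, s, a in zip(retrieved, in_situ, areas_ha)
             if a >= min_size_ha]
    if not pairs:
        return None  # too few stands above the threshold
    mse = sum((r - s) ** 2 for r, s in pairs) / len(pairs)
    mean_in_situ = sum(s for _, s in pairs) / len(pairs)
    return 100.0 * math.sqrt(mse) / mean_in_situ
```

Sweeping `min_size_ha` over increasing thresholds reproduces the kind of curve shown in Figure 13, with the caveat that few large stands make the statistic unreliable, as noted in the text.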
Discussion

The extensive dataset of ALOS PALSAR images acquired over the two Swedish test sites of Krycklan (boreal forest, in the north) and Remningstorp (hemi-boreal forest, in the south) allowed a deep understanding of the relationship between stem volume and L-band backscatter observations and, in turn, of the possibility of retrieving stem volume. Taking into account that the stem volume estimates based on field measurements were updated each year with a term related to the growth factor (Section 2), we attempted to minimize the error introduced by a time lag between the acquisition of the PALSAR data and the in situ stem volume. The sensitivity of the SAR backscatter to stem volume differed depending primarily on polarization and environmental conditions (Figure 3 and Table 5). Under unfrozen conditions, the L-band backscatter contrast between low and high stem volumes is affected by the forest structure and by an external contribution due to soil moisture (and, to a certain extent, roughness). Wet conditions increase the backscatter in forests with low stem volumes while dense forests are less affected, resulting in a smaller backscatter contrast compared to unfrozen dry conditions. The effect of wet conditions is stronger in co-polarized data than in cross-polarized data because of the surface scattering, which is negligible in the latter. For images acquired under frozen conditions, the increased transmissivity of the L-band signal through the canopy causes an increase of the ground backscatter and a decrease of the canopy backscatter, resulting in an overall weaker sensitivity of the forest backscatter to stem volume. At Krycklan, the retrieval performed best under dry and unfrozen conditions, whereas frozen conditions were characterized by the largest errors (Figure 10). At Remningstorp, the frequently wet terrain under unfrozen conditions implied weak sensitivity of the backscatter to stem volume and caused the retrieval to perform poorly, regardless
of the polarization. As a consequence, the frozen conditions, which implied dry terrain, were characterized by the highest correlation coefficients (Table 5) and the smallest retrieval errors (Table 6).

The effect of look angle on the relationship between backscatter and stem volume was only apparent when comparing observations taken with a steep (21.5°) viewing geometry to those with a somewhat shallower (34.3° and 41.5°) look direction. For the latter, the sensitivity of the backscatter to stem volume was higher (Figure 4) as a consequence of the longer path travelled by the microwaves through the canopy, which then also implied lower retrieval errors (Table 6) for the same set of environmental conditions and polarization. This result has implications for the suitability of the data acquired in full polarimetric mode (PLR) with steep look angles (i.e., <25°). The retrieval of stem volume from backscatter measurements in the PLR21 and PLR23 modes was outperformed by data acquired in FBD (and/or FBS) mode because of the shallower look angle. Even the much larger number of stem volume estimates from the PLR mode compared to the FBD or FBS mode could not compensate in the multi-temporal combination for the intrinsic limitations of the viewing geometry used when acquiring in the PLR mode.
The retrieval of stem volume was undertaken with a fairly simple but well-known modeling approach. Yet, the results showed that there are some flaws both in the model and in the model training, which ultimately caused some systematic under- or overestimation of stem volume. Although the model could fit the measurements of backscatter and stem volume reasonably well for any combination of look angle, polarization and environmental conditions, the match was not always perfect (Figure 4). Given the large spread of the backscatter observations for a certain stem volume, part of the discrepancies between retrieved and in situ stem volumes could also be related to aspects not accounted for by the model. It is unclear whether such variability of the backscatter is a consequence of different forest structures or of other aspects (terrain slope, soil conditions, etc.). Assuming the parameter of the forest transmissivity term to be constant, the choice β = 0.006 did not seem to particularly affect the performance of the retrieval. For Remningstorp, the relative RMSE using β = 0.004 (i.e., increased transmissivity) was slightly better only in the case of winter data (32.7% vs. 35.1% for the best result using β = 0.006). Hence, it is necessary to take into account that this parameter can be spatially and temporally variable, even in a broad sense (i.e., season-dependent, forest-type-dependent). For this, an evaluation of multi-temporal ALOS PALSAR datasets over different forest environments would be needed. An evaluation of other model training approaches based on non-parametric methods [22,27,47] may be worth investigating in order to provide a more comprehensive overview of the limitations of the modeling solution adopted in this study.
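The role of the transmissivity coefficient β discussed above can be illustrated with a Water Cloud Model-type curve as a function of stem volume V. Since the paper's Equation (1) is not reproduced in this excerpt, the two-layer form below (a ground term attenuated by the canopy plus a vegetation term) is a common formulation used here purely as an assumption, with backscatter in linear units.

```python
import math

# Hedged sketch of a Water Cloud Model-type backscatter curve versus stem
# volume V (m3/ha). The exact form of the paper's Equation (1) is not
# shown in this excerpt; this two-layer form is a common formulation,
# with beta the forest transmissivity coefficient discussed in the text.
def wcm_backscatter(V, sigma_ground, sigma_veg, beta=0.006):
    t = math.exp(-beta * V)  # canopy transmissivity term
    return sigma_ground * t + sigma_veg * (1.0 - t)
```

At V = 0 the curve reduces to the ground backscatter, and for large V it saturates toward the vegetation backscatter, reproducing the rapid initial increase and subsequent saturation described in the text; changing β (e.g., 0.004 vs. 0.006) changes how quickly the curve saturates.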
Looking back at the ALOS PALSAR acquisition strategy and data availability [49], the archives include the best possible datasets for the retrieval of stem volume in boreal forest. In contrast, the retrieval in hemi-boreal forest, and in general in forest environments where the FBD data were acquired during periods of moist soils, may not perform equally well. Given the lack of similar analyses in other forest types, it is not possible to quantify the benefit of the multi-temporal observations in a more general sense. The ALOS-2 PALSAR-2 mission, started in May 2014, foresees the acquisition of dual-polarized HH and HV images during the winter season as well [49], with potential improvement of the retrieval outside of the boreal biome because of the often dry conditions. Yet it is unclear how sensitive the HV backscatter is to stem volume under frozen conditions, given the weak attenuation of the L-band signal in the canopy and the negligible contribution of surface scattering to the cross-polarized backscatter. The increased bandwidth (28 MHz) for the dual-polarization mode should also lead to improved estimation of stem volume given the finer scale at which data will be available. Some concern applies to the reduced availability of dense multi-temporal stacks of images, which might limit the exploitation of multi-temporal approaches to retrieve forest stem volume or above-ground biomass.
Multi-temporal observations are key to improved stem volume retrieval accuracy with respect to a single-image retrieval, especially in the case of short wavelengths (X- and C-band) that are more affected by environmental conditions than L- and P-band. The major benefit of multi-temporal observations is the decrease of the retrieval error in stem volume/above-ground biomass ranges in which the sensitivity of the SAR backscatter is weak. While this approach is recommended for current spaceborne missions providing primarily data on SAR backscatter, it is likely that it will represent a simple complement in future missions specifically targeting the retrieval of forest variables (BIOMASS [50], NISAR [51], SAOCOM-CS [52]). These will prefer acquisition strategies providing observables more closely related to forest structural parameters (e.g., SAR interferometry, SAR polarimetric interferometry and SAR tomography), while the SAR backscatter will be useful for additional information, such as forest detection and forest cover change mapping.
Conclusions

This study looked at six years of ALOS PALSAR backscatter data (2006-2011) at two forest test sites in Sweden (Krycklan and Remningstorp) with the aim of quantifying the capability of such observations to retrieve forest stem volume. The results confirmed the rapid increase of the SAR backscatter with increasing stem volume in sparse forest (up to approximately 100 m³/ha) followed by a marginal increase in high-density forest. The relationship between SAR backscatter and stem volume differed depending on look angle, polarization and environmental conditions. A straightforward modeling approach based on the Water Cloud Model with gaps was able to follow reasonably well the trend in the observations; nonetheless, the retrieval was affected by the simple formulation of the model and the model training. The best retrieval results (44.0% at Krycklan using only HV-polarized data acquired under unfrozen conditions with a look angle of 34.3°; 35.1% at Remningstorp using only HH-polarized data acquired under predominantly frozen conditions with a look angle of 34.3°) indicate a reasonable performance of ALOS PALSAR backscatter for retrieving stem volume at stand level; the smaller retrieval errors for larger stands (relative RMSE <30% for stands >20 ha at Krycklan) suggest that accurate stem volume estimates are feasible at moderate resolution (pixel size >300 m) by aggregating the SAR backscatter observations from the original spatial resolution (i.e., 20-30 m) at the level of a forest stand or a moderate-resolution raster.

Figure 1. Map of Sweden showing the location of the two test sites of Remningstorp and Krycklan.

Figure 2. Bar chart of stem volume distribution in Remningstorp and Krycklan. Bars were grouped into intervals of 20 m³/ha.

Figure 4.
Measured and modeled PALSAR backscatter as a function of stem volume for Krycklan. The model curves are based on Equation (1). The crosses and the vertical bars represent the median backscatter and the interquartile range in 25 m³/ha large intervals of stem volume. All data acquired under unfrozen conditions unless specified in the legend (fr = frozen).

Figure 5. Estimates of the model parameter β at Krycklan with respect to environmental conditions for PALSAR data acquired with 34.3° look angle and HH-polarization. "Unfr." refers to unfrozen conditions. Frozen conditions refer to images acquired under dry conditions as well as cases with snow fall. If precipitation was recorded at <2 mm, the unfrozen conditions were moist; otherwise, the conditions were wet.

Figure 6. Histograms of the estimates of the model parameter β at Krycklan as a function of look angle.

Figure 7. Three examples of modeled backscatter as a function of stem volume assuming β unknown (solid curves) and set a priori (dashed curves). The measurements of backscatter are represented by crosses and vertical bars (median and interquartile range) for groups of stem volume, each being 25 m³/ha wide. Test site: Krycklan.

Figure 8. Scatter plot of single-image RMSEs for a model with three unknowns (horizontal axis) and a model with two unknowns where the parameter β was set a priori equal to 0.006 (vertical axis). Test site: Krycklan.

Figure 9. Scatter plot of single-image RMSEs for a model with three unknowns (horizontal axis) and a model with two unknowns where the parameter β was set a priori equal to 0.006 (vertical axis). Test site: Remningstorp.

Figure 10. Distribution of single-image retrieval RMSE at Krycklan for combinations of look angle, polarization and environmental conditions for which multi-temporal SAR backscatter observations (at least three) were available.
Figure 11. Scatter plot of retrieved stem volume with respect to in situ stem volume in the case of all HV-polarized images acquired during 2008 over Krycklan with a look angle of 34.3°.

Figure 12. Scatter plot of retrieved stem volume with respect to in situ stem volume in the case of all HH-polarized images acquired during 2009 over Remningstorp with a look angle of 34.3°.

Figure 13. Relative RMSE with respect to minimum stand size for the multi-temporal combination of stem volumes estimated from HH- and HV-polarized images acquired during 2008 over Krycklan with a look angle of 34.3°.

Table 1. Distribution of stand size and stem volume in the forest field inventory data used in this study for the test sites of Remningstorp and Krycklan.

Table 2. Number of PALSAR datasets available over Remningstorp and Krycklan grouped according to acquisition mode. Each dataset acquired in the FBS, FBD and PLR mode consisted of 1, 2 and 4 images, respectively.

Table 3. Number of PALSAR images available over Remningstorp and Krycklan grouped according to polarization and look angle.

Table 4. Frequency of environmental conditions at image acquisitions grouped according to polarization and look angle for each test site (Re: Remningstorp; Kr: Krycklan).

Table 5. Distribution of the Pearson's correlation coefficient between stem volume and SAR backscatter for a given combination of look angle, polarization and environmental condition. Combinations listed consist of at least three PALSAR datasets. The minimum (Min), three quartiles (Q1, Q2 and Q3) and the maximum (Max) are listed. For combinations with three datasets, only Q2 is given. For combinations including four or five datasets, only Min, Q2 and Max are given. Transparent cells refer to Krycklan and shaded cells to Remningstorp.

Table 6.
Retrieval statistics for multi-temporal combinations available in the PALSAR datasets. For each combination, the best and worst retrieval statistics for a single-image retrieval are also reported. Transparent cells refer to Krycklan and shaded cells to Remningstorp.
Assessment score comparison of knowledge and clinical reasoning of medical students educated by bedside or conference room teaching in internal medicine ward

Introduction

Bedside teaching has been proposed as one of the ideal methods in medical education. Skills of history taking, physical examination, professional attitude and a comprehensive diagnostic and treatment approach to patients are instructed in this manner. In addition, students learn clinical reasoning and clinical problem solving at the bedside of patients.1-4 While all human aspects are considered, a real patient provides an opportunity to be trained in real-world procedures.5
While most doctors would agree that bedside teaching is a necessity in medical education, in practice the subject is often ignored, so that while in 1960 approximately 75% of medical education was conducted at the bedside, this amount has now dropped to somewhere around 8% to 19%.6 This is Sir William Osler's idea that "the best teaching is that taught by the patient himself" and "for the junior student in medicine and surgery it is a safe rule to have no teaching without a patient for a text."7 When a student is faced with real patients, a situation with patient perception arises which can invoke patient care and improve the practice of medicine.8 But what is now happening in most educational institutions is that education is started at the bedside but the discussion is continued in a corridor or a conference room away from the patient. In other words, bedside rounds and medicinal practice are currently being replaced with conference room case presentations. Several reasons are cited for the decline of bedside rounds and bedside teaching. One of the main reasons is the shift from patient-oriented medicine to reliance on technology and testing to diagnose diseases.6 However, a key issue, namely educational reasons, is not considered in this regard. Logically, students are often concerned with how to achieve good assessments in a competitive field. Multiple Choice Questions (MCQ), Key Feature (KF) clinical reasoning and Objective Structured Clinical Examination (OSCE) are the main methods for assessing medical students in most academic arenas.9,10 The impact of educational methods on the outcome of educational assessments has been less studied in educational institutions. A study comparing exam scores of students after training with one of 2 methods, structured bedside teaching or traditional bedside teaching, indicated that the results are not different.11
Structured bedside teaching is an educational modality designed to fill in gaps found in traditional bedside teaching in medical education. Since bedside learning is very important in the practice of medicine and students tend to be interested in teaching methods with which they can earn better scores on exams, the present study was planned to compare the effect of 2 teaching methods, traditional patient's bedside teaching and conference room case presentations, on the most common methods of knowledge and clinical reasoning assessment, namely the MCQ and the KF clinical reasoning examination, respectively.

Materials and Methods

This quasi-experimental study was carried out at the Department of Internal Medicine hospital affiliated with the Birjand University of Medical Sciences (BUMS) (South Khorasan province, Iran) during 2 consecutive semesters in the 2016 academic year.

Study design

Based on a census of 71 medical students who attended the internal medicine ward to complete the training course, all 71 were selected for the study. All students who passed their pulmonary pathophysiology course with the same teacher were included. Based on pulmonary pathophysiology course scores (scored out of 20), similar students were paired. Pairs were then split into 2 similar groups (Figure 1). The purpose of the study was explained to each group separately. MCQ and KF clinical reasoning examinations were used for the evaluation of knowledge and clinical judgment regarding the theme of chronic obstructive pulmonary disease (COPD) as the topic of educational research. The MCQ exam contained 30 questions and the KF exam contained 7 questions. Each question on the KF exam contained 16 priority options based on clinical decision making. MCQ and KF questions were approved by an expert at the Education Development Center (EDC) of the BUMS. Content validity of questions was reviewed and approved by 2 independent academic scholars in the Department of Internal Medicine. Due to specific
issues, a post-exam split-half reliability test was used to measure the MCQ's internal consistency (Spearman-Brown coefficient value for internal consistency was 0.217). The opinions of 2 academic scholar referees were reviewed and coordinated with the designer of the KF questionnaire to achieve acceptable validity, and their agreement in determining the most correct answer (inter-rater reliability >66%) was used to achieve acceptable reliability in the KF questionnaire. Each question in the MCQ exam was awarded a score of 0 or 0.66 and the total possible score was 20. In the KF exam, the 4 of 16 items with the greatest accuracy had to be selected by students. The score for each most accurate answer was 0.72. Options other than the correct answers scored zero, up to 4 selections. A negative score (-0.72) was calculated for every wrong answer beyond 4 selections to avoid distortion of replies; therefore, no more than 4 options were chosen by students for each question on the KF exam. Thus each question on the KF exam was scored out of a maximum of 2.88, and the total score of the KF questionnaire was scaled to 20. The clinical ward rating was also scored out of a maximum of 20.
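The KF scoring rule described above can be sketched as follows; the data layout (option identifiers, selections taken in order) and the interpretation that only wrong answers beyond the fourth pick are penalized are assumptions for illustration.

```python
# Hedged sketch of the per-question KF scoring rule: each of the 4 keyed
# options earns 0.72, other options score zero, and every wrong answer
# beyond 4 selections costs 0.72 (maximum 2.88 per question).
def score_kf_question(selected, keyed, pts=0.72, max_picks=4):
    score = 0.0
    for i, option in enumerate(selected):
        if option in keyed:
            score += pts   # one of the 4 most-correct options
        elif i >= max_picks:
            score -= pts   # wrong answer beyond the 4th selection
    return score
```

The penalty removes any incentive to select more than 4 options, which is why, as the text notes, students never chose more than 4.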
Intervention

COPD was chosen as the theme of education and evaluation for the present study. Concomitant routine educational programs in the ward were also held for both groups. Additional training for COPD, independent of the routine ward curriculum, was conducted for one group (36 students) using the traditional patient's bedside teaching method, while for the other group (30 students) lessons were presented by the conference room case presentations method. To start the program in both methods, one student attended the COPD patient's bedside, took the history, carried out the physical examination, collected para-clinical records and became familiar with the prescribed medications. The collected data was then presented in the conference room apart from the patient's bedside or at the bedside of the patient, depending on the educational method. Training in the internal medicine department is carried out over a 3-month period and all education programs continued during the period concomitant with the topic of research. In the traditional patient's bedside teaching manner, COPD training was conducted at the patient's bedside round regarding all aspects of the patient suffering from COPD and/or concurrent diseases. In each training round the following items were discussed: (1) history taking and physical examination; (2) diagnostic approach to the patient; (3) interpretation of para-clinical findings recorded in the documents of patients; (4) information about prescribed medications for the patient; and (5) provision of self-care education to the patient. In each round various issues related to the case (the package of patient problems including COPD and other comorbidities) were considered and discussed, so a long time was needed to complete the related topic of COPD. Topics related to COPD were completed during 4 consecutive rounds (1.5 hours for each round, for a total of 6 hours). The students were divided into 3 groups (12 students in each group). In the conference room case presentations method the
case presentation and education program was conducted in the conference room away from the patient's bedside. A case of COPD was presented and topics related to COPD were discussed and completed in the conference room in one meeting session (1.5 hours, with the participation of all students in one session). The students' training activities and self-education were conducted freely in both groups. Evaluations were conducted by the MCQ and KF clinical reasoning examinations (all questions related to COPD) at the end of the 3-month period, independent of the routine ward exam. The mean score of the students obtained from all topics of educational programs in the ward was also compared to ensure matching of the 2 groups. Gender, ward educational curriculum and educational level of the student were considered as potential confounders. Stratified analysis comparing differences in frequencies or means of confounders between the 2 studied groups was used to remove these effects.

Statistical analysis

Data were analyzed using SPSS 23. Normality and variance homogeneity were tested using the Kolmogorov-Smirnov and the Levene tests. The Spearman-Brown coefficient value was used to test for internal consistency of the MCQ questionnaire. Independent t tests for data with normal distribution and the Mann-Whitney U test for non-normal distribution were carried out in the data analysis. The Chi-square test was used to compare differences in frequencies between groups. P values of less than 0.05 were considered significant at a 95% CI.
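The split-half reliability check mentioned above (Spearman-Brown coefficient for the MCQ) can be sketched as follows; the odd/even item split and the data layout (one list of per-item scores per student) are illustrative assumptions.

```python
import math

# Pearson correlation between two equally long score lists.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hedged sketch: split items into odd/even halves, correlate the half
# scores across students, then step up with the Spearman-Brown formula.
def split_half_reliability(item_scores):
    odd = [sum(s[0::2]) for s in item_scores]
    even = [sum(s[1::2]) for s in item_scores]
    r = pearson(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown step-up
```

The low value reported in the text (0.217) would correspond to a weak half-to-half correlation across students under such a split.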
Results

During 6 months, 66 undergraduate students, including 24 (36.4%) male and 42 (63.6%) female, were enrolled in the study. Interns and externs numbered 25 (37.9%) and 41 (62.1%), respectively. No differences in frequency of gender or educational level (as confounder factors) were observed between the 2 studied groups (Table 1). Because of personal problems, 2 of the students (interns) did not participate in the final ward assessment, but participated in all programs of training and assessment related to the theme of the research. Among participants, 36 students (54.5%) were trained in the traditional patient's bedside teaching manner. Among students allocated to the conference room case presentations method group, five did not attend class and were excluded from the study, and the remaining 30 students (45.5%) stayed in the group. The overall score and scores in each experimental group are shown in Table 2. The MCQ score was significantly higher in the group taught by the conference room case presentations method (Table 2). The ward rating scores (as one confounder factor) in the traditional bedside teaching and conference room case presentation method groups were 15.05±1.80 and 15±1.60, respectively (P=0.90). Scores on the KF clinical reasoning, ward ratings, and the MCQ exam showed no difference between interns and externs (Table 3). There was also no difference in assessment scores of all evaluation topics between men and women in the study (Table 4).

Discussion

In comparing scores of various evaluations in the present study, ward ratings of the studied students had the highest score. In this regard, no significant difference was observed between the traditional patient's bedside and conference room case presentations teaching methods.

Ward rating scores are based on collective opinions of several training providers. It seems that the diverse views of several training providers reduce the validity and reliability of student evaluations.12
In a qualitative study conducted with students by Calman and colleagues, they claimed that clinical assessment instruments pay little attention to clinical skills.13 Chapman believes that overcoming mental judgments in medical students' clinical competency evaluation is difficult.12 Students taught with the conference room case presentations teaching method earned higher scores in the MCQ exam than those trained in the traditional patient's bedside teaching manner. However, the results sometimes overlapped and more subjects would be needed to differentiate the methods more precisely. In a study carried out by Landry et al, they concluded that students are more comfortable asking questions in a classroom setting and getting an answer.14 In this sense, it seems that case presentations in the conference room setting provide more opportunities for students to be more successful in their MCQ exam. Currently, in most educational institutions, most exams are held using the MCQ format.15 There are several reasons for this institutional orientation toward the MCQ exam. When students are confronted with well-designed MCQs, application of knowledge and problem-solving skills will be assessed at an acceptable and consistent level.9 Other advantages include encompassing wider dimensions of knowledge, minimal impact of the examiner, being targeted, and a good ability to compare students.16 Therefore, if the MCQ exam is going to be carried out in educational institutions to assess a student's educational status, learning by the conference room case presentations method appears to be a more useful way for students to be more successful in their exams. In a study conducted by Chéron et al, it was suggested that students prefer case-based teaching and the MCQ exam.17
Case-based teaching in Chéron and colleagues' study is partly in accordance with the conference room case presentations method in our study. The mean score of the KF clinical reasoning exam in the group trained in the traditional patient's bedside teaching manner was higher than in the group educated by the conference room case presentations method, but the difference was not statistically significant. The KF question is one type of curriculum evaluation that is used to assess clinical reasoning and clinical judgment in medical students. Fischer et al believe that KF questions are a reliable method for clinical competency evaluation.10 The KF exam can be designed with varying numbers of questions and is able to distinguish between the experienced and the beginner student.18 The KF exam is generally seen as an appropriate method of clinical reasoning evaluation and able to predict clinical practice authority in the future, although it is not possible to evaluate all clinical aspects by this method.18 Higher scores on the KF exam can be a sign of being a more efficient physician in clinical practice in the future. One thing that should not be forgotten is the lack of popularity of this type of exam and also the reliance on classroom-based student learning in most educational institutes, including this teaching hospital. Therefore, if students and educational institutions gain experience with this type of exam, more accurate and more reliable results will be obtained.20,21 It is assumed that more experience in clinical practice must be accompanied by better scores on the KF exam in interns in comparison with externs.18
However, this was not true in our study and there was no difference in KF exam scores between externs and interns. One reason may be that our students have less exposure in a clinical setting at the bedside and thus do not learn clinical reasoning in practice during their externship. It has been pointed out by some studies that bedside teaching practice has significantly declined in recent years.1 The status in the study's teaching hospital is somewhat similar to other educational institutions, and so the clinical experience of interns is not expected to be different from that of externs. The time spent engaged in the traditional patient's bedside teaching manner was much higher. Clinical teachers argue that they do not have enough time to teach at the bedside. In addition, time constraints on behalf of students lead to less interest in students' attendance at the bedside.1 Overall, if the assessment is supposed to be conducted on the basis of multiple-choice questions, both teachers and students will prefer education by the conference room case presentations method. However, an unwanted effect of conference room case presentations will be surface rather than deep education and training.22 The researchers suggest that a combination of knowledge and practice evaluation methods should be applied in medical student assessment.23 Incorporation of interns and externs in the study groups could lead to bias in the study. Given the non-significant difference in frequencies between interns and externs in the current study's groups, this effect is minimal. That issue can also partly be adjusted by statistical analysis on gender. The overall mean scores in evaluations of clinical ward education did not differ significantly between the 2 study groups, so the effect of this confounder is also minimal.
In addition to COPD, the internal medicine ward training in the present study encompassed several dimensions of education. Therefore, given this broad spectrum of education and the negligible impact of the research topic (COPD) education programs on the ward rating score, the ward rating score could not be used to compare the 2 groups on the final outcome. However, the lack of statistically significant differences between the ward assessment scores of the 2 student groups suggests relatively matched conditions. The 2 groups were matched on the basis of pulmonary pathophysiology lesson scores. In fact, the similarity of the ward educational curriculum between the 2 study groups, together with the lack of statistically significant differences in their ward assessment scores, minimizes the confounding effect of the ward educational program on the research results. The need for all students to attend programs at the same time was a limitation. This problem was solved by placing students in groups of their choice for the traditional bedside teaching method; however, in the conference room case presentations group, five students were unable to attend the classes and were therefore excluded from the study after selection. Concomitant attendance of externs and interns in the study groups and concomitant implementation of ward education courses were other limitations. An additional limitation was the selection of just one educational topic (COPD) for the research, which restricted the design of a large number of questions to maximize the reliability of the questionnaire. Finally, a larger population and a broader range of educational topics are needed for more precise results.
Conclusion
The KF exam score was not statistically different between the 2 groups of students taught by the 2 methods of education.
Although the scores overlapped, the MCQ exam score was significantly higher in the student group taught by the conference room case presentations method than in those taught by the traditional patient bedside method. In addition, students spent less time in the conference room case presentations method. This means that if students must be evaluated and compared by the MCQ exam, they would prefer to be taught by the conference room case presentations method, something that is not appropriate in all practical respects in medical education but is less time-consuming and more convenient for teachers and students. The suggestion is to emphasize practical and clinical judgment assessment, instead of knowledge evaluation alone, in determining a student's certification. It would also be appropriate to conduct a study on a larger population, with similar educational levels and with similar, and broader, educational topics.
Table 1. Demographic characteristics in the traditional patient bedside (n = 36) and conference room case presentations (n = 30) groups
Table 2. Comparison of mean scores between students trained by the traditional patient bedside and conference room case presentations methods
Abbreviations: Tr-BT, Traditional Patient's Bedside Teaching; Cr-CP, Conference room Case Presentations; MCQ, Multiple Choice Questions; KF, Key Features clinical reasoning examination; WR, Ward Rating score. a Adjustment for clinical ward education in the two studied groups.
Table 3. Comparison of mean scores between intern and extern students
Table 4. Comparison of mean scores between male and female students
Evidence of Climate and Environmental Change in Nigeria: Synthesis from the Driving Force, Pressure, State, Impact and Response (DPSIR) Framework
This review employs the DPSIR framework to synthesise evidence of climate and environmental change in Nigeria. The study identified population, political, social, economic, and technological dynamics as the major drivers of human activities, with the indicators being land-use, water-use, and energy-use dynamics. Land-use and water-use, for example, which involve direct exploitation of land resources, result in landcover dynamics. The rate, extent, and magnitude of human activities are the proximate or direct factors that exert pressure on the environment, particularly the loss of vegetation cover and thus of CO2 and other GHG sinks, whereas energy use results in increased CO2 and other GHG emissions. This double tragedy is a direct contributor to climate and environmental change. Total energy consumption has increased in Nigeria, where both spatial and temporal variations in air temperature distribution have been observed, with the trend revealing that mean temperature has risen by about 1.2 °C along the coastal cities and 2 °C in extreme northern Nigeria. The observable state and trends of the Nigerian environment include an increase in temperature and an increase in extreme weather events. Droughts and desertification have persisted, as have changes in the frequency, amount, duration, and intensity of rainfall, as well as changes in landuse/landcover. As a result, 70-80 percent of Nigeria's original forest has vanished. As the area of Lake Chad has shrunk, an increasing number of fauna (primates) and flora species are threatened or endangered. Crop and livestock productivity declines or is lost, as are rural livelihoods, infrastructure, tourist potential, the agro-based manufacturing sector, and the energy sector, with increased food insecurity.
The current state and trend of climate and environmental change in Nigeria has prompted responses, mitigation, and adaptation in order to increase resilience and adaptive capacity and to reduce vulnerabilities and risk.
INTRODUCTION
Climate change has emerged as one of the most significant environmental issues confronting countries around the world, and many policymakers are gradually recognizing it as a top priority (Jalloh, 2013). There is compelling scientific evidence that the earth's atmosphere is changing faster now than at any other time in human history as a result of sustained greenhouse gas (GHG) emissions from anthropogenic activities (Rockström et al., 2009; Steffen et al., 2011; Ndabula et al., 2014). Although global climate change and variability have become more pronounced, there is significant variation across geographic areas. According to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Africa's climate will likely remain warmer than the global average, and annual rainfall on the continent is expected to fluctuate throughout the twenty-first century (Niang et al., 2014). Across Sub-Saharan Africa and Nigeria, increased CO2 concentration is believed to have resulted in warming across diverse geographical regions and eco-climatic zones (Figure 1), altering rainfall patterns, including the total amount received and its seasonal distribution (Abiodun, Lawal, Salami, & Abatan, 2013a; Niang et al., 2014; Ogungbenro & Morakinyo, 2014; Oguntunde, Abiodun, & Lischeid, 2011; Olaniran, 1991a). Adesina and Odekunle (n.d.), Oladipo (1993), and Ndabula et al. (2013) reported that the impacts of climate change on Africa are much more extreme because of the economy's vulnerability to climate change and a lack of capacity to adapt.
As a result, extreme climatic change events, including heavy flooding (Ndabula et al., 2012a), water and heat stress (Jidauna et al., 2016), shortages and droughts, and sea level rise, are becoming more frequent (Ayanlade et al., 2018; Oloruntade et al., 2017), with negative consequences affecting all sectors of Nigeria's economy, including agriculture, transportation, telecommunications, and power infrastructure (BNRCC (Building Nigeria's Response to Climate Change Project), 2011; Cervigni & World Bank, 2013). Given the current challenges posed by climate and environmental change in Nigeria, it is essential to improve understanding in order to devise successful policies and initiatives to build resilience to their effects. Although evidence of climate and environmental change can be found in a variety of studies conducted across Nigeria, these are rarely brought together to form a comprehensive picture or to shed light on the existence and scale of both. Through a thorough synthesis of indicators using the DPSIR framework, this study explores the main driving forces, the environmental pressures, the state of the Nigerian environment in space and time, and the impacts these driving forces, pressures, and states have inflicted on Nigerians. This review is organized into five sections. The following section provides context by describing Nigeria's physical and socioeconomic settings. This is followed by a section explaining the DPSIR framework and its applications. Then comes the results section, which synthesizes evidence of climate and environmental change using the DPSIR framework, followed by a conclusion.
2. NIGERIA'S PHYSICAL AND SOCIO-ECONOMIC SETTING
2.1 Physical Setting
2.1.1 Location and Extent
Nigeria is located on the west coast of Africa, between approximately latitudes 4° and 14°N and longitudes 2° and 15°E, near the extreme inner corner of the Gulf of Guinea (Figure 2). Nigeria occupies an area of 923,768 sq. km (356,669 sq. mi), with approximately 13,000 km2 (1.4%) of this area covered by water and the remaining 98.6% by land (Abiodun, Lawal, et al., 2013). The country has a total boundary length of 4,900 km (3,045 mi), of which 853 km (530 mi) is coastline. The Niger-Benue basin, Lake Chad basin, and Gulf of Guinea basin are the three major drainage regions in Nigeria. Short rivers pour into the Gulf of Guinea, draining the coastal areas. The country's coastline is about 853 kilometers long, with roughly 80% of it located in the Niger Delta region.
Eco-Climatic
Nigeria spans different climatic and ecological zones (Figure 2). The variable climatic conditions and physical features have endowed Nigeria with a very rich biodiversity. The country's rich fauna is also a result of the diverse vegetation types of these ecosystems, which range from rainforests in the south to moist savannah in the central part of the country. The climate in Nigeria is characterized by relatively high temperatures and variations in the amount of precipitation throughout the year, with two alternating seasons. The rainy season generally lasts from April to October and the dry season from November to March. The mean annual rainfall ranges from about 450 mm in the north east to about 3500 mm in the coastal south east, with rain falling on 90 to 290 days per year, respectively. Annually, the average temperature ranges from 21 to 32 °C in the south, while the north has a temperature range of 13 to 41 °C (Ogunjobi et al., 2018). The mean annual temperature ranges from 27 °C in the south to 30 °C in the north, with extremes of 14 °C and 45 °C, over an altitude range of 0-1000 m above sea level.
2.2 Socio-Economic Setting
2.2.1 Population
Nigeria had a population of over 140 million in 2006.
At the current growth rate of 2.62 percent, the population was predicted to reach around 206 million people in 2020 (see also Figure 4 below) (Population Reference Bureau, 2018). The population is distributed unevenly over the country, with around 65 percent residing in rural areas and the rest in urban regions.
2.2.2 Agriculture
In Nigeria, agriculture remains the bedrock of the economy, as it provides a living for the majority of the populace. The World Bank reported that the agricultural sector alone accounts for 33% of Nigeria's total GDP and employs around 23% of its total economically active population. The dry northern savannah is appropriate for sorghum, millet, maize, groundnuts, and cotton. Cash crops such as oil palm, cocoa, and rubber can be grown in the south, and rice can be grown in low-lying and seasonally flooded areas. The country has 68 million hectares of arable land, abundant freshwater resources covering about 12 million hectares, and an ecological diversity that enables it to produce a wide variety of crops (Bashir & Kyung-Sook, 2018).
UNDERSTANDING DPSIR FRAMEWORK AND ITS APPLICATIONS
This study employs a Driver-Pressure-State-Impact-Response (DPSIR) framework. The DPSIR framework was developed by the Organization for Economic Cooperation and Development to aid decision making and research (Bidone & Lacerda, 2004). DPSIR stands for Driving force (D), Pressure (P), State (S), Impact (I), and Response (R) (Figure 3). Drivers are the social, demographic, and economic developments in societies. They are often defined as socioeconomic sectors that fulfill human needs for food, water, shelter, health, security, and culture, and they can originate and act at global, regional, or local scales. Pressure on the environment can be caused by intentional or unintentional human activities.
These include land use changes, resource consumption, release of substances, and physical damage through direct contact with the environment, and they depend on the kind and level of technology involved in the human activities (Domínguez-Gómez, 2016). The State of the environment is the condition of the abiotic and biotic components of ecosystems. Changes in the quantity and quality of physical, biological, and chemical variables, such as temperature, CO2 concentrations, habitat, species, and biodiversity, can alter the state of the environment (Amthor, 2001). Impacts occur when the welfare and well-being of humans are compromised by changes in the quality and functioning of ecosystem services. Ecosystem goods and services are ecosystem functions or processes that directly or indirectly benefit human social or economic drivers, or have the potential to do so in the future (Adekola & Mitchell, 2011). Responses are actions taken by groups or individuals in society and government to prevent, compensate, ameliorate, or adapt to changes in the state of the environment, by seeking to control drivers or pressures through regulation, prevention, or mitigation, or to directly maintain or restore the state (Woodley, 2011).
Journal of Environment and Earth Science, www.iiste.org, ISSN 2224-3216 (Paper), ISSN 2225-0948 (Online), Vol. 11, No. 7, 2021
The framework has been used in Nigeria to increase understanding of threats to ecosystem services, the importance of these services to dependent communities, and potential management measures in the Niger Delta (Adekola & Mitchell, 2011); for the assessment of bio-physical indicators of desertification status in the semi-arid zone of Nigeria (Ndabula, 2015); and for oil spillages in the Niger Delta area (Madu et al., 2018). Its application to climate change, however, is still limited and only a few studies exist. This study seeks to fill this void by broadening the DPSIR's application to the climate change discourse in Nigeria.
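As a rough illustration only (not part of the original review), the five DPSIR components described above, together with example Nigeria-specific indicators drawn from this review, can be sketched as a simple data structure:

```python
# Hypothetical sketch of the DPSIR framework as a data structure.
# The component names follow the framework; the example indicators
# are taken from this review's discussion of Nigeria.
from collections import OrderedDict

DPSIR = OrderedDict([
    ("Driver",   ["population growth", "economic growth", "energy demand"]),
    ("Pressure", ["land-use change", "fuelwood consumption", "GHG emissions"]),
    ("State",    ["rising temperature", "variable rainfall", "shrinking Lake Chad"]),
    ("Impact",   ["reduced crop yields", "population displacement", "health risks"]),
    ("Response", ["regulation", "mitigation", "adaptation"]),
])

def component_of(indicator: str) -> str:
    """Return the DPSIR component under which an indicator is filed."""
    for component, indicators in DPSIR.items():
        if indicator in indicators:
            return component
    raise KeyError(indicator)

print(component_of("fuelwood consumption"))  # Pressure
```

Such a mapping makes the causal chain explicit: each indicator synthesised in the results below can be filed under exactly one component of the framework.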
It is hoped that this study will bring together information from a variety of sources in Nigeria's expanding debate on climate and environmental change.
In 2020, Nigeria's population was estimated at 206 million individuals. Between 1950 and 2020, the number of people living in Nigeria increased steadily at a rate above two percent; in 2020, the population grew by 2.58 percent compared with the previous year (Population Reference Bureau, 2018). The consumption of land resources is directly proportional to population growth, so as the population grows, demand for land resources comes to exceed their regenerative capacity. Nigeria currently has the world's seventh largest population and is the second fastest growing after India, and it is expected to become the third largest by 2050 (Population Reference Bureau, 2018). Eririogu et al. (2020) showed that the mean annual pressure on land resources in Nigeria over the past five decades (1967-2017) was 9.323 hectares per capita, while the projected pressure over the next five decades (2018-2068) was 213.178 hectares per capita. This result implies that about 73.08 percent of the per capita pressure in the past five decades emanated from arable land consumption (6.813 ha), while 75.91 percent of the pressure in the next five decades is expected to emanate from fossil land due to crude oil and mineral resource exploration and exploitation. This signals a reduction in carrying capacity from 6.4091 hectares per capita (1967-2017) to 1.667 hectares per capita (2018-2068) (Eririogu et al., 2020). Moreover, the capacity of land for agriculture has deteriorated as population demand on arable land has increased. According to Olagunju (2015), the current level of resource consumption in agriculture is resulting in high rates of erosion, pollution, and soil deterioration.
Economic Growth
Nigeria's goal is to become the world's 20th largest economy by 2020.
Because of the feedback mechanism between the environment and economic growth, there is a significant link between the two. Over the years, the environment has played a significant role in Nigeria's economic development (Mereu et al., 2018). First, it generates natural resources that are used as inputs in the manufacturing process. Second, the natural environment absorbs the air, water, and solid contaminants produced during manufacturing and consumption (Kehinde et al., 2020). While economic activities such as agriculture, urbanization, oil and gas exploration and refining, power generation and transmission, and transportation are important to achieving that vision, they also have a negative impact on the environment, causing environmental and climate change (Adewuyi et al., 2020). Increasing consumption of non-renewable resources, increased pollution, and the potential loss of environmental ecosystems are all examples of the environmental consequences of economic expansion in Nigeria. Moreover, growing demand for food, minerals, energy, and tourism may place further strain on the environment. The majority of Nigeria's economic activities are primary and secondary, such as agriculture, mining, and manufacturing (Adewuyi et al., 2020). The primary sector extracts or gathers raw resources and basic foods from the land: agriculture (both subsistence and commercial), mining, forestry, grazing, hunting and gathering, fishing, and quarrying are all examples of primary economic activity.
Energy Demand and Consumption
Fossil Fuels Consumption
Nigerian energy consumption: energy consumption per capita, for example, is an indicator that reflects the annual consumption of commercial primary energy (e.g., coal, lignite, petroleum, natural gas, and hydro, nuclear, and geothermal electricity) in kilograms (Ansar et al., 2014; Olaniyan et al., 2018). The quest for energy will increase to meet the demands of a growing human population.
The situation will eventually lead to an increase in carbon emissions, because households, automobiles, industries, and other sectors will use more fossil fuel, resulting in higher emissions of carbon, a gas that is harmful to health and that, among other things, causes global warming (weather modification) and air pollution (Cervigni et al., 2013). Related pressures include more cars (and hence more pollution), as seen in Lagos and Abuja; a water table that is rapidly dropping below normal (water scarcity); overuse of natural resources; deforestation; desertification; urban sprawl; the clearing of land for residential use; and increased garbage (Cervigni et al., 2013).
Biomass Energy Dynamics in Nigeria
Biomass consumption as a source of energy in Nigeria has shown an increasing trend from 1971 to 2011, as shown in Fig. 5 below. Fuelwood and charcoal constitute the bulk of biomass energy consumption in Nigeria (Ben-Iwo et al., 2016; Ohimain, 2010). More than 80% of households use fuelwood for cooking, making it the most used form of cooking energy (Ansar et al., 2014; Naibbi & Healey, 2013). While overdependence on fuelwood in the country has been attributed to its availability and affordability compared with other sources of energy, the tendency towards excessive total fuelwood consumption is due to population growth, the low technical efficiency of traditional cooking styles, and the lack of adoption of other sustainable cooking methods (Jewitt et al., 2020). The southern parts of the country use more modern fuels (kerosene and gas) than their northern counterparts, whose cooking fuel choice is related to the erratic supply of fossil fuel in the region (Olaniyan et al., 2018). The supply of Liquefied Petroleum Gas (LPG) in the northern part of Nigeria is high in Kano state and moderate in Kaduna state. According to Onyekuru et al. (2020), despite the high supply of LPG in these states, more than 65% of their households depend on fuelwood for cooking.
In contrast, Anambra, Delta, and Ogun states in the south receive a low supply of LPG. Fuelwood consumption in Nigeria is very high across the geo-political zones, as shown in Fig. 6. The North East is the highest (93.7%), followed by the North West (91.8%) and North Central (74.0%). The South West zone has the lowest fuelwood consumption (37.2%), which may be attributed to the greater availability of hydrocarbon fuel in the zone because of its proximity to the seaport where it is imported, while about 40% of households used fuelwood (Adamu et al., 2020).
State of Climate and Environment
The state of the climate and environment is determined by observing spatiotemporal patterns or trends in forms and processes across the Nigerian landscape. Nigeria's climate is changing, as evidenced by rising temperatures; variable rainfall; rising sea levels and flooding; drought and desertification; land degradation; more frequent extreme weather events; diminished freshwater resources; and loss of biodiversity. While there is a general decrease in rainfall in Nigeria, coastal areas such as Warri, Brass, and Calabar have been observed to be experiencing slightly increasing rainfall in recent times (Odjugo, 2005). This is clear evidence of climate change, because one of its most noticeable effects is increased rainfall in most coastal areas and decreased rainfall in continental interiors (Akpodiogaga-a & Odjugo, 2010). According to Odjugo (2009), the number of rain days has decreased by 53% in north-eastern Nigeria and 14% in the Niger-Delta coastal areas. These studies also revealed that, while the areas experiencing double rainfall maxima are shifting southward, the short dry season (August Break) is occurring more frequently in July, as opposed to its normal occurrence in August prior to the 1970s.
Rainfall durations and intensities have increased, causing massive runoff and flooding in many parts of Nigeria, and rainfall variability is expected to increase further. Precipitation is expected to increase in the south, and rising sea levels are expected to exacerbate flooding and coastal land submergence. Climate change has been described as having a "considerable impact" on Nigeria (Ogunjobi et al., 2018). Temperatures in Nigeria have risen by about 1.6 °C since the beginning of the industrial era, which is higher than the global average, and could rise by another 1.5-5 °C by the end of the century, depending on the rate of future climate change (Niang et al., 2014). Despite the rise in average temperatures, there has been little research into how climate change has affected heatwaves in Nigeria. However, research indicates that heatwaves will become more common in Nigeria regardless of future warming, and the number of "hot nights" is also expected to rise sharply in the coming decades (Akinbile et al., 2020). Hot nights are those in which a region's nighttime temperatures are in the top 10%. Hot nights are known to aggravate existing respiratory and other health problems and have previously been linked to increased mortality rates. Droughts have become a regular occurrence in Nigeria since the 1980s as a result of decreased precipitation and increased temperature, and they are expected to continue in northern Nigeria (Ogungbenro & Morakinyo, 2014). Lake Chad and the country's other lakes are drying up and facing extinction (Pham-Duc et al., 2020). According to Odjugo (2005), Nigeria north of 12°N is under severe threat of desertification, and sand dunes are now common features of desertification in states such as Yobe, Borno, Sokoto, Jigawa, and Katsina.
Advances in extreme heat, in particular, pose a threat to the many millions of Nigerians who do not have access to electricity or air conditioning: in cities, only 92 out of every 1,000 people have access to air conditioning, and in rural areas only 14 out of every 1,000.
4.2.1 State of Biodiversity
Climate and environmental change, exacerbated by human activities and the reckless exploitation of species, continue unabated. Human activities such as bush burning, hunting, and poaching have continued to endanger the existence of wildlife in all environments (Osemeobo, 1988). As a result, many animals are on the verge of extinction and are classified as threatened or endangered. Endangered or threatened species in Nigeria include the Cross River gorilla, forest and savanna elephants in Cross River and Yankari national parks, the Niger-Delta red colobus monkey, the African lion in Yankari and Kainji Lake national parks, the leopard in Gashaka Gumti national park, and the African wild dog in Kainji Lake national park (Anadu, 1987). Others include the gazelle, giraffe, and Nile crocodile, as well as the grey-necked picathartes.
Impact Indicators of Environmental and Climate Change in Nigeria
4.3.1 Ecological Impacts of Climate Change
The current 0.2 m sea level rise has inundated 3,400 km2 of Nigeria's coastal region, and if sea level rise reaches the projected 1 m on or before 2100, 18,400 km2 of Nigeria's coastal region may be inundated (NEST 2003). A metre rise in sea level would pose a serious threat to coastal settlements such as Bonny, Forcados, Lagos, Port Harcourt, Warri, and Calabar, among others, which lie less than 10 meters above sea level. Sea-level rise causes salt-water intrusion into fresh water, as well as the invasion and destruction of mangrove ecosystems, coastal wetlands, and coastal beaches. Drought and desertification have become more common as temperatures have risen and rainfall has decreased.
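The two inundation figures reported above (0.2 m of sea level rise inundating 3,400 km2, and a projected 1 m inundating 18,400 km2) can be illustrated with a simple linear interpolation. This is an illustration only: the assumption that inundated area scales linearly between these two endpoints is ours, not a claim of the source.

```python
# Illustrative only. The two anchor points come from the review (NEST 2003):
#   0.2 m rise -> 3,400 km2 inundated; 1.0 m rise -> 18,400 km2 inundated.
# Linearity between them is an assumption made for this sketch.

def inundated_area_km2(sea_level_rise_m: float) -> float:
    """Interpolate inundated coastal area (km2) for a given rise (m)."""
    x0, y0 = 0.2, 3_400.0
    x1, y1 = 1.0, 18_400.0
    slope = (y1 - y0) / (x1 - x0)  # km2 per metre of rise
    return y0 + slope * (sea_level_rise_m - x0)

print(round(inundated_area_km2(0.5)))  # ~9025 km2 at a 0.5 m rise
```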
Forest destruction is also a difficult challenge that exposes the environment to the more severe effects of climate change (Terdoo & Adekola, 2014). Forests clean the air, improve water quality, preserve soils, and provide food, wood products, and medicines to the world's population, as well as habitat for many of the world's most endangered wildlife species (Adekola & Terdoo, 2015; Adekola & Mitchell, 2011; Anadu, 1987). In addition to the problem of desertification, climate change is expected to increase the prevalence of pests and diseases that kill forest trees. Explanations have also been offered for the gradual extinction of forest tree species in Nigeria's various ecosystems, including the iroko tree and oil bean in the South East, various mahogany species in the South West, and the baobab, locust bean, and gum arabic in the North West (Osemeobo, 1988). Years of drought in Nigeria have been documented as years of invasion of locusts and pests on tender farm crops, resulting in hunger. Floods destroy farm crops near river banks, resulting in poor harvests, and destroy bridges, obstructing access to markets and farms (Echendu, 2020).
i. Hydrology
Climate change will alter all aspects of the hydrological cycle, from evaporation through precipitation to runoff and discharge (Ayeni et al., 2015; Pham-Duc et al., 2020). Floods, for example, can also result from increases in the frequency and intensity of heavy rainfall events caused by atmospheric changes (Ndabula et al., 2012b; Jidauna et al., 2016; Echendu, 2020). Global warming and decreasing, erratic rainfall produce minimal recharge of groundwater resources, wells, lakes, and rivers in most parts of the world, especially in Africa, thereby creating water crises.
In Nigeria, many rivers have been reported to have dried up or become more seasonally navigable, while persistent droughts and desertification, declining frequency, amount, duration, and intensity of rainfall, changes in landuse/landcover, and increases in human and livestock water demand and exploitation have reduced the inflow of runoff or discharge, with the attendant shrinking of Lake Chad in the North East of Nigeria.
4.3.2 Socio-Economic and Demographic Impacts
Climate change, occurring as either slow- or rapid-onset events, is a threat to global economic development, affecting various sectors of the economy (Ati et al., 2002). The impacts of climate and environmental change are estimated to hurt Nigeria's economy badly, with projected GDP losses of between 2% and 11% by 2020 (Cervigni et al., 2013).
i. Demographic and security impacts
Drought, desertification, and migrating sand dunes have buried large expanses of arable land, reducing viable agricultural land and crop production (Eririogu et al., 2020). This has prompted massive emigration and resettlement of people to areas less threatened by desertification. Such emigration gives rise to social effects such as loss of dignity and social values, and it often results in an increasing spate of communal clashes between herdsmen and farmers (Akpodiogaga-a & Odjugo, 2010; Ndabula et al., 2017). The worst impact is population displacement, which may result in communal crisis. Coastal inundation and erosion, with their associated population displacement, are currently major environmental problems in Nembe, Eket, and other coastal settlements in Bayelsa, Delta, Cross River, Rivers, and Lagos States of Nigeria (Eririogu et al., 2020). It is estimated that a metre rise in sea level would displace about 14 million people from the coastal areas of Nigeria (Sayne, 2011).
The number of environmental refugees has drastically increased as people are forced to leave their homes for alternative destinations offering relative safety (Morland, 2017). Drought and desertification in the North East, with the attendant reduction in the size of Lake Chad and other forms of land degradation, have intensified conflicts among pastoralists, farmers, and fishermen, increasing the number of environmental refugees. Similarly, oil spillage, gas flaring, and land pollution from other crude oil activities have devastated ecological resources, making land uncultivable and sea resources economically unviable, and thereby adding to the number of environmental refugees (Alhaji et al., 2018; Pham-Duc et al., 2020).
ii. Human Health and Epidemiology
Lake Chad has shrunk to 10% of its original size, and many rivers in Nigeria, particularly in northern Nigeria, are in danger of drying up. Because of the scarcity of water, users will tend to congregate around the few remaining sources. Under such conditions, there is an increased risk of further contamination of the limited water sources, as well as of the transmission of water-borne diseases such as cholera, typhoid fever, guinea worm infection, and river blindness. According to Odjugo (2000) and DeWeerdt (2007), rising temperatures will cause mosquitoes to migrate northward and malaria fever to spread from the tropical region to the warm temperate region, while the sporogony of the protozoa that causes malaria will shorten from 25 days at 10 degrees Celsius to 8 days at 32 degrees Celsius. Heat exhaustion, famine, water-related diseases (diarrhoea, cholera, and skin diseases), inflammatory and respiratory diseases (cough and asthma), depression, skin cancer, and cataract will all increase as a result of climate change's excessive heat, increasing water stress, air pollution, and suppression of the immune system. Confalonieri et al.
(2007b) also identified the health effects of climate-induced drought, including sudden deaths, malnutrition, and infectious and respiratory diseases. Countries within the "Meningitis Belt" of semi-arid sub-Saharan Africa have experienced the highest endemic and epidemic frequency of meningococcal meningitis. The authors noted that the spatial distribution, intensity, and seasonality of meningococcal (epidemic) meningitis appear to be strongly linked to climate and environmental factors, especially drought.

iii. Infrastructure
Floods are low-probability, high-impact events that can overwhelm physical infrastructure and human communities (Confalonieri et al., 2007a). In 2012, Nigeria experienced major flooding that affected 21 states, displacing thousands and destroying homes, farmlands, and infrastructure, particularly roads, electric poles, and pipelines, which resulted in a shortfall of food production in the second half of 2012 (Okonjo-Iweala, 2013; Ujah, 2009).

iv. Economy
Agriculture: Over 70% of Nigeria's population relies on rain-fed agriculture and fishing as their primary source of income, which accounts for more than 33% of national income (Mereu et al., 2018). Variability in the timing and amount of rainfall therefore poses a high risk to the food production system (Mereu et al., 2018). Crops account for nearly 94 percent of the agricultural sector in Nigeria, and some areas are already experiencing a 20 percent reduction in the length of growing days (Ati et al., 2002). Temperature increases reduce the growth rates of maize, guinea corn, millet, and rice. Warming trends also make root crops and vegetables more difficult to store for those without access to refrigerators. Increasing variation in the timing and amount of rainfall will have a negative impact on agriculture. Water scarcity may also reduce crop and livestock production, necessitating imports (Akinbile et al., 2020).
The impact of climate change on agriculture can be seen in southern Nigeria through the effects of gully erosion on land use. Gully erosion has caused the loss of a significant amount of arable land in Anambra, Enugu, Ebonyi, and Kogi States. Sometimes erosion takes the form of leaching, devastating agricultural lands, rendering them unproductive and resulting in poor harvests. Drought caused by climate change, particularly in the eastern parts, has produced environmental conditions such as decreased pasture, soil moisture, and surface inflow of water, particularly to Lake Chad, causing it to shrink. This has resulted in overpopulation of livestock in the Chad basin, as well as a southward shift, complicating farmer-fisherman-herder conflicts (Bashir & Kyung-Sook, 2018). Empirical studies (Ubani & Onyejeke, 2013; Ojimba & Iyagba, 2012) have found that gas flaring and crude-oil spillages and pollution reduce agricultural yield, cause farmland loss, and degrade fish and other aquatic resources in the Niger Delta region (Adekola & Mitchell, 2011). Aside from floods and drought, other extreme weather events, such as hailstorms accompanying heavy rains, cause widespread destruction of rural farmers' houses and agricultural crops, particularly in northern Nigeria (Ibrahim, 2012; Abubakar, 2013). Furthermore, severe floods have destroyed vast swaths of fertile floodplain farmland. Increased rainfall in southern Nigeria, combined with irregular rainfall events, has also caused flooding, which has harmed mining operations as well as offshore drilling in the region (Adesina & Odekunle, 2011). Manufacturing: According to Yahaya et al.
(2011), the effect of global warming on climate-vulnerable sectors of the economy, such as agriculture and coastal resources that provide inputs for industry, has threatened infant industries as well as small and medium-sized businesses (Mereu et al., 2018; Olaniran, 1991). According to the authors, this development will harm the country's GDP through reduced access to manufacturing inputs. Similarly, the manufacturing sector has suffered significant losses from flooding, impeding its ability to meet production targets, because of its reliance on agricultural products as inputs, which are vulnerable to climate change (Nachmany et al., 2015). Tourism: According to Anadu (1987) and Osemeobo (1988), forests remain a haven for wildlife, and if the forests disappear, the wildlife does as well. This makes tourist attractions in Nigeria more vulnerable to climate change, resulting in a loss of patronage and revenue for the tourism industry. Food Insecurity: More than 800 million people in tropical and sub-tropical countries are currently food insecure as a result of increased crop failure and loss of livestock. Crops occupy nearly 94 percent of the agricultural sector in Nigeria, and some areas are already experiencing a 20 percent loss in the length of growing days (Ati et al., 2002). Pests and crop diseases can also spring up in response to climatic variations, which may hamper food storage. Extreme weather events like storms, heavy winds and floods may ravage farmlands, leading to crop failure and food shortage (Echendu, 2020). The resulting food insecurity is also expected to affect human health, livelihoods, and people's purchasing power at the household level across Nigeria (Oladipo, 2010).
According to Milos & Sani (2017), the warming trend also hinders livestock production, reducing animal weight and dairy yield. Desertification and falling Lake Chad water levels are likely to cause food shortages in the northern Sahel region, which accounts for 26.6 percent of Nigeria's land area. Agricultural practice in southern Nigeria is also affected by climate change because the area's low elevation exposes it to salt-water intrusion as the sea level rises (BNRCC (Building Nigeria's Response to Climate Change Project), 2011).

v. Impact on Energy Generation and Supply in Nigeria
Energy services are necessary inputs for every nation's development and growth; access to reliable and adequate energy is the fuel driving the engine of growth and sustainable development (Ben-Iwo et al., 2016). Nigeria has an abundant supply of energy sources, being endowed with thermal, hydro, solar and oil resources, yet it is still described as an energy-poor country (Olaniyan et al., 2018). Nigeria is highly vulnerable to the impact of climate change because its economy depends mainly on income generated from the production, processing, export and/or consumption of fossil fuels and associated energy-intensive products (Sayne, 2011). Nigeria will be increasingly affected by climate change through trends, increasing variability, greater extremes and large inter-annual variations in climate parameters in some regions (Akpodiogaga-a & Odjugo, 2010).
Climate change is also expected to further strain the already limited electrical power supply through impacts on hydroelectric and thermal generation, while service interruptions are also expected to result from damage to transmission lines and substation equipment caused by sea level rise, flash floods and other extreme weather events (Oguntunde et al., 2011).

Journal of Environment and Earth Science www.iiste.org ISSN 2224-3216 (Paper) ISSN 2225-0948 (Online) Vol.11, No.7, 2021

4.4 Responses
4.4.1 Nigeria's response to climate change includes strategies for mitigation, adaptation, and resilience building.
Mitigation: Responses are aimed at reducing GHG emissions from the four key sectors of the economy that are responsible for them (Table 1). To this end, the Federal Government of Nigeria pledged, as its Intended Nationally Determined Contribution (INDC) to climate change mitigation at the Paris climate summit in December 2015, to implement policies and strategies that facilitate a 20% unconditional and 40% conditional reduction in GHG emissions by 2030 (Nachmany et al., 2015). To that end, the Nigerian government intends to work on: (i) ending gas flaring; (ii) constructing efficient gas power plants; (iii) reducing transmission losses by improving the electricity grid; (iv) increasing off-grid solar PV by investing in renewable energy; (v) increasing economy-wide energy efficiency; and (vi) promoting climate-smart agriculture and reforestation. Previous reforestation efforts in Nigeria were among these strategies (Osemeobo, 1988). While reforestation has the potential to sequester CO2, thus mitigating climate change, such efforts have failed in Nigeria due to a lack of widespread conservation agricultural practices, unsustainable logging, and a growing population of households relying on fuelwood to meet their daily energy needs (Boahene, 1998; Osemeobo, 1988).
Renewable energy, such as the construction of solar panels, is another important avenue that has recently gained government attention, although on a grand scale it remains insignificant.
Adaptation and resilience: Table 1 highlights a number of overlapping responses that various sectors of the Nigerian economy could employ in the face of the aforementioned adverse effects of climate change (Sayne, 2011). Depending on the level of costs or investments required, these responses are classified as soft or hard, structural or non-structural. Soft or non-structural adaptation strategies frequently aim to help actors cope with the effects of climate change. Crop and livestock farmers, for example, commonly use soft adaptive practices such as shifting the sowing/planting date and planting drought-resistant crops and livestock feeds (Oloruntade et al., 2017). In practice, these soft measures are often less expensive and require no expert training, so they are widely used by resource-poor actors (e.g., smallholders) across Nigeria's eco-climatic belts. On the other hand, a number of hard or structural adaptation strategies are available, such as increasing the irrigated crop area and using inorganic soil fertilizers, among others (Table 1) (Adesina & Odekunle, 2011). The goals of these strategies extend beyond building a coping mechanism to building resilience in the various systems and sectors of the economy. Diversification, early warning systems, risk management, and human development and capacity building are examples of resilience-building strategies. One significant benefit of diversification is that it creates a backup or buffer that helps spread risks. Thus, in the agricultural sector, actors can shift from crop to livestock and vice versa, or to other sources of income (e.g., daily paid work).
New infrastructure can also be built across various sectors (e.g., energy, communication, and transportation) as a backup in case existing infrastructure is disrupted by the adverse effects of climate change (e.g., flooding, rainstorms, sea level rise, droughts).

CONCLUSION
This review synthesised evidence on the impacts of climate change in Nigeria (geographic, sectoral, demographic and security impacts) and the responses to address them (i.e., climate change mitigation and adaptation, adaptive capacity and capacity development). If the above-mentioned mitigation, adaptation, and resilience-building strategies are not implemented, the effects of climate change on the Nigerian economy are likely to worsen. This could result in GDP losses ranging from 6% to 30% by 2050, corresponding to N15 trillion (US$100 billion) and N69 trillion (US$460 billion) in monetary terms, respectively (Cervigni et al., 2013; Mereu et al., 2018; Nachmany et al., 2015). Consequently, the review advocates the rapid development of climate and environmental change policies in order to reposition the Nigerian environment for recovery and to avoid future over-exploitation of natural capital, which would eventually lead to the complete breakdown of ecosystem functions and services critical to supporting livelihoods across the country. The review also emphasizes the importance of adaptation planning and of inter-sectoral and inter-actor collaborative action for resilience building, in order to strengthen the Nigerian economy and improve the capacities and livelihoods of Nigerians in managing and mitigating the effects of climate and environmental change across all agro-ecological regions.

Table 1 (adaptation and resilience-building strategies; recovered from a flattened table):
- No tillage (Cervigni et al., 2013)
- Planting of cover crops, e.g. melon and potatoes (Adesina & Odekunle, 2011)
- Organic fertilization, e.g. manure, mulch, crop residues, or nitrogen-fixing trees or legumes (Cervigni et al., 2013; Mereu et al., 2018)
- Shift of the sowing/planting date
- Planting or maintaining shade canopy over plantation farms, e.g. live fencing, shelterbelts, and woodlots (Cervigni et al., 2013; BNRCC, 2012)
- Planting of drought-resistant crops and livestock feeds
- Laws to address open access, reduce bush burning, and establish grazing reserves or ranches, rotational grazing, or grazing corridors (Cervigni et al., 2013; BNRCC, 2012)
- Expansion of irrigated crop area
- Planting or sowing one month earlier or later than the traditional calendar (Cervigni et al., 2013; Mereu et al., 2018)
- Early warning systems
- Short-duration maturing crops and feeds
- Diversification (agriculture and infrastructure)
- Large- and small-scale irrigation plants
- Resilience and risk management: seasonal weather forecasts and communication (Ati et al., 2002); flood and drought alerts
- Shift from crop to livestock and vice versa, or to other livelihood activities (Adesina & Odekunle, 2011)
- Diversified energy and communication sources as backup
- Building new and maintaining existing road networks
- Human development and capacity building: crop and livestock insurance (BNRCC, 2011); strengthening individual and community-based emergency preparedness and response capacity; promoting and funding research and development projects; providing information and awareness, training, equipment, plans and scenarios, and communication (BNRCC, 2011)
Modelling galaxy spectra in presence of interstellar dust - III. From nearby galaxies to the distant Universe

Improving upon the standard evolutionary population synthesis (EPS) technique, we present spectrophotometric models of galaxies whose morphology goes from spherical structures to discs, properly accounting for the effect of dust in the interstellar medium (ISM). These models enclose three main physical components: the diffuse ISM, composed of gas and dust; the complexes of molecular clouds (MCs) where active star formation occurs; and the stars of any age and chemical composition. The models are based on robust evolutionary chemical models that provide the total amount of gas and stars present at any age and that are adjusted to match the gross properties of galaxies of different morphological type. We have employed the results for the properties of the ISM presented in Piovan, Tantalo & Chiosi (2006a) and the single stellar populations calculated by Cassarà et al. (2013) to derive the spectral energy distributions (SEDs) of galaxies going from pure bulges to discs, passing through a number of composite systems with different combinations of the two components. The first part of the paper recalls the technical details of the method and the basic relations driving the interaction between the physical components of the galaxy. Then, the main parameters are examined and their effects on the spectral energy distribution of three prototype galaxies are highlighted. We conclude by analyzing the capability of our galaxy models to reproduce the SEDs of real galaxies in the Local Universe and as a function of redshift.

INTRODUCTION
In recent years the first epochs of galaxy formation have been continuously pushed back in time by the discovery of galaxy-size objects at higher and higher redshifts: z ~ 4-5 (Madau et al. 1996; Steidel et al. 1999), z ~ 6 (Stanway, Bunker & McMahon 2003; Dickinson et al. 2004), z ~ 10 (Zheng et al. 2012; Bouwens et al.
2012; Oesch et al. 2012), and up to z ~ 10-20 according to the current view (Rowan-Robinson 2012). Furthermore, this high-redshift Universe turned out to be heavily obscured by copious amounts of dust (see for instance Shapley et al. 2001; Carilli et al. 2001; Robson et al. 2004; Wang et al. 2009; Michałowski et al. 2010), whose origin and composition are a matter of vivid debate (Gall, Andersen & Hjorth 2011a,b; Dwek, Galliano & Jones 2009; Draine 2009; Dwek & Cherchneff 2011). In this context, the current paradigm is that interstellar dust plays an important role in galaxy formation and evolution. Therefore, understanding the properties of this interstellar dust and setting up a physically realistic interplay between populations of stars and dust are critical to determine the properties of the high-z Universe and to obtain precious clues on the fundamental question of when and how galaxies formed and evolved.

⋆ E-mail: letizia@lambrate.inaf.it (LPC); lorenzo.piovan@gmail.com (LP); cesare.chiosi@unipd.it (CC)

It follows from all this that, to fully exploit modern data, realistic spectro-photometric models of galaxies must include this important component of the interstellar medium (ISM). This has spurred an unprecedented effort in the theoretical modelling of the spectro-photometry, dynamics, and chemistry of dusty galaxies (see for instance Narayanan et al. 2010; Jonsson, Groves & Cox 2010; Grassi et al. 2011; Pipino et al. 2011; Popescu et al. 2011).

Dust absorbs the stellar radiation and re-emits it at longer wavelengths, deeply changing the shape of the observed spectral energy distributions (SEDs) (Silva et al. 1998; Piovan, Tantalo & Chiosi 2006b; Popescu et al. 2011). It also strongly affects the production of molecular hydrogen and the local amount of ultraviolet (UV) radiation in galaxies, playing a strong role in the star formation process (Yamasawa et al. 2011).
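The absorb-and-re-emit picture just described can be illustrated with a toy energy-balance estimate (a minimal sketch of the general physics, not the radiative-transfer machinery used in this paper; function names are ours): a uniform dust slab of optical depth tau absorbs a fraction 1 - exp(-tau) of the incident starlight, and grains in thermal equilibrium at temperature T re-radiate that power with a spectrum peaking near the wavelength given by Wien's displacement law.

```python
import math

def absorbed_fraction(tau):
    """Fraction of incident starlight absorbed by a uniform dust slab of optical depth tau."""
    return 1.0 - math.exp(-tau)

def wien_peak_micron(temperature_k):
    """Peak wavelength (micron) of blackbody emission at temperature_k (K),
    from Wien's displacement law."""
    b = 2.897771955e-3  # Wien displacement constant, m K
    return b / temperature_k * 1.0e6

# A slab with optical depth tau = 1 absorbs ~63% of the light passing through it,
# and dust in equilibrium at ~30 K re-emits that power in the far-infrared (~100 micron),
# which is why dusty galaxies have SEDs skewed toward the IR.
print(absorbed_fraction(1.0))
print(wien_peak_micron(30.0))
```

This crude estimate already shows why optical starlight reprocessed by cold dust reappears in the far-infrared/submillimetre bands probed by the high-z surveys cited above.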
This paper is a sequel of the series initiated with Piovan, Tantalo & Chiosi (2003, 2006a,b), devoted to modelling the spectra of galaxies of different morphological type in presence of dust. In this paper we present new results for the light emitted by galaxies of different morphological type and age. Our galaxy model contains three interacting components: the diffuse ISM, made of gas and dust; the large complexes of molecular clouds (MCs) in which active star formation occurs; and, finally, the populations of stars that are no longer embedded in the dusty environment of their parental MCs. Furthermore, our model for the dust takes into account three components, i.e. graphite, silicates and polycyclic aromatic hydrocarbons (PAHs). We consider and adapt to our aims two prescriptions for the size distribution of the dust grains and two models for the emission of the dusty ISM. The final model we have adopted is a hybrid one which stems from combining the analysis of Guhathakurta & Draine (1989) for the emission of graphite and silicates with that of Puget, Leger & Boulanger (1985) for the PAH emission, using the distribution law of Weingartner & Draine (2001a) and the ionization model for PAHs of Weingartner & Draine (2001b). The SEDs of single stellar populations (SSPs) of different age and chemical composition, the building blocks of galactic spectra, are taken from Cassarà et al.
(2013), who have revised the contribution by asymptotic giant branch (AGB) stars, taking into account the new models of stars in this phase by Weiss & Ferguson (2009), and who include the effect of dust on both young stars and AGB stars. During the history of a SSP there are two periods of time in which self-obscuration by dust in the cloud surrounding a star, causing internal absorption and re-emission of the light emitted by the central object, plays an important role: young massive stars while they are still embedded in their parental MCs, and intermediate- and low-mass AGB stars when they form their own dust shell (see Piovan, Tantalo & Chiosi 2003, 2006a,b; Cassarà et al. 2013, for more details about AGB stars). With the aid of all this, we seek to get clues on the spectro-photometric evolution of galaxies, in particular taking into account the key role played by dust in determining the spectro-photometric properties of the stellar populations, and to set up a library of template model galaxies of different morphological type (from pure spheroids to pure discs, and intermediate types with the two components in different proportions) whose SEDs at different evolutionary times can be compared with galaxies of the Local Universe and as a function of redshift.

The strategy of the paper is as follows. The model we have adopted is shortly summarized in Sect. 2, where first we define the galaxy components we are dealing with, i.e.
bare stars, stars embedded in MC complexes, and diffuse ISM; second, we outline the recipes and basic equations for the gas infall, chemical evolution, initial mass function (IMF) and star formation rate (SFR); third, we describe how the total amounts of stars, MCs and ISM present in the galaxy at a certain age are distributed over the galaxy volume by means of suitable density profiles, one for each component, depending on the galaxy type: pure disc galaxies, pure spheroidal galaxies, and composite galaxies with both disc and bulge; finally, the galaxy volume is split in suitable elemental volumes, to each of which the appropriate amounts of stars, MCs and ISM are assigned. In Sect. 3 we explain how the SEDs of galaxies of different morphological type are calculated in presence of dust in the ISM: first the technical details of the method are described, then the basic relationships/equations describing the interaction between the physical components of the galaxy are presented. In Sect. 4 we shortly describe the composition of the dust. In Sect. 5 we list and discuss the key parameters of the chemical and spectro-photometric models for the three basic types of galaxy: pure discs, pure spheroids and composite systems. In Sect. 6 the SEDs of a few prototype galaxies are presented, first as they would be observed at the present time and then as a function of the age. In Sect. 7 we examine the present-day theoretical colours in widely used photometric systems as a function of the morphological type of the underlying galaxy and compare them with literature data. In Sect. 8 we present the colour evolution of the model galaxies as a function of redshift, assuming the nowadays most credited ΛCDM cosmological scenario, and compare the theoretical colours with the observational data currently available in literature. Finally, some concluding remarks are drawn in Sect. 9.

Physical components of galaxy models
As originally proposed by Silva et al.
(1998) and then adopted in many EPS models with dust (see for instance Bressan, Silva & Granato 2002; Piovan, Tantalo & Chiosi 2006a,b; Galliano, Dwek & Chanial 2008; Popescu et al. 2011), the most refined galaxy models with dust available in literature contain three physical components:
- The diffuse ISM, composed of gas and dust. The model for the ISM adopted in this study is described in detail in Piovan, Tantalo & Chiosi (2006a) and includes a state-of-the-art description of a three-component ISM made of graphite, silicates and PAHs, the most widely adopted scheme for a dusty ISM.
- The large complexes of MCs where active star formation takes place. In our model we do not consider HII regions and nebular emission. The MCs hide very young stars, whose SEDs are severely affected by the dusty environment around them and skewed toward the IR. The library of dusty MCs, for which a ray-tracing radiative transfer code has been adopted, is presented in Piovan, Tantalo & Chiosi (2006a) at varying the input parameters.
- The populations of stars free from their parental MCs. They are both intermediate-age AGB stars (with ages from about 0.1 Gyr to 1-2 Gyr) that are intrinsically obscured by their own dust shells (Piovan, Tantalo & Chiosi 2003; Marigo et al. 2008), and the old stars that shine as bare objects and whose radiation propagates through the diffuse galactic ISM. The effect of dust around AGB stars has been included ex novo, taking into account the new models of TP-AGB stars by Weiss & Ferguson (2009) as described in Cassarà et al. (2013), where new SEDs of SSPs have been presented and tested.

The models of galaxies are based on a robust evolutionary chemical model that, considering a detailed description of the gas infall, star formation law, IMF and stellar ejecta, provides the total amounts of gas and stars present at any age, with their chemical history (Chiosi 1980; Tantalo et al. 1996
, 1998; Portinari, Chiosi & Bressan 1998; Portinari & Chiosi 1999, 2000; Piovan, Tantalo & Chiosi 2006b). These chemical models are adjusted in order to match the gross properties of galaxies of different morphological type. The interaction between stars and ISM in building up the total SED of a galaxy is described using a suitable spatial distribution of gas and stars. For each type of galaxy, a simple geometrical model is assumed. The following step is to distribute the total gas and star mass provided by the chemical model over the whole volume, using suitable density profiles, according to each component and depending on the galaxy type (pure spheroid, pure disc, and a combination of disc plus bulge). The galaxy is split in suitable volume elements; each elemental volume contains the appropriate amounts of stars, MCs and ISM, and is at the same time a source of radiation (from the stars inside) and an absorber and emitter of radiation (from and to all other elemental volumes and the ISM in between). These elements are the primordial seeds to calculate the global galaxy SED.

The star formation and chemical enrichment of galaxy models
The so-called infall model, developed by Chiosi (1980) and used by many authors (Bressan, Chiosi & Fagotto (1994), Tantalo et al. (1996), Tantalo et al.
(1998), Portinari, Chiosi & Bressan (1998)), characterizes the star formation and chemical enrichment histories of the model galaxies. Originally conceived for disc galaxies, over the years it has been extended also to early-type galaxies (ETGs). In this paragraph the main features of the infall model are presented. Within a halo of arbitrary shape and volume, which contains Dark Matter (DM) of mass MDM, the mass of the Baryonic (luminous) Matter, MBM, evolves with time by infall of primordial gas at a rate proportional to exp(-t/τ), where τ is the infall time scale. The normalization constant M0BM is fixed by requiring that at the present age tG the mass MBM(t) equals MBM(tG). Integrating the infall law yields the baryonic (luminous) asymptotic mass of the galaxy and the time variation of the BM; for more details we refer to Tantalo et al. (1996, 1998).

Early-type galaxies and/or bulges. Applied to an ETG or a bulge, our infall model of chemical evolution can mimic the collapse of the parental proto-galaxy, made of BM and DM in cosmological proportions, from a very extended size to the one we see today. Under self-gravity, both DM and BM collapse and shrink at a suitable rate. As the gas falls and cools down into the common potential well, the gas density increases, so that star formation begins and a central visible object is formed. Sooner or later both components virialize and settle to an equilibrium condition whose geometrical shape is close to a sphere. In this case, the galaxy can be approximated by a sphere of DM with mass MDM and radius RDM, containing inside a luminous, spherical object of mass MBM and radius RBM. As more gas flows in, the more efficient star formation gets. Eventually, the whole gas content is exhausted and turned into stars, thus quenching further star formation. The star formation rate starts small, rises to a maximum, and then declines. Because of the more efficient chemical enrichment of the infall model, the initial metallicity for the bulk of star forming activity is
significantly different from zero. The radius of the stellar component of a galaxy grows with its mass according to the law of virial equilibrium. It must be underlined that the collapse of the proto-galactic cloud cannot be modelled in a realistic dynamical way with a traditional chemical code: to simulate the gas dynamics, other techniques must be used (see for instance Chiosi & Carraro 2002; Springel, Di Matteo & Hernquist 2005; Merlin & Chiosi 2007; Gibson et al. 2007; Merlin et al. 2010, 2012). However, these models require a lot of computational time and do not allow us to quickly explore the parameter space. Therefore, our chemical model is a static one, simulating in a simple fashion the formation of the galaxy by the collapse of primordial gas in presence of DM. In other words, the model galaxy is conceived as a mass point (Chiosi 1980) for which no information about the spatial distribution of stars and gas is available. These latter will be distributed according to suitable prescriptions (see below, where the distribution laws and the normalization of the physical quantities are explained in detail for the different morphological types). Finally, it is worth recalling that the chemical history of spheroidal systems is best described by models in which galactic winds can occur.
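The exponential infall law driving these models can be sketched numerically (a minimal illustration, assuming the standard form of the Chiosi 1980 law; function and variable names are ours, and the normalization follows the requirement stated above that MBM(t) equal MBM(tG) at the present age):

```python
import math

def m_bm(t, t_gal, m_bm_now, tau):
    """Baryonic mass accreted by time t (Gyr) for an infall rate dM/dt proportional to
    exp(-t/tau), normalized so that m_bm(t_gal) = m_bm_now (the present-day mass)."""
    return m_bm_now * (1.0 - math.exp(-t / tau)) / (1.0 - math.exp(-t_gal / tau))

def infall_rate(t, t_gal, m_bm_now, tau):
    """Instantaneous infall rate dM/dt (mass per Gyr), the time derivative of m_bm."""
    return (m_bm_now / tau) * math.exp(-t / tau) / (1.0 - math.exp(-t_gal / tau))

# Illustrative numbers (our assumptions, not the paper's calibration): a 1e11 Msun
# galaxy of age 13 Gyr with a short, bulge-like infall time scale of 0.5 Gyr.
print(m_bm(13.0, 13.0, 1e11, 0.5))        # equals the present-day mass by construction
print(m_bm(2.0, 13.0, 1e11, 0.5) / 1e11)  # fraction of the mass already assembled at 2 Gyr
```

With a short infall time scale, almost all of the baryonic mass is in place within a few τ, which is why the star formation rate in these spheroid models rises quickly and then declines as the gas is consumed.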
Disc galaxies. In the case of pure disc galaxies, or of the disc component of intermediate-type galaxies, it is reasonable to suppose that discs are the result of the accumulation of primordial or partially enriched gas at a suitable rate (as originally suggested by the dynamical studies of Larson 1976; Burkert, Truran & Hensler 1992). If so, the formalism presented above can be extended to model galactic discs, provided we identify the baryonic mass MBM(t) with the surface mass density and consider the disc as made by a number of isolated and independent concentric rings in which the mass grows as a function of time (Portinari & Chiosi 1999, 2000), or as a number of rings in mutual communication thanks to radial flows of gas and dust (Portinari & Chiosi 2000). However, for the purposes of this study the simple one-zone formulation is fully adequate also for disc galaxies: the radial dependence is left aside and the disc is modelled as a dimensionless object, as in the classical paper by Talbot & Arnett (1971). This simple model well reproduces the results of dynamical models (Larson 1976; Burkert, Truran & Hensler 1992; Carraro, Ng & Portinari 1998), with the exception of the radial flows of gas. As for the spheroidal objects above, the spatial distribution of stars and gas in the disc will be introduced by hand later on (see below). The chemical history and dynamical structure of galactic discs are fully compatible with the absence of galactic winds.

Gravitational potential of a spherical system and galactic wind. In order to determine the physical conditions under which galactic winds may occur, we need to evaluate the gravitational potential of spherical systems made of DM and BM with different radial distributions. In the following we adopt the formalism developed by Tantalo et al.
(1996), briefly summarized here for the sake of clarity. The spatial distribution of DM with respect to BM follows the dynamical models of Bertin, Saglia & Stiavelli (1992) and Saglia, Bertin & Stiavelli (1992): the mass and radius of the DM (MDM and RDM) are linked to those of the BM (MBM and RBM). The mass of the dark component is supposed to be constant in time and equal to MDM = βMBM(tG), where MBM(tG) is the asymptotic content of BM of a galaxy and β ≃ 6 is given by the baryon ratio in the ΛCDM Universe we have adopted (Hinshaw et al. 2009). With these assumptions, the binding gravitational energy of the gas is given by Ωg, where Mg(t) is the current value of the gas mass, αBM is a numerical factor ≃ 0.5, and an additional term gives the contribution to the gravitational energy due to the presence of dark matter. Adapting the original assumptions by Bertin, Saglia & Stiavelli (1992) and Saglia, Bertin & Stiavelli (1992) to the present situation, we assume MBM/MDM = 0.16 and RBM/RDM = 0.16. Using these values for MBM/MDM and RBM/RDM, the contribution to the gravitational energy by the DM is Ω′BD = 0.03. Assuming that at each stage of the infall process the amount of luminous mass that has already accumulated is soon virialized and turned into stars, the total gravitational energy and radius of the material already settled onto the equilibrium configuration can be approximated with the relations for the total gravitational energy and radius as functions of the mass (Saito 1979a,b) for elliptical galaxies whose spatial distribution of stars is such that the global luminosity profile follows the R^1/4 law. The relation between RBM(t) and MBM(t) then follows, with η = 1.45 (see Arimoto & Yoshii 1987; Tantalo et al. 1998). If DM and BM are supposed to have the same spatial distribution, Eqs.
(5) and (6) are not needed and the binding energy takes a simpler form.

In spheroidal systems, when the thermal energy of the gas in the galaxy, heated by SN explosions, stellar winds and the UV radiation of massive stars and cooled down by radiative processes, equals or exceeds the gravitational potential energy of the gas, a galactic wind is supposed to occur: star formation is quenched and all the remaining gas is expelled. In contrast, no galactic winds are supposed to occur in disc galaxies.

Basic chemical equations. The complete formalism of the chemical evolution models providing the backbone of the photometric history of galaxies can be found in Tantalo et al. (1996) for a spherical system and in Portinari & Chiosi (2000) for disc galaxies with radial flows (see also Cassarà 2012 for an exhaustive review of the various models in the literature). Here we show only the final equations for the chemical evolution of the ISM, made of dust and gas lumped together in a single component. Indicating with Mg(t) the mass of gas at the time t, the corresponding gas fraction is G(t) = Mg(t)/MBM(tG). Denoting with Xi(t) the mass abundance of the i-th chemical species, we may write Gi(t) = G(t)Xi(t), where by definition Σi Xi = 1. The evolution of the normalized masses Gi and abundances Xi of the i-th element is governed by a system of equations in which the first term at the r.h.s. is the depletion of the ISM by the star formation process that consumes the interstellar matter; the following three terms at the r.h.s.
are the contributions of single stars to the enrichment of the element i; the fifth term is the contribution of the primary star of a binary system; the sixth term is the contribution of type Ia SNe; the following term describes the infall of primordial material; and, finally, the last one takes into account the eventual outflow of matter at the onset of the galactic wind in elliptical galaxies. f(M1) is the distribution function of the mass M1 of the primary star in a binary system, between M1,min = Mb,l/2 and M1,max = Mb,u. RSNI is the rate of type Ia SNe, while ESNI,i is the ejecta of the chemical element i in type Ia SNe. Further details on the calculation of the yields and the adopted formalism can be found in Chiosi & Maeder (1986), Matteucci & Greggio (1986) and Portinari, Chiosi & Bressan (1998). For a complete presentation of the equations of chemical evolution see Cassarà (2012).

Initial Mass Function. In the literature there are several possible laws for the IMF: at least nine according to Cassarà (2012); Cassarà et al. (2013) have derived the SSP photometry in presence of dust for all of them. In this paper, for the sake of simplicity, we adopt the classical Salpeter law, Φ(M) ∝ M^(−x) with x = 2.35, where the proportionality constant is fixed by imposing the fraction ζ of the IMF mass comprised between ≃1 M⊙ (the minimum mass whose lifetime is comparable to the age of the Universe) and the upper limit Mu (≃100 M⊙), i.e. the mass interval effectively contributing to nucleosynthesis.

Star Formation Rate. We adopt the classical law by Schmidt (1959). The SFR, i.e.
the number of stars of mass M born in the time interval dt and mass interval dM, is dN/dt = Ψ(t)Φ(M)dM. The rate of star formation Ψ(t) is the Schmidt (1959) law which, adapted to our model, reads Ψ(t) = νMg(t)^k and is normalized to MBM(tG). The parameters ν and k are crucial: k yields the dependence of the star formation rate on the gas content; current values are k = 1 or k = 2. The factor ν measures the efficiency of the star formation process. In this type of model, because of the competition between gas infall, gas consumption by star formation, and gas ejection by dying stars, the SFR starts very low, grows to a maximum and then declines. The time scale τ of Eq. 1 roughly corresponds to the age at which the star formation activity reaches its peak value.

The chemical models for pure spheroids and/or discs provide the mass of stars M*(t), the mass of gas Mg(t) and the metallicity Z(t) that are used as input for the population synthesis code. In the case of composite galaxies made of a disc and a bulge, the mass of the galaxy is the sum of the two components. The assumption is that disc and bulge evolve independently, and each component has its own M*(t), Mg(t) and Z(t).

Stars and ISM: their spatial distribution
In most EPS models, in particular those that neglect the presence of an ISM in the form of gas and dust, the SED of a galaxy is simply obtained by convolving the SSP spectra with the SFH (Arimoto & Yoshii 1987; Bressan, Chiosi & Fagotto 1994; Tantalo et al.
1998) and no effect of the spatial distribution of the various components (stars and ISM) is considered. For intermediate-type galaxies with a disc and a bulge (and maybe even a halo), the situation is mimicked by considering different SFHs for the various components (Buzzoni 2002, 2005; Piovan, Tantalo & Chiosi 2006b). This simple approach can no longer be used in the presence of an ISM and of the absorption and IR/sub-mm emission of radiation by dust. In particular, the emission requires a spatial description, whereas the sole treatment of the extinction could be simulated by applying a suitable extinction curve. In the general case of dust extinction/emission (Silva et al. 1998; Piovan, Tantalo & Chiosi 2006b) the spatial distribution of the ISM, dust and stars in the galaxy must be specified. The observational evidence is that the spatial distribution of stars and ISM depends on the galaxy morphological type and that it can be reduced to: pure spheroids, pure discs, and composite systems made of a spheroid plus a disc in different proportions (irregulars are neglected here). Therefore one needs a different geometrical description for the various components, to each of which a suitable star formation history is associated. Finally, in the present approach no bursts of star formation are included in the chemical model (see Piovan, Tantalo & Chiosi 2006b, for some examples of star-burst model galaxies). In the following we adopt the same formalism proposed by Piovan, Tantalo & Chiosi (2006b), which is reported here for the sake of completeness and easy understanding by the reader. The formalism is presented in order of increasing complexity.
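Before turning to the geometry, two ingredients of the chemical model above, the ζ-normalization of the Salpeter IMF and the interplay between infall and the Schmidt law, can be sketched numerically. This is a minimal sketch, not the authors' code: gas return by dying stars is neglected, time is assumed to be in Gyr, and ν = 5, τ = 0.3, k = 1, ζ = 0.5 are the values quoted in the parameter section below.

```python
import math

def imf_mass_integral(m1, m2, x=2.35):
    """Integral of M * M**(-x) dM between m1 and m2 (normalization A = 1)."""
    p = 2.0 - x
    return (m2**p - m1**p) / p

def lower_limit_for_zeta(zeta=0.5, m_ref=1.0, m_up=100.0):
    """Bisection for the IMF lower limit M_l such that the mass fraction
    between m_ref (~1 Msun) and m_up (~100 Msun) equals zeta."""
    target = imf_mass_integral(m_ref, m_up) / zeta
    lo, hi = 1e-3, m_ref
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if imf_mass_integral(mid, m_up) > target:
            lo = mid          # integral still too large: raise M_l
        else:
            hi = mid
    return 0.5 * (lo + hi)

def evolve_sfr(nu=5.0, tau=0.3, k=1.0, t_end=13.3, dt=1e-3):
    """Euler integration of dG/dt = A*exp(-t/tau) - nu*G**k with G(0) = 0,
    where G = Mg/MBM(tG) and A normalizes the total infalling mass to 1."""
    a = 1.0 / (tau * (1.0 - math.exp(-t_end / tau)))
    g, t, times, sfr = 0.0, 0.0, [], []
    while t < t_end:
        psi = nu * g**k
        g += dt * (a * math.exp(-t / tau) - psi)
        t += dt
        times.append(t)
        sfr.append(psi)
    return times, sfr

m_l = lower_limit_for_zeta()
times, sfr = evolve_sfr()
t_peak = times[max(range(len(sfr)), key=sfr.__getitem__)]
```

With these parameters the SFR indeed rises to a maximum near the infall time scale τ and then declines, as stated in the text.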
Disc Galaxies
The mass density distributions of stars (ρ*), diffuse ISM (ρISM), and MCs (ρMC) inside a galactic disc are approximated by a double decreasing exponential law. Consider a system of polar coordinates with origin at the galactic center [r, θ, φ]: the height above the equatorial plane is z = r cosθ and the distance from the galactic center along the equatorial plane is R = r sinθ, where θ is the angle between the polar vector r and the z-axis perpendicular to the galactic plane passing through the center. The azimuthal symmetry rules out the angle φ. The density laws for the three components share the same functional form, where "i" can be "*", "ISM" or "MC", i.e. stars, diffuse ISM and molecular clouds. The scale parameters are chosen taking into account the observations for the type of object to model: for the disc of a typical massive galaxy like the Milky Way (MW), the typical assumption is z_d ≃ 0.3−0.4 kpc, and R_d is derived either from observations of the gas and star distribution or from empirical relations (Im et al. 1995: log(R_d/kpc) ∼ −0.2MB − 3.45, where MB is the absolute blue magnitude). Typical values of R_d are around 5 kpc.
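A minimal sketch of the double decreasing exponential law in the polar coordinates defined above (the function name and the value of ρ0 are illustrative):

```python
import math

def rho_disc(r, theta, rho0, z_d, r_d):
    """Double decreasing exponential density law (Eq. 14): z = r*cos(theta)
    is the height above the equatorial plane, R = r*sin(theta) the distance
    from the galactic center in the plane."""
    z = r * math.cos(theta)
    big_r = r * math.sin(theta)
    return rho0 * math.exp(-abs(z) / z_d) * math.exp(-big_r / r_d)
```

With z_d = 0.3 kpc and R_d = 5 kpc (the typical MW values quoted above), the density falls by a factor e after 5 kpc in the plane but after only 0.3 kpc vertically.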
The constants ρ0^i vary with the time step. Indicating with tG the age of the model galaxy, the gaseous components only require the normalization constants ρ0^MC(tG) and ρ0^ISM(tG), since both lose memory of their past history. For the stellar component, ρ0^*(t) is needed over the whole galaxy life 0 < t < tG: the stellar emission is calculated using the mix of stellar populations of any age τ′ = tG − t. The normalization constants are obtained by integrating the density laws over the volume and by imposing that the integrals equal the masses obtained from the chemical model, which fixes the mass Mi(t) of each component. The mass of stars born at the time t is given by Ψ(t): ρ0^*(t) is obtained by using M*(t) = Ψ(t). M_ISM(t) is the result of gas accretion, star formation and gas restitution by dying stars. The current total mass M_MC(tG) is a fraction of M_ISM(t), and the remainder is the gas component Mg(t). The double integral (in r and θ) is solved numerically for ρ0^i(t), to be used in Eq. (14). The galaxy radius R_gal is left as a free parameter of the model.
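For an untruncated disc the normalization integral is analytic, which makes a useful cross-check of the numerical integration mentioned above; a sketch, assuming the double exponential of Eq. (14):

```python
import math

def rho0_disc(m_i, z_d, r_d):
    """Normalization constant of the double exponential law. For an
    untruncated disc the volume integral is analytic:
      M_i = rho0 * [int exp(-|z|/z_d) dz] * [int 2*pi*R*exp(-R/r_d) dR]
          = rho0 * (2*z_d) * (2*pi*r_d**2) = 4*pi*rho0*z_d*r_d**2,
    hence rho0 = M_i / (4*pi*z_d*r_d**2). In the models the integral is
    instead carried out numerically up to the finite radius R_gal."""
    return m_i / (4.0 * math.pi * z_d * r_d**2)
```

A midpoint-rule integration of the density with this ρ0 recovers the input mass to well under one per cent, confirming the closed form.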
The last point is the subdivision of the whole volume of a disc galaxy into a number of sub-volumes. The energy source inside each of these can be approximated by a point source located at its center, and the coordinates [r, θ, φ] are divided into suitable intervals. As for the radial coordinate, nr = 40−60 is a good compromise, securing the overall energy balance among the sub-volumes, speeding up the computation, and yielding numerically accurate results. The radial intervals are obtained by imposing that the mass density of two adjacent sub-volumes scales by a fixed ratio ρj/ρj+1 = ζ, with ζ constant. The grid for the angular coordinate θ is chosen in such a way that the spacing gets thinner approaching the equatorial plane. We split the angle θ going from 0 to π into n_θ sub-values; n_θ must be odd, so that there are (n_θ − 1)/2 sub-angles per quadrant. The angular distance α1 between two adjacent values of the angular grid is chosen following Silva (1999): R_gal subtends a fraction f < 1 of the disc scale height (z_d). The grid for the angular coordinate φ is chosen to be suitably finely spaced near φ = 0 and to get progressively broader moving clockwise and counterclockwise away from φ = 0.
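The constant-density-ratio radial grid has a simple closed form for an exponential midplane density: requiring ρj/ρj+1 = ζ yields equally spaced radii with step R_d ln ζ. A sketch (the function name and the choice of fixing ζ from R_gal and nr are assumptions made for illustration):

```python
import math

def density_ratio_grid(r_gal, r_d, n_r):
    """Radial grid for an exponential midplane density rho ~ exp(-R/r_d):
    requiring a constant ratio rho_j/rho_{j+1} = zeta over n_r points out
    to r_gal gives equally spaced radii with step r_d * ln(zeta)."""
    zeta = math.exp(r_gal / r_d / (n_r - 1))  # density ratio per step
    step = r_d * math.log(zeta)               # = r_gal / (n_r - 1)
    return zeta, [j * step for j in range(n_r)]
```

For the King-like profiles of the spheroids the same prescription must be inverted numerically, since the density is no longer a single exponential.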
Early-type Galaxies and Bulges
The luminosity distribution of ETGs is customarily described by the King law. Following Fioc & Rocca-Volmerange (1997), we use a density profile slightly different from the King law to secure a smooth behavior at the galaxy radius R_gal. The mass density profiles for stars, MCs, and diffuse ISM share the same King-like form, where as usual "i" stands for "*", "ISM" or "MC". r_c^i are the core radii of the distributions of stars, MCs, and diffuse ISM; the exponents γ* and γMC can be set to 1.5 (Piovan, Tantalo & Chiosi 2006b), while γISM is not well known. Froehlich (1982), Witt, Thronson & Capuano (1992) and Wise & Silva (1996) suggest adopting, for elliptical galaxies, γISM ≃ 0.5−0.75; here we consider γISM = 0.75. The density profile has to be truncated at the galactic radius R_gal, a free parameter of the model, to prevent the mass from diverging. The normalization constants ρ0^i(tG) can be found by integrating the density law over the volume and by equating this value of the mass to the corresponding one derived from the global chemical model. The last step is to fix the spacing of the coordinate grid (r, θ, φ). The spherical symmetry simplifies this issue, and the spacing of the radial grid is made keeping in mind the energy conservation constraint. We take a sufficiently large number of grid points, nr ≃ 40−60. The coordinate φ is subdivided into an equally spaced grid with n_φ elements in total and φ1 = 0. For the coordinate θ we adopt the same grid presented for the discs.

Intermediate-type galaxies
Intermediate-type galaxies go from the early S0 and Sa (big bulge) to the late Sc and Sd (small or negligible bulge). Different SFHs for the disc and the bulge can reproduce this behavior. We adopt a system of polar coordinates with origin at the galactic center (r, θ, φ); azimuthal symmetry rules out the coordinate φ. In the disc, the density profiles of the three components are the double decreasing exponential laws of Eq.
(14) with the corresponding scale lengths. In the bulge the three components follow the King-like profiles (Eq. 16), with the core radii referred to the bulge. The SFHs of disc and bulge evolve independently: the total content in stars, MCs and ISM is the sum of the disc and bulge contributions. The composite shape of the galaxy leads to the definition of a new mixed grid sharing the properties of both components. RB is the bulge radius, R_gal the galaxy radius. The radial grid is defined by building two grids of radial coordinates, rB,i and rD,i. The grid of the bulge is finer toward the center of the galaxy: the coordinates rB,i of the bulge grid are used where ri < RB, while for ri > RB the coordinates rD,i of the disc, out to R_gal, are used. The angular coordinate θ follows the same pattern. The azimuthal grid is chosen in the same way for both the disc and the bulge, both having azimuthal symmetry.

The elemental volume grid
Given the geometrical shape of the galaxies, the density distributions of the three main components, and the coordinate grid (r, θ, φ), the galaxy is subdivided into (nr, n_θ, n_φ) small volumes V. Hereinafter the volume V(r_iV, θ_jV, φ_kV) will be indicated as V(i, j, k). The masses of stars, MCs, and diffuse ISM in each volume are derived from the corresponding density laws, neglecting all local gradients in ISM and MCs. The approximation works because the elemental volumes are small.
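The King-like profile of Eq. (16) introduced above can be sketched as follows; the additive offset used to make the density vanish smoothly at R_gal is an assumed form of the Fioc & Rocca-Volmerange (1997) truncation, shown only for illustration:

```python
def rho_spheroid(r, rho0, r_c, gamma, r_gal):
    """King-like profile truncated at the galaxy radius r_gal. Subtracting
    the value of the core term at r_gal (an assumed truncation scheme)
    makes the density go smoothly to zero at the edge instead of being
    cut abruptly."""
    if r >= r_gal:
        return 0.0
    core = (1.0 + (r / r_c) ** 2) ** (-gamma)
    edge = (1.0 + (r_gal / r_c) ** 2) ** (-gamma)
    return rho0 * (core - edge)
```

With γ = 1.5 (the value adopted for stars and MCs) and r_c much smaller than R_gal, the offset is tiny near the center and only matters in the outskirts.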
SYNTHETIC PHOTOMETRY OF A GALAXY
As already said, the light emitted by a galaxy has two main components: the light emitted by individual stars and the light emitted and reprocessed by the ISM. Our model of the SED emitted by galaxies of different morphological types along any direction strictly follows the formalism and results developed by Piovan, Tantalo & Chiosi (2006a,b). In the following we shortly summarize the prescriptions adopted to describe the stellar and ISM contributions. As we are now going to include their results for dusty ISMs in our model galaxies, it is wise to briefly summarize the basic quantities and relationships in use, for the sake of completeness and clarity. The total cross sections of scattering, absorption and extinction are given by integrals over the grain size distributions, where the index p stands for absorption (abs), scattering (sca) or total extinction (ext), the index i identifies the type of grain, amin,i and amax,i are the lower and upper limits of the size distribution of the i-th type of grain, nH is the number density of H atoms, Qp(a, λ) are the dimensionless absorption and scattering coefficients (Draine & Lee 1984; Laor & Draine 1993; Li & Draine 2001) and, finally, dni(a)/da is the size distribution law of the grains (Weingartner & Draine 2001a). Using the above cross sections we calculate the optical depth τp(λ) along a given path, where L is the optical path and all other symbols have their usual meaning. The assumption is that the cross sections remain constant along the optical path.
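Under the stated assumption of constant cross sections, the optical depth and the transmitted fraction reduce to one line each; a sketch with illustrative numbers (the cross-section value used below is not taken from the text):

```python
import math

def optical_depth(sigma_per_h, n_h, length):
    """tau_p(lambda) = sigma_p(lambda) * n_H * L for a path along which the
    cross section per H atom is constant, as assumed in the text.
    sigma_per_h in cm^2 per H atom, n_h in cm^-3, length in cm."""
    return sigma_per_h * n_h * length

def transmitted_fraction(tau):
    """Fraction of radiation surviving extinction along the path."""
    return math.exp(-tau)
```

For example, an assumed cross section of 1e-21 cm^2/H, a density of 1 H atom cm^-3 and a 1 kpc path (about 3.086e21 cm) give τ ≈ 3, i.e. only a few per cent of the light is transmitted.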
j_λ^small, j_λ^big and j_λ^PAH are the contributions to the emission by small grains, big grains and PAHs, respectively. How these quantities are calculated is widely described in Piovan, Tantalo & Chiosi (2006a), to which the reader should refer for more details. The key relationships are the following. The contribution to the emission by very small grains of graphite and silicates involves dP(a)/dT, the temperature distribution from Tmin to Tmax attained by grains of generic size a under an incident radiation field, and the Planck function B_λ(T(a)). Q_abs(a, λ) are the absorption coefficients, dn(a)/da is the Weingartner & Draine (2001a) size distribution law, a_flu is the upper size limit for thermally fluctuating grains, and amin is the lower limit of the distribution. The emission by big grains of graphite and silicates is evaluated assuming that they behave like black bodies in equilibrium with the radiation field, where amax is the upper limit of the distribution and the meaning of the other symbols is the same as in Eq. 19. The emission by PAHs takes the ionization of PAHs into account (Weingartner & Draine 2001b): χi = χi(a) is the fraction of ionized PAHs of size a, S_ION(λ′, λ, a) and S_NEU(λ′, λ, a) give the energy emitted at wavelength λ by a molecule of size a as a consequence of absorbing a single photon of energy hc/λ′, a_PAH^low and a_PAH^high are the limits of the distribution, and I_λ′ is the incident radiation field.

Dust-free and dust-embedded SSPs
In the following we group the SSPs according to whether or not they incorporate the effect of dust during the MC phase of the young massive stars. In all cases, the effect of the self-absorbing envelopes of the AGB phase of intermediate- and low-mass stars is taken into account.
(i) Dust-free SSPs. These SSPs describe the situation in which a generation of stars has already evaporated the parental MC in which it was embedded. This implies that a certain amount of time has already elapsed since the initial star-forming event, so these SSPs no longer need to include the effects of self-obscuration and radiation reprocessing exerted by the parental MC or the local ISM. However, they still include these effects when caused by the dust shells surrounding AGB stars. For these SSPs we consider the recent study by Cassarà et al. (2013), in which the new models of TP-AGB stars by Weiss & Ferguson (2009) have been used. The reader should refer to Cassarà et al. (2013) for all details.

(ii) SSPs embedded in dusty MCs. In the early stages of their evolution we can consider stars as still heavily obscured by their parental dusty MCs. Libraries of these SSPs have been presented in Piovan, Tantalo & Chiosi (2006a) as a function of four parameters: (1) the optical depth τ; (2) the metallicity Z; (3) the PAH ionization state (three possibilities, that is, PAHs ionized with full calculations of the ionization state, neutral PAHs, and an ionization state of the PAHs as in a mixture of Cold Neutral Matter, Warm Neutral Matter and Warm Ionized Matter in the same relative proportions as in the MW); and, finally, (4) the abundance of carbon in very small grains. These libraries are still up-to-date, because no significant changes have been made to the structure and evolution of massive stars in the meantime. Therefore for these SSPs we adopt the library by Piovan, Tantalo & Chiosi (2006a), in which the absorption and emission of the radiation emitted by the young stars embedded in MCs are accurately calculated with the ray-tracing method. The Piovan, Tantalo & Chiosi (2006a) SSPs, however, neglect the effects of self-absorption in AGB stars. Although complete sets of SEDs with all these effects simultaneously taken into account would be desirable, or, in other words, the Piovan, Tantalo &
Chiosi (2006a) SSPs should be folded into those by Cassarà et al. (2013), the above approximation is fully acceptable for a number of reasons: (1) the spectral region affected by AGB stars does not coincide with the spectral region affected by the interaction of young stars with the dust of the MCs; (2) the details of the SEDs caused by the transfer of energy from the FIR to the NIR are found to play a marginal role; (3) even if the self-absorption by the AGB dusty envelopes is included, the effect is found to be marginal unless very high optical depths are chosen for the cloud. To conclude, in the following we adopt the SSPs by Piovan, Tantalo & Chiosi (2006a) up to the evaporation of the parental MC and switch to those by Cassarà et al. (2013) afterwards.

Molecular Clouds and their evaporation
According to the current view of star formation, stars are born and live part of their life inside MCs. As already recalled, the radiation emitted by these stars in the UV region of the spectrum is absorbed by the MCs and re-emitted in the infrared. Therefore the radiation emitted by a galaxy can be severely altered by the presence of MCs. The ideal approach would be to follow the evolution of the MCs as they are gradually consumed by the star-forming process and swept away by the energy input of the underlying stars (SN explosions and stellar winds of massive stars). Such a task goes beyond the aims of the present study and is simplified here to evaluating the time scale of MC evaporation. In the real case the clouds are destroyed on a time scale of the order of 10 Myr on average, the typical lifetime of molecular clouds (Dwek 1998; Zhukovska, Gail & Trieloff 2008). A simple way to simulate this process is to assume that, as time goes on, the SSP flux reprocessed by dust decreases, while the amount of flux left unprocessed increases. The time scale t0 for the evaporation of the MC will depend on the properties of the ISM, the efficiency of the
star formation, and the energy injected by the young stars inside. We expect t0 to be of the same order as the lifetime of massive stars (the age range going from 3 to 50 Myr). In a low-density environment with a moderate rate of star formation, t0 is likely close to the lowest value (the lifetime of the most massive stars of the population; the case of a typical spiral galaxy), while t0 will be close to the upper value in a high-density environment (star-burst galaxies), where heavily obscured star formation can occur in a high-density ISM that is more difficult to destroy. Finally, there is an important effect due to the metallicity; see Piovan, Tantalo & Chiosi (2006a) for more details.

The spectral energy distribution of a galaxy
The total SED emerging from the galaxy is simulated once the main physical components, their spatial distribution, the coordinate system and the grid of elemental volumes are known, and the interaction among stars, dusty ISM and MCs is modelled. A generic volume V′ = V(i′, j′, k′) of the galaxy receives the radiation coming from all other elemental volumes V = V(i, j, k), and the radiation traveling from one volume to another interacts with the ISM comprised between them. The energy is both absorbed and emitted by the ISM under the interaction with the radiation field. Two simplifying hypotheses are adopted: (i) the dust of a generic volume V does not contribute to the radiation field impinging on the volume V′. This is due to the low optical depths of the diffuse ISM in the MIR/FIR: dust cannot effectively absorb a significant amount of the radiation it emits, except in high-density regions (Piovan, Tantalo & Chiosi 2006a). The incoming radiation depends only on stars and MCs. (ii) The radiative transfer from a generic volume V to V′ is calculated by means of an effective optical depth. For all details see Piovan, Tantalo & Chiosi (2006a) and Silva et al. (1998).
The total radiation field incident on V′ is obtained by summing over all volumes: the summations are carried over the whole ranges of i, j, k, and the square of the distance between the volumes V and V′ is averaged over the volume. The effective optical depth τ_eff of Eq. 23 involves an integral that represents the number of H atoms contained in the cylinder between V and V′. The two terms j^MC(λ, V) and j^*(λ, V) are the emission by MCs and stars per unit volume of V(i, j, k), and they are calculated at the center of the volume element. To calculate j^MC(λ, V) and j^*(λ, V), the fraction f_d of the SSP luminosity that is reprocessed by dust and the time scale t0 for this to occur are required; the complementary fraction of the SSP luminosity escapes without interacting with dust. See Piovan, Tantalo & Chiosi (2006a) for a detailed description of the calculation of the monochromatic luminosity of dust-free and dust-enshrouded SSPs; here we just recall, for the sake of clarity, that the emission of stars and MCs per unit volume is expressed through j^*(λ, V) and j^MC(λ, V). Once the incident radiation field J(λ, V′) is known, we can obtain the emission per unit volume from the dusty ISM. The azimuthal and spherical symmetries of the galaxy models become very important here, since they allow us to calculate the dust emission at φ = 0 for all possible values of r and θ on this "galaxy slice". The total radiation field per unit volume emitted by a single element includes the radiation outgoing from a unit volume of the dusty diffuse ISM. The total outgoing emission from the volume V, j_TOT(λ, V) × V, is of course different from volume to volume. The monochromatic luminosity measured by an external observer is calculated considering that the radiation emitted by each elemental volume has to travel across a certain volume of the galaxy itself before reaching the edge, escaping from the galaxy, and being detected.
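The bookkeeping of the dust-reprocessed fraction f_d can be sketched as follows; the linear decline over the evaporation time scale t0 is an assumed toy form used only to illustrate the blending, not the expression of Piovan, Tantalo & Chiosi (2006a):

```python
def f_dust(t, t0):
    """Fraction of an SSP's luminosity still reprocessed by its parental MC
    at SSP age t. A linear decline over the evaporation time scale t0 is
    assumed here for illustration; the escaping fraction is 1 - f_dust."""
    return max(0.0, 1.0 - t / t0)

def ssp_sed(l_embedded, l_free, t, t0):
    """Blend of the dusty (MC-embedded) and dust-free SSP luminosities at
    a given wavelength, weighted by the reprocessed fraction."""
    f = f_dust(t, t0)
    return f * l_embedded + (1.0 - f) * l_free
```

With t0 = 30 Myr (the bulge value adopted later), a 15 Myr old SSP contributes half of its light through the MC-embedded SED and half through the dust-free one.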
The radiation is absorbed and diffused by the ISM along this path, and the external observer sees the galaxy along a direction fixed by the angle Θ (Θ = 0: galaxy seen face-on; Θ = π/2: galaxy seen edge-on). Hence the observed luminosity follows, where τ_eff(λ, V, Θ) is the effective optical depth between V(i, j, k) and the galactic edge along that direction. The detailed description can be found in Piovan, Tantalo & Chiosi (2006b).

THE COMPOSITION OF DUST
To introduce this topic, it is worth giving a quick summary of the dust properties and of the mixture adopted in the models; for a much more extended analysis of this issue and all the details about the calculation of the emission/extinction effects, see Piovan, Tantalo & Chiosi (2006a). The physical properties of the interstellar grains are derived from the dust extinction curves in the UV/optical region of the spectrum and from the emission spectra in the infrared bands (from the near- up to the far-infrared), in different physical environments. From the amount of information that can be obtained by looking at the effects of extinction and emission related to dust grains, it is possible to derive the features useful to constrain and define a quasi-standard model of interstellar dust, made up of three components. The characteristic broad bump of the extinction curve in the UV at 2175 Å and the absorption features at 9.7 µm and 18 µm (Draine 2003) require a two-component model made of graphite and silicates, while a population of very small grains (VSGs) is necessary to reproduce the emission observed by IRAS at 12 µm and 25 µm. VSGs are not exclusively made of silicates (the 10 µm emission feature of silicates is not detected in diffuse clouds; Mattila et al. 1996; Onaka et al. 1996): more likely they are composed of carbonaceous material with broad ranges of shapes, dimensions, and chemical structures (Desert, Boulanger & Shore 1986 and Li & Mayo Greenberg 2002 present a detailed discussion of this topic).
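The attenuated sum over elemental volumes described at the beginning of this section can be sketched as follows (the interfaces are illustrative, not the code of the original model):

```python
import math

def observed_luminosity(volumes, theta_view, tau_eff):
    """Monochromatic luminosity seen by an external observer along Theta:
    the sum over elemental volumes of j_TOT * V, each term attenuated by
    the effective optical depth between the volume and the galactic edge.
    'volumes' is a list of (j_tot, V) pairs and 'tau_eff' a callable
    (volume_index, Theta) -> tau; both are assumed interfaces."""
    return sum(j * v * math.exp(-tau_eff(i, theta_view))
               for i, (j, v) in enumerate(volumes))
```

For a face-on view with negligible optical depth the sum reduces to the intrinsic emission; for an edge-on view the longer paths through the disc suppress it, which is the origin of the inclination dependence of the emerging SED.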
The last contributors to dust are the PAH molecules, which originate the emission features at 3.3, 6.2, 7.7, 8.6, and 11.3 µm, first observed in luminous reflection nebulae, planetary nebulae and HII regions (Sellgren, Werner & Dinerstein 1983; Mathis 1990) and in the diffuse ISM with IRTS (Onaka et al. 1996; Tanaka et al. 1996) and ISO (Mattila et al. 1996). These spectral features are referred to as the aromatic IR bands (AIBs). It appears clear that any realistic model of a dusty ISM, able to explain the UV-optical extinction and the IR emission of galaxies, needs to include at least three components: graphite, silicates, and PAHs. Furthermore, the big grains can be treated as being in thermal equilibrium with the radiation field, while the VSGs can reach temperatures above the mean equilibrium value. The properties of a mixture of grains are obtained once their cross sections, their size distributions, and the kind of interaction with the local radiation field are known. For more information see Piovan, Tantalo & Chiosi (2006a). We only summarize here the models that inspired our three-component dusty ISM, with graphite, silicates and PAHs. The cross sections are from Draine & Lee (1984) for graphite, Laor & Draine (1993) for silicates and Li & Draine (2001) for PAHs, taking the latest releases from the B. T. Draine webpage. The extinction curves and the grain size distributions are taken from Weingartner & Draine (2001a). The emission of graphite and silicates, both for thermally fluctuating VSGs and for big grains in thermal equilibrium, is based upon the classical paper by Guhathakurta & Draine (1989), while for PAHs we adapted Puget, Leger & Boulanger (1985). Finally, the ionization state of PAHs is calculated with the physical models of Draine & Sutin (1987), Bakes & Tielens (1994) and Weingartner & Draine (2001b).
PARAMETERS FOR CHEMICAL AND SPECTRO-PHOTOMETRIC MODELS
We summarize and briefly comment here on the main parameters of the chemical and companion spectro-photometric models, justifying the choices made for each type of model galaxy.

Chemical parameters
- The galactic mass MBM(tG). In the infall models it represents the asymptotic value reached by the baryonic component of a galaxy at the present time, the galaxy age tG. This asymptotic mass is used to normalize the gas and star masses of the galaxies. The procedure is straightforward, and MBM(tG) strictly coincides with the real baryonic mass of the galaxy at the present time in the case of disc galaxies, because their evolution is calculated in the absence of galactic winds. Conversely, in the case of spheroidal systems (bulges and/or ETGs), the occurrence of galactic winds driven by gas heating due to SN explosions and stellar winds requires some cautionary remarks. A galactic wind takes place when the gas thermal energy exceeds the gravitational binding energy of the gas; consequently the gas is expelled and star formation is halted. No subsequent revival of star formation is possible in these models. Therefore the real mass of the baryonic component coincides with the total mass of the stellar populations built up to the wind stage. This value is lower than MBM(tG). Keeping this in mind, also in this case the normalization mass is MBM(tG). For a detailed description of the galactic wind process and its effects on the masses of gas and stars, see Tantalo et al. (1996, 1998) and Cassarà (2012). Given these premises, galaxy models for bulges and discs of different masses have been calculated using the values of MBM(tG) listed in columns (2) and (3) of Table 1. Remember that the real present-day mass of spheroidal systems is somewhat smaller than the listed value, and also model dependent, because of the occurrence of galactic winds that halt star formation and expel a sizable fraction of the initial BM.
- The ratios RBM/RDM and MBM/MDM, which fix the gravitational potential and the effect of dark matter, are both set equal to 0.16.
- The exponent k of the Schmidt (1959) star formation law; all models are calculated using k = 1.
- The efficiency ν of the star formation rate. All bulge models have ν = 5. Even if different values of ν for varying galactic mass might be more appropriate to match real situations (Tantalo et al. 1996, 1998), we keep it constant with varying mass. It is worth noticing that the effect of ν is not expected to be as strong as in Tantalo et al. (1996, 1998), because the mass range we are considering is much narrower than in those studies. The value we adopt can be considered typical of a system with a mass of 10^11 M⊙. The efficiency ν for the discs is significantly lower: all disc models have ν = 0.50 (Portinari, Chiosi & Bressan 1998); however, we adjusted the value to ν = 0.35 in order to match the mean metallicity of a typical disc like that of the Milky Way (Buzzoni 2005). If more complicated star formation laws are used to reproduce the star formation history in disc galaxies (Portinari & Chiosi 2000), the efficiency ν should be adjusted to match the observational value of the mean metallicity.
- The initial mass function (slope and ζ). The slope is kept constant at the classical Salpeter value, whereas the fraction ζ of the IMF containing stars able to enrich the ISM in chemical elements during a Hubble time is ζ = 0.5. As pointed out in Tantalo et al. (1996), this is a good choice in order to obtain models with M/LB ratios in agreement with the observational data, and it is also consistent with the super-solar metallicities suggested by Buzzoni (2005) for the bulges of intermediate-type galaxies. In the case of disc galaxies a lower value of ζ is adopted, i.e. ζ = 0.17. The choice is suggested by the typical mean metallicities in discs, i.e. Z ≃ 0.006−0.008 (Buzzoni 2005).
-The infall time scale τ. Although the time scale of mass accretion is often considered a free parameter of the models, we assume τ = 0.3 for all the models, in order to reduce the number of free parameters and to mimic the results of the NB-TSPH numerical simulations by Merlin et al. (2012) on the time scale of collapse of baryonic matter and formation of the stellar content of an elliptical-like object. The ever-continuing, less intense star formation in disc galaxies suggests τ = 3 (Portinari & Chiosi 1999, 2000).
-Age of galaxies t_G: considering the ΛCDM model of the Universe and a redshift of galaxy formation z = z_form = 20, we get t_G = 13.30 Gyr. All galaxies are supposed to begin their star formation history at the same time, i.e. at the same redshift.

Spectro-Photometric Parameters

Here we summarize and discuss only the most important parameters, distinguishing, as usual, between the bulge component of an intermediate-type galaxy (or the elliptical galaxy) and the disc component (or the disc galaxy):
-Spatial structure: r_c^M is the scale radius of the ISM in the King law (Eq. 14); we assume r_c^M = 0.5 kpc. In general, the gas is made up of molecular clouds with active star formation and a diffuse interstellar medium. Consequently, we need the length scales of MCs, diffuse ISM, and stars, for which we assume the same value r_c^M = 0.5 kpc. For the effects due to variations of the scale radii see Piovan, Tantalo & Chiosi (2006a). The ratio r_c^*/r_c^M allows the distribution of gas and stars according to different scale radii. For the sake of simplicity we assume here that both components have the same spatial distribution (see Chiosi & Carraro 2002; Cassarà 2008, 2012, for more details). Radial and vertical mass distributions in discs: for the parameters of Eq. (16) we assume γ_* = 1.5, γ_MC = 1.5, and γ_gas = 0.75 as appropriate.
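The quoted age t_G = 13.30 Gyr is the time elapsed between z_form = 20 and z = 0 in a flat ΛCDM model. A minimal sketch using the closed-form age-redshift relation is given below; the cosmological parameters (H0 = 70 km/s/Mpc, Ω_m = 0.3, Ω_Λ = 0.7) are illustrative assumptions, since the text does not list them, but they reproduce the quoted value closely.

```python
import math

def age_gyr(z, h0=70.0, om=0.3):
    """Cosmic age at redshift z for a flat LambdaCDM universe,
    using the closed-form solution (Omega_Lambda = 1 - om)."""
    ol = 1.0 - om
    hubble_time = 977.79 / h0                      # 1/H0 in Gyr
    x = math.sqrt(ol / om) * (1.0 + z) ** -1.5
    return (2.0 / 3.0) * hubble_time / math.sqrt(ol) * math.asinh(x)

t_gal = age_gyr(0.0) - age_gyr(20.0)   # time elapsed since z_form = 20
print(round(t_gal, 2))                 # close to the quoted 13.30 Gyr
```

Because the Universe at z = 20 is only ~0.2 Gyr old, t_G is insensitive to the exact formation redshift once z_form is large.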
-"fgasmol". This parameter fixes the amount of molecular gas present in the galaxy, with respect to the total gas, at the age of the peak of star formation. This value is then used to scale proportionally the amount of MCs at the different ages of the evolution of the galaxy. Indeed, we assume that star formation occurs in the cold MCs and therefore they should be dominant in the ISM at the peak of SFR. For elliptical galaxies or bulges it is used only before the onset of the galactic wind, after which the star formation process halts. In ETGs and bulges fgasmol = 0.8. In disc galaxies we assume fgasmol = 0.6, as suggested by Piovan, Tantalo & Chiosi (2006b), who took into account estimates of the masses of H2 and HI/HII obtained from observations of late-type galaxies in the local universe;
-Evaporation time t_0: in bulges and ETGs we adopt t_0 = 30 × 10^6 yr: MCs are embedded in a primordial environment of relatively high density. Consequently a longer time scale is required to dissolve these MCs compared to those in a normal environment such as the solar vicinity. In discs, most likely of lower density, we adopt t_0 = 6 × 10^6 yr. This value is taken from Piovan, Tantalo & Chiosi (2006b). Furthermore, we assume that the evaporation time scale of MCs in disc galaxies of the local universe is the same during the whole evolutionary history, as suggested by their nearly constant star formation rates.
SEDS OF GALAXIES OF DIFFERENT MORPHOLOGICAL TYPES

With the aid of the chemical models calculated for pure bulges and discs, we now build composite galaxy models going from pure bulges (spheroids) to pure discs, passing through a number of composite systems with different combinations of the two components, somehow mimicking the Hubble sequence of galaxy morphological types. The combinations of bulge and disc masses are listed in columns (2) and (3) of Table 1. These values are estimated taking into account the different values of the L_Bulge/L_Tot ratio that are needed to compare theoretical galaxy colours with the observed ones. In all models, the total mass is ∼ 1 × 10^11 M⊙, which according to Buzzoni (2005) is typical of intermediate-type galaxies (made of bulge + disc).

6.1 SEDs at the age t_Gal = 13.30 Gyr

In this section we present theoretical SEDs for galaxies of various morphological types, taking into account the contribution of the different physical components to the whole galaxy emission. The analyzed age is the final age of the models, that is, t = t_Gal = 13.30 Gyr, calculated considering a redshift of formation z_form = 20 in the current cosmological framework.

In the panels of Fig. 1 we show the SEDs of galaxies with different bulge-to-disc ratios at the same age of 13.30 Gyr. The morphological classification of the models has been made looking at the theoretical [B − V] and [U − B] colours (see below). In more detail: the left panel of Fig.
1 shows the SED of a pure elliptical galaxy of 10^11 M⊙ (black solid line). We represent also the emission of both graphite and silicate grains (red dot-dashed line), the emission of PAHs (black dotted line), and the old stellar population whose light is dimmed by the parent MCs (red solid line). The middle panel displays the SED of the model Sbc-Sab galaxy, with M_Bulge = 0.351 × 10^11 M⊙ and M_Disc = 0.736 × 10^11 M⊙. The luminosity of the same physical components as in the left panel is represented; we can observe the contribution of the emission of MCs (blue dashed line) due to the star formation still active in the disc. This effect cannot be appreciated in the case of an elliptical galaxy, since the galactic wind swept away the ISM and hence stopped star formation. Together with the global EPS with the contribution of dust, we observe the emission of graphite and silicate grains, the emission of PAHs, the old stellar population extinguished only by the effect of the MCs (and not by the extinction of the diffuse interstellar dust) and, finally, the emission of MCs. Finally, the right panel gives the SED of a pure disc galaxy of 10^11 M⊙ with the contribution of the different physical components to the whole galaxy emission.
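The bulge + disc composition described above can be sketched as a wavelength-by-wavelength sum of the two component SEDs, each computed for its own mass on a common wavelength grid. This is an illustrative sketch of the bookkeeping only, not the actual EPS code; the function and array names, and the toy numbers, are assumptions.

```python
import numpy as np

def composite_sed(sed_bulge, sed_disc):
    """Total SED of a composite (bulge + disc) model: the two component
    SEDs, each already computed for its own mass and sampled on the same
    wavelength grid, are summed wavelength by wavelength."""
    return np.asarray(sed_bulge, dtype=float) + np.asarray(sed_disc, dtype=float)

# Toy component SEDs on a common 5-point wavelength grid (arbitrary units)
lam = np.logspace(-1, 3, 5)                   # microns, 0.1 to 1000
sed_b = np.array([3.0, 2.0, 1.0, 0.5, 0.2])   # bulge-like: blue/NIR dominated
sed_d = np.array([1.0, 1.5, 2.0, 2.5, 3.0])   # disc-like: rising into the FIR
total = composite_sed(sed_b, sed_d)           # element-wise sums: 4.0, 3.5, 3.0, 3.0, 3.2
```

Because the sum is linear, the derived broadband colours of the composite depend directly on the adopted bulge and disc masses, which is how the L_Bulge/L_Tot ratios of the different morphological types are tuned.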
It is evident that the shape of the SEDs of the various components gradually changes passing from the elliptical model to the disc model: few differences can be observed comparing the two intermediate galaxy types. These considerations hold for the models at 13.30 Gyr (z = 0):
-For the elliptical model: the global emission (black solid line) shows a peak in the UV (thus reproducing the ultraviolet excess observed in elliptical galaxies) and a weak IR peak, which is clearly due to the poor amount of dust grains in the diffuse medium (there is no contribution of the molecular clouds to the emission in the IR region, since star formation has stopped). The extinction effect is weak, since at the final age of the galaxy only small amounts of gas and dust are present.
-For the disc model: both the environments with the presence of dust, namely the diffuse interstellar medium and the star forming regions, contribute to the IR luminosity in significant amount, and both play a role in the extinction of the UV/optical radiation;
-For the intermediate types (disc plus bulge): the emission of the different components is quite similar to that of the disc model. However, in the UV region, the SED in which the stellar population is extinguished only by the effect of the MCs (red lines, taking into account only the obscuration of young stars) and the total emission (black lines) are in practice indistinguishable for λ ≲ 0.5 µm, while for the disc model they are clearly separate (right panel of Fig. 1). This is ultimately due to the smaller amount of gas still present in the intermediate model compared to the disc-only model. In the former case the galactic wind has pushed out all the bulge gas, whereas in the disc the galactic wind never occurs.
The contribution of the PAHs, silicate and graphite grains grows going from the elliptical model to the disc model; this effect is due, as already pointed out, to the small amount of dust and gas present in the elliptical galaxy at the final age, whereas in disc galaxies star formation continues until the present age. As a consequence, at any age the spectro-photometric models will contain all the physical components, i.e. newly born stars still embedded in their parental molecular clouds, stars of various ages and metallicities free from molecular clouds and, finally, a substantial contribution of the diffuse ISM.

6.2 Disc galaxy: the effect of inclination

Disc galaxies can be observed along different inclinations toward the observer. In our models four inclinations are considered and, obviously, we expect the final SED to be different according to the viewing angle, going from face-on to edge-on galaxies. The angles under consideration are: 0 (face-on), π/6 with respect to the z-axis perpendicular to the equatorial plane, π/3 with respect to the z-axis, and π/2 (edge-on). Unlike the models of elliptical galaxies, which are nearly spherically symmetric, the luminosity emitted from an edge-on spiral galaxy will be heavily absorbed by the equatorial dust lane between the stars and the observer, and will present a pronounced peak in the IR region of the spectrum. The same galaxy seen face-on will show a less intense FIR emission and a more intense emission in the UV/optical region compared to the edge-on model. Fig.
2 shows the total emission of the model disc galaxy of 10^11 M⊙ for different inclinations. The SEDs for this galaxy show, as expected, opposite trends of the IR emission and of the extinction in the UV-optical region. For λ ≳ 150 µm, the edge-on emission of the dust is greater than the emission at all the other inclinations. The opposite holds in the UV-optical region: as expected, for the edge-on galaxy the emission is lower than for the other inclinations. It is clear that every time we consider SEDs, colours or magnitudes of dust-rich galaxies that are not spherically symmetric, thus introducing the dependence on the viewing angle, the results are significantly different depending on the angle. For the same model they span a range of possible SEDs and magnitudes.

6.3 Evolutionary models of different ages

While in Sect. 6.1 we analyzed the SEDs of galaxies at the age of 13.30 Gyr (present-day age), now we examine how the SEDs of the same models vary along their evolutionary history. The situation is illustrated in the various panels of Fig. 3 (upper panel: elliptical galaxy; middle panel: E-S0 galaxy; lower panel: S0 galaxy) and Fig.
4 (upper panel: Sab galaxy; middle panel: Sab-Sbc galaxy; lower panel: disc galaxy (Sd)), which show how the total EPS emission of the models changes with age. The grouping of the models follows the morphological type, and all galaxies are supposed to start their SFH at redshift z_form = 20. Looking at the SEDs displayed in the various panels we can make the following remarks: for the pure elliptical model at the age of t = 0.085 Gyr, the emission is strongly concentrated in the IR region of the spectrum: the large amount of dust present in the galaxy at this age absorbs the radiation emitted by the young stars in the UV/optical region, and re-emits it in the IR. During this early stage, star formation occurs in a medium highly obscured by dust, and the young stars play the dominant role in the total SED. Immediately after the onset of the galactic wind (which is supposed to occur simultaneously and instantaneously for the entire gas content of the galaxy), the gas is swept away and star formation is halted. One can assume that the star formation is virtually complete when t = t_gw. We can therefore explain the SEDs of the elliptical galaxy at the ages 2.55, 5.66 and 13.09 Gyr: they represent the aging of a stellar population becoming older and older, with a small diffuse gas and dust content. The diffuse medium absorbs the stellar radiation only in small amounts, while the majority of the emission is due to cool stars in the NIR region.
For the S0 models, the presence of a small disc component allows an ever-continuing star formation. It follows that, even if at t = 0.085 Gyr the SED is quite similar to that of the elliptical galaxy (dust-dominated emission), at older ages the SED is very different and the diffuse ISM significantly contributes to the MIR/FIR emission. It is also interesting to note that the PAH features appear only after a significant enrichment in metals: this is due to the choice of the Weingartner & Draine (2001a) extinction curves. At low metallicity we adopt their flat SMC curve, with a poor or negligible contribution of the PAHs. Finally, as expected, the UV emission is much stronger than in the models for elliptical galaxies; this is simply due to the small disc-like component, in which star formation never stops. The disc component could be replaced by a bulge-like component with stellar ages spanning a much broader interval than in the case of a pure spheroidal galaxy.

In the Sab, Sab-Sbc and Sd (disc) models, the disc mass, the amounts of dust and their effects in turn grow with the morphological type. The total emission increases with time, reaches a maximum at the peak of the star formation (both in the optical region and in the IR) and then decreases with the decrease of the star formation rate, according to the typical SFH adopted for discs (Piovan, Tantalo & Chiosi 2006b; Cassarà 2008). As already pointed out, the star formation rate does not fall sharply as in the case of elliptical galaxies: it reaches a peak and then slowly declines, continuing however up to the present age. It is worth noticing that at t = 0.085 Gyr the SED clearly splits into two peaks (FIR and UV); with increasing age, this trend is smoothed out as intermediate-age and old stars contribute significantly to the 1 µm emission. For the late-type models this is even more evident than for the S0 one. The PAH features in the SED correlate with the metallicity: the low-Z extinction
introduced in our models and based on the SMC extinction curve produces young (high-z) galaxies with rather weak PAH features.

Finally, for the full disc model, there is no phase during the early evolutionary stages in which the galaxy SED is dominated by the FIR emission. As a matter of fact, in this case we miss the strong and heavily obscured burst of star formation in the bulge. Along the whole evolution a more regular process of star formation unfolds.

THEORETICAL AND OBSERVATIONAL COLOURS OF GALAXIES

In this section we examine the theoretical colours obtained from the SEDs of galaxies of different morphological types and compare them with some observational data available in the literature. Buzzoni (2005) presents a set of EPS models for template galaxies along the Hubble morphological sequence. These models account for the individual evolution of bulge, disc and halo and provide basic morphological features, along with the bolometric luminosity and colour evolution, between 1 and 15 Gyr. The integrated colours and the morphological type are tightly related: this is due to the relative contribution of the stellar populations in the bulge and disc (Arimoto & Jablonka 1991).

The Buzzoni (2005) models deal with the stellar component, which is obviously the dominant contributor to the galaxy luminosity in the UV-optical region. The ISM gas has more selective effects on the SED, by enhancing monochromatic emission, e.g. the Balmer lines. As far as galaxy broadband colours are concerned, at the present age the gas influence is negligible. Internal dust could play the dominant role, especially at short wavelengths (λ ≲ 3000 Å). Metallicity and stellar birth rate are constrained by comparing theoretical results with observational data. For all other details see Buzzoni (2005).
Our models differ from those of Buzzoni (2005) in several aspects, among which we recall: first of all, our models do not consider the contribution of the halo. However, this should have a marginal effect, because the halo plays a secondary role in the mass and luminosity budget. More relevant here, our models consider the contribution of dust. In any case, we also consider the case of a dust-free ISM and hence dust-free galaxy emission (i.e. SEDs due only to stars), in order to compare the new dusty SEDs with those with no dust (Bressan, Chiosi & Fagotto 1994; Buzzoni 2002, 2005). Furthermore, instead of assuming a simple prescription for the star formation law as in Buzzoni (2005), we follow the history of a galaxy with the aid of a complex model that puts together suitable prescriptions for gas infall, the star formation rate, the initial mass function, and the stellar ejecta, all of which determine the total amounts of gas and stars.
As already mentioned, the chemical parameters for the galaxy models are chosen in such a way that the observational values of the ratio L_Bulge/L_Tot are reproduced (see below for more details). In contrast, for each Hubble type, Buzzoni (2005) calibrated the morphological parameter S/T = L(spheroid)/L(tot). As the S/T calibration does not vary in the infrared range, he chose the I luminosity as a reference for the model setup. In our simulations, the S/T ratio cannot be determined a priori; indeed, we start from the SED of a certain model galaxy, of which we know in advance the asymptotic infall mass. The SED is fed to the photometric code, which calculates colours and magnitudes in different photometric systems. In our case the disc and bulge luminosities are used a posteriori to get clues about the asymptotic mass of the galaxy that should be used as input. Carefully tuning this procedure, one may eventually obtain the correct initial values of the disc and bulge mass consistent with the S/T ratios of the different morphological types. Once the metallicity (and hence ζ, τ, and ν) is assumed, there is an almost linear relation between the bulge mass (the mass of the disc follows from M_disc = 10^11 − M_bulge, in M⊙, because all the intermediate types have the same total mass of 10^11 M⊙) and the luminosity.

As explained in Buzzoni (2005), observations of

Figure 7. As in Fig. 5, but here the metallicity of the models is not fixed as in Fig. 5 (see the text for more details about this point).

In order to account for the prescriptions of Buzzoni (2005), two different sets of galactic models are calculated; their parameters are listed and discussed in Sect.
5. The only difference between the sets concerns the final metallicity. Our chemical code does not allow us (unless we force the input parameters to extreme values) to reach a super-solar metallicity for the bulge, in particular at the lowest galaxy masses. For realistic values of the parameters, our bulges reach solar or slightly super-solar metallicities. Also for the disc, we cannot easily reach the low value suggested by Buzzoni (2005): the metallicity of our discs tends to be slightly higher. The fundamental parameter to vary is ζ. Metallicities slightly super-solar (for the bulge) and slightly lower than the average LMC value (for the disc) are the best values that can be obtained for plausible values of ζ in the Salpeter IMF, in model galaxies with total asymptotic mass of 10^11 M⊙. We keep these values in order to maintain the general property that in any case the bulge (Z ≳ 0.02) is more metal-rich than the disc (Z ≲ 0.008). In order to evaluate the effect due to different values of the metallicity of the two galaxy components, disc and bulge, we calculate two different sets of models:
-The first set relies on the parameters already discussed in Sect. 5. In this case the galactic code, performing the EPS, will interpolate between the SEDs of SSPs, taking for each of them the metallicity predicted by the chemical code at the time when that stellar population was born.
-The second set relies on the same parameters, but in this case we force the galactic code to adopt the same metallicities used by Buzzoni (2005) for the disc and bulge. In this case there is no interpolation in metallicity on the SEDs of SSPs, because it is fixed for both disc and bulge.

The results of our simulations, together with the observed colours, are shown in Figs. 5, 6 and 7. They represent the galaxy colour distribution, that is: [B − V] vs. the bolometric morphological parameter S/T = L_Bulge/L_Tot (upper panel) and [U − B] vs.
the bolometric morphological parameter S/T = L_Bulge/L_Tot (lower panel). Data are taken from Pence (1976) (magenta circles), Gavazzi, Boselli & Kennicutt (1991) (blue squares), Roberts & Haynes (1994) (red stars) and Buta et al. (1994) (black triangles). The red star located at ∼ S/T = 1 is the mean colour for elliptical galaxies (Buzzoni 1995). All galaxies have been corrected for reddening by the respective authors. In Fig. 5, our theoretical colours are indicated with blue diamonds and red triangles: they have been calculated, respectively, considering the EPS with dust and the EPS corrected for the extinction of dust, namely the classical bare EPS. These colours are obtained by fixing the metallicities of the disc and bulge. In Fig. 7, our theoretical colours are indicated with black stars and red diamonds: once more, they have been calculated considering the EPS with dust and the EPS with no dust. For the models of Fig. 5 we only fix the chemical parameters (see Sect. 5), thus leaving the spectro-photometric code free to interpolate in metallicity, according to the input values provided by the chemical simulations. For the models of Fig. 7, we force the galactic code to adopt the metallicities of the disc and bulge according to Buzzoni (2005). In this case there is no interpolation in metallicity on the SEDs of SSPs. Finally, in Fig. 6, only the theoretical colours generated by the SEDs of the EPS with dust are plotted, but taking into account the effect of the viewing angle.
The agreement of our simulations with the data is good and the following considerations can be made:
-As expected, in all the figures the theoretical colours best fitting the extinction-corrected data are the dust-free ones: between the classical EPS and the EPS with dust there is a difference of ∼ 0.2 in both colours. This difference can be easily explained, since the colours in the plots are all in the optical region, and thus all affected by dust absorption, with more extinction in the bands at shorter wavelengths. A stronger difference would be observed in optical to near-IR colours, since the near-IR radiation is less absorbed by dust.
-The effect of dust is more evident in the late-type galaxies, richer in gas and dust. For the models of ETGs, at the present age (all the models have been calculated from z = z_form to z = 0, that is t_G = 13.30 Gyr) only a small amount of dust is still present (see for instance the discussion in Sect. 6.1). Colours obtained using the EPS with or without dust are in practice indistinguishable, while the differences increase going from early-type toward late-type galaxies.
-The effect of the metallicity is evident but not so remarkable: the same trend is followed by the theoretical colours in both cases, with fixed and free metallicity (Figs. 5 and 7). This suggests that our models reproduce the observations well.
-The effect of the inclination of the disc strictly follows the discussion of Sect. 6.2: the absorption due to dust is more pronounced when the galaxies are observed edge-on.
COSMOLOGICAL EVOLUTION OF GALAXY COLOURS

In this section we present the photometric evolution of our model galaxies as a function of redshift in the ΛCDM Universe we have adopted, and we compare the theoretical magnitudes and colours with some available observational data. For a source observed at redshift z, the relation between photons observed at a wavelength λ0 and emitted at a wavelength λe is λe = λ0/(1 + z). Furthermore, if the source has an apparent magnitude m when observed in a certain photometric pass-band, its absolute magnitude M in the rest-frame pass-band satisfies the relation

m = M + DM + Kcorr,

where DM is the distance modulus and Kcorr is the so-called K-correction. The distance modulus is defined by

DM = 5 log10 [DL(z) / 10 pc],

where DL(z) is the luminosity distance and 1 pc = 3.086 × 10^18 cm. The luminosity of a source at redshift z is related to its spectral density flux (energy per unit time per unit area per unit wavelength) by

L(λe) = 4π DL(z)^2 (1 + z) f(λ0),

where f(λ0) is the monochromatic flux of the source at the observer. The K-correction is defined as

Kcorr = 2.5 log10(1 + z) + 2.5 log10 [ ∫ f(λ) S(λ) dλ / ∫ f(λ/(1 + z)) S(λ) dλ ],

where S(λ) is the response function of the pass-band. In order to compare sources at different redshifts, we must convert the apparent photometric data (magnitudes, etc.)
to rest-frame quantities by applying the K-corrections, and also correct the rest-frame quantities for the expected evolutionary changes during the time interval corresponding to the redshift difference, the so-called evolutionary correction E(z). The K- and E-corrections are usually derived from the theoretical SEDs calculated with the stellar EPS technique. Given the above definitions, K(z) and E(z) can be expressed as magnitude differences in the following way:

K(z) = M(z, t0) − M(0, t0),
E(z) = M(z, tz) − M(z, t0),

where M(0, t0) is the absolute magnitude in a pass-band derived from the rest-frame spectrum of the source at the current time, M(z, t0) is the absolute magnitude derived from the spectrum of the source at the current time but redshifted at z, and M(z, tz) is the absolute magnitude obtained from the spectrum of the source at time tz and redshifted at z. To summarize, the absolute magnitude M(z) in some broadband filter at redshift z and its apparent magnitude m(z) are expressed by

M(z) = M(0, t0) + K(z) + E(z)

and, passing to apparent magnitudes,

m(z) = M(z) + DM.

The relation between the cosmic time t and the redshift z, t(z), for a stellar population formed at a given initial redshift z_f, depends on the adopted cosmological model of the Universe (and its parameters).

In the next section we will compare the SEDs of our model galaxies with the luminosities of galaxies from the Takeuchi et al. (2010) database, kindly provided to us by Takeuchi (2012, private communication). To this aim, it is useful to recall here the procedure to get the luminosities back from the apparent AB, ST and Vega magnitudes. The luminosity in a pass-band satisfies the relations

L = 4π DL(z)^2 fν ∆ν0, with fν = 10^(−0.4 (mAB + 48.60)) erg s^−1 cm^−2 Hz^−1,

where ∆ν0 and ∆λ0 are the integrals of the filter over the pass-band. Similar equations hold for the Vega and ST systems, provided that the corresponding photometric constants are used and that the monochromatic flux for the ST and Vega systems is expressed per Angstrom. The Takeuchi et al.
(2010) database yields monochromatic fluxes normalized to the pivotal wavelength, already corrected for E(z) and K(z).

Figure 8. The sample of galaxies represented is taken from the catalogue of galaxies observed in the COSMOS survey and selected in Tantalo et al. (2010). The total sample of galaxies is represented in orange, while the Early Type Galaxies are represented in yellow. Superimposed, the evolution of the colour [B_J − r^+] for three models presented in this work or ad hoc calculated for this redshift evolution, namely: (1) two elliptical galaxies with masses 10^10 M⊙ and 10^12 M⊙ and with the same choice of the input parameters as in Sect. 6.

Comparison with the observations

Deep and large-scale surveys, from the ground and from space, nowadays allow us to obtain extremely rich samples of data at different redshifts and in different wavelengths, from the UV to the FIR. The main characteristic of these deep photometric surveys detecting a large number of galaxies is that a significant fraction of the detected objects appear as point sources. They can neither be easily distinguished from single stars, nor easily classified from a morphological point of view. It follows that the photometric study of their properties is crucial, also in order to produce some morphological classification.

In this paper, we take into account the Cosmic Evolution Survey (COSMOS) official photometric redshift catalogue (Scoville et al. 2007), designed to probe the evolution of galaxies in the context of their large-scale structure out to moderate redshift (see also Capak et al. 2007; Mobasher et al. 2007). Tantalo et al. (2010) selected an extended sample of ETGs in the COSMOS catalogue: the morphological selection is made with an automatic pipeline able to separate the objects by means of the two-dimensional distribution of their light. The panels of Fig.
8 show the evolution with redshift of the COSMOS colours [B_J − r^+] (left panel) and [K_S − r^+] (right panel) for our model galaxies of different morphological types. We represent three cases: (i) a pure spheroidal system, for which two masses, 10^10 M⊙ and 10^12 M⊙, are considered (the selected sample is indeed made of ETGs); (ii) a galaxy of intermediate type Sab, characterized by a disc and a bulge, with total mass 10^11 M⊙; (iii) finally, a pure disc galaxy with no bulge (representing an Sd type). For each galaxy type, we also consider two cases, i.e. with (solid lines) and without (dashed lines) dust in the derivation of their SEDs. The colours [B_J − r^+] (left panels) and [K_S − r^+] (right panels) of the galaxies of the COSMOS sample are also shown. The ETGs are identified by the parameter T_phot ≤ 1.1 (see Tantalo et al. 2010, for more details). The orange diamonds show the total sample of galaxies, whereas the sample of ETGs selected by Tantalo et al. (2010) is represented by the yellow diamonds. For the pure spheroidal galaxy the agreement with the data is good, at least up to redshift z ∼ 1 − 1.5. In the redshift interval 0 < z < 1, where most of the ETGs are concentrated, the colours are in better agreement with the observational data. In our simulations, the galactic wind abruptly stops the process of star formation, so that the galaxy evolves almost passively from the redshift of the wind, z = z_wind, to the present, z = 0.
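The distance modulus, AB-magnitude flux, and luminosity relations defined in the previous section can be sketched numerically as follows. The AB zero point (48.60) and the unit conversions are standard values consistent with the text (1 pc = 3.086 × 10^18 cm); the function names are illustrative, not part of the actual photometric code.

```python
import math

def distance_modulus(dl_mpc):
    """DM = 5 log10(D_L / 10 pc), with D_L given in Mpc."""
    return 5.0 * math.log10(dl_mpc * 1.0e6 / 10.0)

def fnu_from_ab(m_ab):
    """Monochromatic flux density (erg s^-1 cm^-2 Hz^-1) from an AB
    magnitude, using m_AB = -2.5 log10(f_nu) - 48.60."""
    return 10.0 ** (-0.4 * (m_ab + 48.60))

def luminosity_nu(fnu, dl_mpc):
    """L_nu = 4 pi D_L^2 f_nu, with 1 Mpc = 3.086e24 cm (i.e. the
    1 pc = 3.086e18 cm adopted in the text)."""
    dl_cm = dl_mpc * 3.086e24
    return 4.0 * math.pi * dl_cm ** 2 * fnu

print(distance_modulus(100.0))   # a source at D_L = 100 Mpc -> DM = 35.0
```

Chaining the last two functions recovers a luminosity from an observed AB magnitude, which is the procedure used to place the Takeuchi et al. (2010) fluxes on the luminosity scale of the model SEDs.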
We can notice how our colours, in particular for the most massive galaxy of 10^12 M⊙, extend toward the region of the yellow points representing ETGs. This interval, however, is delicate for our models, because in the chemical simulations supporting the EPS code the galactic wind starting at z ∼ 3 is an instantaneous process emptying the galaxy of gas; a more gradual process, as we expect to happen in real galaxies, would be more suitable, allowing us to avoid fluctuations in the calculated colours due to the discontinuity in the evolution of the gas mass. For redshifts higher than z ∼ 3, roughly corresponding to the onset of the galactic wind, we have no data to test the agreement between observations and theoretical colours.

A comparison of the monochromatic luminosities ν L(ν) of 3 models with asymptotic mass 10^12 M⊙ with a sample of galaxies of various morphological types and masses by Takeuchi et al. (2010). We represent an elliptical galaxy (black lines), an intermediate-type galaxy (blue lines) and a disc galaxy (red lines). Solid lines represent the edge-on model, more affected by the ISM extinction, while dashed lines represent the face-on model. The pass-bands on display are the J, H and K bands of 2MASS, the FUV and NUV of GALEX, and u of SDSS.

We can notice, however, the effect of the
dust, by comparing the dashed (without dust) and solid lines (with dust). Dust absorbs the stellar radiation more strongly in the B_J band than in r^+. Both magnitudes grow but, since the B_J band is more absorbed, the colour becomes redder. Finally, we briefly comment on the colours of the galaxies of intermediate type and of the pure disc. The results for COSMOS are quite interesting: the colours tend to stay in the region occupied by the orange points, exactly where there are no ETGs. In particular, for [B_J − r^+] the result is good, with a clearly different path in the colour-redshift plane followed by the different morphological types. In the [K_S − r^+]-redshift plane, again the models of disc galaxies tend to populate the region of the orange diamonds, whereas those of the intermediate types fall in between the two extreme cases.

Galaxy luminosities

To conclude this section, in Figs. 9 and 10 we present a simple comparison of the luminosities of our models with the data for 607 galaxies of various morphological types by Takeuchi et al. (2010), observed in different photometric systems. Of course, this sample contains objects spanning wide ranges of masses and morphological types, so that much finer grids of theoretical models would be required. This is beyond the purposes of this study and we leave it to future work. For now, we limit ourselves to simply checking that our models are consistent with the luminosity range indicated by the observations. To this aim, we plot the evolution of the monochromatic luminosity of our models for three massive galaxies (elliptical, intermediate and disc) of about 10^12 M⊙. Since the redshift range spanned by the data (from z = 0 to z = 0.16) is rather small, we do not expect our models to evolve significantly in luminosity. This is what we see in Figs.
9 and 10. However, on average the models fall in the range of the observations in all pass-bands, with some dispersion due to the different inclinations of the disc. This effect is particularly relevant for the UV luminosities. For Akari, since dust does not absorb its own radiation, there is no difference between inclinations and the two lines, solid and dashed, coincide. As expected, the elliptical galaxy model presents a low luminosity due to its low dust content, whereas the dust-rich morphological types agree better with the observations.

DISCUSSION AND CONCLUSION

In this paper, improving upon the standard EPS technique, we have developed theoretical SEDs of galaxies whose morphology goes from disc to spherical structures, in the presence of dust in the ISM. Properly accounting for the effects of dust on the SED of a galaxy increases the complexity of the problem with respect to the standard EPS theory, because it is necessary to consider the distribution of the energy sources (the stars) inside the ISM absorbing and re-emitting the stellar flux. This means that the geometry and morphological type of the galaxy become important and unavoidable ingredients of the whole problem, together with the transfer of radiation from one region to another. The emergent

above the equatorial plane. Stars and star-forming MCs have, as a first approximation, the same distribution: R_d^* = R_d^MC and z_d^* = z_d^MC.

3.1 The diffuse ISM: extinction and emission

Piovan, Tantalo & Chiosi (2006a), Piovan et al. (2011a), Piovan et al. (2011b), and Piovan et al.
(2011c) presented detailed studies of the extinction and emission properties of dusty ISMs. They took into account three dust components, graphite, silicates and PAHs, and reached an excellent overall agreement between theory and observational data for the extinction and emission of the ISM in the MW, the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC).

Figure 1. Left Panel: SED of the model elliptical galaxy of M = 10^11 M⊙ (black solid line). We also show the emission of both graphite and silicate grains (red dot-dashed line), the emission of PAHs (black dotted line), and the SED where only the extinction effect of the MCs is included (red solid line). Middle Panel: the same as in the left panel, but for a Sbc-Sab galaxy of M_tot = 10^11 M⊙. The meaning of the lines is the same as in the left panel. The blue dashed line highlights the contribution of the emission of MCs. Right Panel: the same as the left and middle panels, but for a pure disc galaxy of M = 10^11 M⊙. The meaning of the lines is always the same as before.

Figure 2. SED of the model disc galaxies of M = 10^11 M⊙ for different viewing angles.

Figure 3. Time evolution of the SED of modelled galaxies of early morphological types - upper panel: elliptical; middle panel: E-S0; lower panel: S0 - of M = 10^11 M⊙ for four significant ages, as the legend indicates.

Figure 4. Time evolution of the SED of modelled galaxies of late morphological types - upper panel: Sab; middle panel: Sab-Sbc; lower panel: Sd-Irr (disc) - of M = 10^11 M⊙ for four significant ages, as the legend indicates.

Figure 5. Galaxy colour distribution (upper panel: [B − V] vs. bolometric morphological parameter S/T = L_Bulge/L_Tot; lower panel: [U − B] vs. bolometric morphological parameter S/T = L_Bulge/L_Tot). Data are from Pence (1976) (magenta circles), Gavazzi, Boselli & Kennicutt (1991) (blue squares), Roberts & Haynes (1994) (red stars) and Buta et al.
(1994) (black triangles). All the data have been properly corrected for dust extinction. The red star located at S/T ∼ 1 represents the mean colour for EGs (Buzzoni 1995). Our theoretical colours are indicated with blue diamonds and red triangles: they have been calculated, respectively, by means of the EPS with dust and the EPS corrected for the contribution of dust.

Figure 6. The same as in Fig. 5, but now only theoretical colours for the EPS with dust are shown: the different symbols and colours indicate different viewing angles. Black triangles: galaxy seen edge-on; blue squares: galaxy observed at an angle of 60° measured with respect to the galactic equatorial plane; red diamonds: galaxy observed at an angle of 30° measured with respect to the galactic equatorial plane; green stars: galaxy seen face-on.

Figure 8. Left Panel: cosmological evolution with redshift of the [BJ - r+] colour for the COSMOS survey (both filters are pass-bands of the Subaru telescope). The sample of galaxies represented is taken from the catalogue of galaxies observed in the COSMOS survey and selected in Tantalo et al. (2010). The total sample of galaxies is represented in orange, while the Early Type Galaxies are represented in yellow. Superimposed is the evolution of the colour [BJ - r+] for the models presented in this work or calculated ad hoc for this redshift evolution, namely: (1) two elliptical galaxies with masses 10^10 M⊙ and 10^12 M⊙ and with the same choice of input parameters as in Sect. 6.3 (black and blue lines); (2) an intermediate-type Sab model of 10^11 M⊙ (green line); and (3) a disc galaxy (Sd) of 10^11 M⊙ (red line). In all cases we show the evolution of the colour taking into account our dusty EPS (solid lines) and the classical EPS without dust (dotted lines). Right Panel: the same as in the left panel but for the colour [KS - r+] of the COSMOS survey (r+ is a filter of the Subaru telescope, while KS is from the Kitt Peak National Observatory).
Figure 9. A comparison of the monochromatic luminosities ν · L(ν) of three models with asymptotic mass 10^12 M⊙ with a sample of galaxies of various morphological types and masses by Takeuchi et al. (2010). We represent an elliptical galaxy (black lines), an intermediate-type galaxy (blue lines) and a disc galaxy (red lines). Solid lines represent the edge-on model, more affected by ISM extinction, while dashed lines represent the face-on model. The pass-bands on display are the J, H and K bands of 2MASS, FUV and NUV of GALEX, and u of SDSS.

Table 1. Baryonic masses of galaxies or galaxy components. Masses are in units of 10^12 M⊙
Celecoxib Suppresses NF-κB p65 (RelA) and TNFα Expression Signaling in Glioblastoma

Background: Glioblastoma (GBM) harbors significant genetic heterogeneity, high infiltrative capacity, and patterns of relapse following many therapies. The expression of nuclear factor kappa-B (NF-κB p65 (RelA)) and its signaling pathways is constitutively activated in GBM through inflammatory stimulation such as tumor necrosis factor-alpha (TNFα), cell invasion, motility, abnormal physiological stimuli, and inducible chemoresistance. However, the underlying anti-tumor and anti-proliferative mechanisms of NF-κB p65 (RelA) and TNFα are still poorly defined. This study aimed to investigate the expression profiling of NF-κB p65 (RelA) and TNFα as well as the effectiveness of celecoxib along with temozolomide (TMZ) in reducing the growth of the human GBM cell line SF-767. Methods: genome-wide expression profiling, enrichment analysis, immune infiltration analysis, quantitative expression analysis, and the Microculture Tetrazolium Test (MTT) proliferation assay were performed to appraise the effects of celecoxib and TMZ. Results: the analyses demonstrated the upregulation of NF-κB p65 (RelA) and TNFα, and celecoxib reduced the viability of the human glioblastoma cell line SF-767, cell proliferation, and NF-κB p65 (RelA) and TNFα expression in a dose-dependent manner. Overall, these findings demonstrate for the first time how celecoxib therapy could mitigate the invasive characteristics of the human GBM cell line SF-767 by inhibiting the NF-κB-mediated stimulation of the inflammatory cascade. Conclusion: based on the current findings, we propose that celecoxib as a drug candidate in combination with temozolomide might dampen the transcriptional and enzymatic activities associated with the aggressiveness of GBM and reduce the expression of GBM-associated NF-κB p65 (RelA) and TNFα inflammatory genes.
Introduction

High-grade gliomas constitute the majority of malignant brain tumors and are known to develop from mutant glial or glial progenitor cells [1]. The most prevalent and deadly primary brain tumor, glioblastoma (GBM), accounts for 50% of all gliomas [2,3]. Because the overall survival, tumor cell invasion, and therapeutic response for GBM are so dismal, molecular variables likely play a crucial role in the available therapy options [4]. Genome-scale gene expression profiling enables the molecular analysis of intratumor variability, revealing molecular signatures that reflect underlying pathogenic mechanisms and molecular traits that may be related to survival [5]. The evolutionarily conserved transcription factors known as nuclear factor kappa B (NF-κB) proteins coordinate several important biological processes, including immunity, inflammation, cell death, and survival. An evolutionarily conserved Rel homology domain is shared by the five mammalian family members RelA (p65), RelB, c-Rel, NFKB1 (p105/p50), and NFKB2 (p100/p52), which promotes DNA binding and dimerization with other NF-κB subunits [6,7]. It has been suggested that targeting NF-κB p65 (RelA) increases survival by promoting a tumor microenvironment (TME) that is less immunosuppressive and more receptive to immunomodulation. Tumor necrosis factor-alpha (TNFα) and the accompanying receptor superfamily have been linked to the development of GBM, according to a prior study [8]. The pro-inflammatory cytokine TNFα is linked to both pro- and anti-apoptotic responses through its signaling pathways [9]. Interestingly, constitutively produced TNFα promotes glioma cell invasion and motility by activating NF-κB p65 (RelA) [10]. Radiation therapy, chemotherapy, and surgical resection are the therapeutic treatments most commonly used to treat GBM. However, a basic characteristic of GBM is that its cells typically invade the brain parenchyma. Additionally, one characteristic of GBM cells is
their chemoresistance to TMZ. To combat the spread and invasion of tumor cells in GBM, more effective curative regimens are urgently needed. The prognosis for patients receiving the current standard of care is still quite dismal, with a five-year overall survival rate below 5% [11]. Unfortunately, clinical trials investigating immunotherapies have shown limited success in GBM patients [12]. Additionally, it has been shown that dysregulation of NF-κB signaling in human GBM enhances glioma cell survival, proliferation, and chemoresistance [13]. In this regard, therapeutically disabling NF-κB p65 (RelA) expression and enzymatic function seems a better approach to disrupting the NF-κB p65 (RelA) inflammatory signaling cascade by preventing the spread and invasion of tumor cells. According to multiple investigations, celecoxib suppresses the development of tumor cells by interacting with several cyclooxygenase (COX)-independent targets [14]. To treat recurrent malignant gliomas, celecoxib-based treatment regimens are in phase I and phase II clinical trials, and such combinations have been determined to be safe [15,16]. Here, we hypothesized that celecoxib-mediated antineoplastic responses in GBM may prevent NF-κB p65 (RelA) activation, given its various roles in GBM. Celecoxib is a nonsteroidal anti-inflammatory drug (NSAID) and a selective inhibitor of cyclooxygenase-2 (COX-2). COX-2 is involved in the production of pro-inflammatory prostaglandins, which can activate the NF-κB pathway. Celecoxib inhibits COX-2 and prostaglandin synthesis, so it could interfere with NF-κB activation downstream of COX-2 [13].
In this investigation, a comparative study of the mRNA expression of NF-κB p65 (RelA) and TNFα in both 33 brain tumor samples and TCGA datasets revealed that the transcriptional activity of these genes is significantly higher in tumor samples than in normal samples. NF-κB p65 (RelA) and TNFα were found to be significantly expressed in tumor samples of various cancers through pan-cancer expression analysis. This was followed by an investigation of these genes' expression in the TCGA GBM datasets, and clinical biopsies of GBM patients confirmed the high expression of the respective genes. Additionally, functional enrichment analysis and immune infiltration analysis were carried out. After investigating the cytotoxic effects of TMZ and celecoxib in the GBM SF-767 cell line, the expression of the candidate genes was analyzed in a dose-dependent manner for both drugs to study their anti-inflammatory potential. Celecoxib's impact on tumor cell invasion in glioblastoma through regulation of NF-κB activation and mRNA expression has not been the subject of any studies to date. The aim of this study was to assess the effect of celecoxib on the invasive characteristics of the human glioblastoma cell line SF-767 by modifying the NF-κB cascade and NF-κB p65 (RelA) transcriptional levels. Our findings provide evidence that the antineoplastic activity of celecoxib is mediated via suppression of NF-κB p65 (RelA) signaling in glioblastoma. The current investigation can act as a springboard to examine the effects of combined radiation, TMZ, and celecoxib therapy in GBM patients.
Bioinformatics Analysis

The Cancer Genome Atlas (TCGA) and Genotype-Tissue Expression (GTEx) databases, data collection and analysis repositories on cancer [17], are accessible at https://portal.gdc.cancer.gov (accessed on 19 March 2022). The databases were used to ascertain the expression levels of NF-κB p65 (RelA) and TNFα in high-grade glioma (DNA and mRNA) using the following criteria: p-value = 0.05, fold change 2, and top 10% gene rank for all data types. Using the TCGA and GTEx datasets and the integration of the Bioconductor R packages (Bioconductor version 3.17) with web servers, i.e., the GEPIA platform (GEPIA2, 2019 release; http://gepia.cancer-pku.cn (accessed on 19 March 2022)), gene expression profiles of several cancer types and paired normal samples were created [18]. By examining 9736 tumor and 8587 normal RNA sequencing samples gathered from the TCGA and GTEx programs, the GEPIA2 web server was used to obtain the gene expression profiles of NF-κB p65 (RelA) and TNFα across the majority of cancer types. Then, by contrasting 163 tumors with 207 normal samples from TCGA and GTEx, it was possible to assess the pattern of NF-κB p65 (RelA) and TNFα expression in GBM. Furthermore, we examined the relationship between overall survival (OS) of GBM patients and the differential expression of NF-κB p65 (RelA) and TNFα in the pan-cancer analysis. p < 0.05 was regarded as statistically significant for the survival curve. The expression of NF-κB p65 (RelA) and TNFα in GBM was utilized to create the overall survival (OS) curves; patients with a high level of expression (>median expression value) and patients with a low level of expression (<median expression value) were defined. The Kaplan-Meier (KM) method was used to evaluate overall survival, with a log-rank test (statistically significant: p-value < 0.05), and to determine the hazard ratio (HR) with
a 95% confidence interval [19]. In order to validate the patient survival statistics, through UALCAN we also ascertained the NF-κB p65 (RelA) and TNFα gene expression levels in GBM and determined the functionality of the genes that affected the patients' survival times. We examined the NF-κB p65 (RelA) and TNFα association profiles in healthy brain tissue and GBM samples using the UALCAN database (http://ualcan.path.uab.edu (accessed on 19 March 2022)). We discovered a connection between candidate gene expression levels and the grade of GBM tumors. Kaplan-Meier survival analysis was employed by UALCAN, which provides survival curves, log-rank p-values, and HRs with 95% confidence intervals. The statistical significance for the survival curve was set at p < 0.05 [20]. TPM normalization for gene expression analysis was utilized by GEPIA and UALCAN [21,22]; it adjusts gene expression values by accounting for the total number of reads and the transcript length, allowing for accurate comparisons of gene expression levels across samples [23,24]. GeneMANIA was used in this study to analyze the networks and roles of the NF-κB p65 (RelA) and TNFα proteins. Through the GeneMANIA network, we accessed NF-κB p65 (RelA) and TNFα interacting genes [25]. Following that, functional studies of these genes were performed using FunRich and Metascape [26]. The expression levels of NF-κB p65 (RelA) and TNFα in GBM were measured using TIMER, and the relationship between these expression levels and immune infiltration levels in GBM was assessed [27].
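The TPM normalization described above (read counts divided by transcript length, then rescaled so each sample sums to one million) can be sketched in a few lines; the counts and lengths below are illustrative, not values from this study.

```python
def tpm(counts, lengths_bp):
    """Transcripts per million: normalize read counts by transcript
    length (in kilobases), then rescale so the sample sums to 1e6."""
    rates = [c / (l / 1000.0) for c, l in zip(counts, lengths_bp)]
    total = sum(rates)
    return [r * 1e6 / total for r in rates]

# Illustrative counts for three hypothetical transcripts of
# lengths 1000, 1500 and 3000 bp.
values = tpm([100, 300, 600], [1000, 1500, 3000])
```

Because every sample is rescaled to the same total, TPM values are directly comparable across samples, which is what allows GEPIA and UALCAN to contrast tumor and normal cohorts.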
Ethics Statement

The study was carried out in accordance with the Declaration of Helsinki, and it was approved by the Capital University of Science and Technology (CUST), Islamabad, Pakistan (Ref: BI and BS/ERC/19-2, 23 September 2019). All patients gave their verbal and written agreement to the use of their data for research purposes. The biopsy samples of 33 patients with glioblastoma (23 men, 10 women, median age 50 ± 13 years) who underwent brain surgery between January 2018 and December 2021 were obtained from various surgery departments of public sector tertiary care hospitals in Pakistan. None of the study subjects had received any radiotherapy or chemotherapy prior to sample collection.

Tissue Samples

The samples were initially obtained from patients primarily from the affected brain regions, specifically the frontal and temporal sites of the primary tumor, through surgical resection. The collection of tumor tissue samples was conducted with attention to variations in cellularity and the presence of necrotic areas in patients with glioblastoma multiforme (GBM). Tumor-associated normal tissues (TANT) were typically obtained from the region adjacent to the tumor mass [28,29]. The minimum weight required for processing, as per internal guidelines, is 125 mg of tumor tissue and 50 mg of adjacent normal tissue. Volumetric measurement was utilized to assess the size of the GBM tumor samples. The tissue specimens were sectioned into small fragments (approximately 1-2 mm³ in size) with a sterile scalpel. These fragments were subsequently preserved by formalin fixation and paraffin embedding (FFPE) for the purpose of histopathological examination and immunohistochemistry. Additionally, tissue fragments were stored at ultra-low temperature (−80 °C) in order to maintain the integrity of nucleic acids and proteins and prevent degradation [30,31].
Quantitative qRT-PCR Analysis

In order to reduce degradation by ubiquitous DNases and RNases, glioblastoma biospecimens designated for genomic analysis were microdissected and kept in the nucleic acid stabilizing reagent RNAlater (Sigma-Aldrich, Cat No. R0901, Saint Louis, MI, USA). Specimens were immediately frozen in liquid nitrogen after ablation and stored at −80 °C until RNA extraction. Total RNA was extracted using the TRIzol reagent (Thermo Fisher Scientific, Cat No. 15596018, Carlsbad, CA, USA). Superscript II reverse transcriptase (Invitrogen, Paisley, UK) was used to create cDNA with the cDNA synthesis kit (Thermo Fisher Scientific, Cat No. K1622, Vilnius, Lithuania), and the SYBR® Green Master Mix kit (Maxima SYBR Green/ROX qPCR Master Mix (2×), Thermo Fisher Scientific, Cat No. K0221, Vilnius, Lithuania) was utilized for qPCR to amplify the specific PCR products of all three genes presented in this work (Thermo Scientific, Carlsbad, CA, USA). Using a NanoDrop One spectrophotometer from Thermo Fisher Scientific, the purity of each RNA sample was determined. Reactions for each sample were performed in triplicate using a PCR protocol. Following 3 min of initial denaturation at 95 °C, the cycling conditions were 40 cycles consisting of denaturation at 95 °C for 10 s followed by annealing and extension at 60 °C for 30 s. The results were presented as CT values, defined as the threshold PCR cycle number at which an amplified product was first detected. The average CT value was calculated for both NF-κB p65 (RelA) and TNFα, and the ∆CT value was determined from the mean of the triplicate CT values. The 2^−∆∆CT method was used to analyze the relative changes in gene expression [32,33]. The primers used for TNFα were (Forward Primer CCTCTCTCTAATCAGCCCTCTG and Reverse Primer GAGGACCTGGGAGTAGATGAG), for NF-κB p65 (RelA) (Forward Primer AGGCAAGGAATAATGCTGTCCTG and Reverse Primer ATCATTCTCTAGTGTCTGGTTGG), and for β-actin (Forward Primer
CATGTACGTTGCTATCCAGGC and Reverse Primer CTCCTTAATGTCACGCACGAT) [34,35].

ELISA

Prior to protein extraction, high-grade glioma biopsy samples were placed in sterile containers, frozen, and kept at −80 °C. The supernatants were slowly defrosted on ice. A total of 100 µL of supernatant was measured using 96-well enzyme-linked immunosorbent assays (ELISA). Protein-specific ELISA kits (Abcam ELISA kits, USA) were used to measure the levels of TNFα and NF-κB p65 (RelA) in accordance with the manufacturer's instructions. Known concentrations of TNFα and NF-κB p65 (RelA) were added to the ELISA plate. The OD values obtained from the standards were used to plot a calibration curve, from which protein concentrations were interpolated based on their OD values. Blank correction was used to remove background noise, and OD values were obtained for each well of the ELISA plate at 450 nm as per the guidelines. Sample processing involved homogenization of the glioblastoma tissue samples, extraction of the proteins of interest, incubation, washing, detection, and substrate addition. The specific details of the ELISA procedure followed the specific kit and manufacturer's instructions (Abcam ELISA kits, Boston, MA, USA). Cytokine levels were assessed using the appropriate ELISA MAX™ Deluxe Set in accordance with the manufacturer's guidelines (TNFα ELISA Kit, Cat No. ab181421; NF-κB p65 (RelA) ELISA Kit, Cat No. ab176648). The specific binding optical density at 450 nm was determined with a spectrophotometer [36].

Statistics

The RT-PCR and ELISA data are expressed as the mean ± SD from at least three independent experiments and were analyzed with GraphPad Prism 9 (Prism 9.5.0) software. The chi-square test and the two-tailed Student's t-test were used to compare the statistical significance between the two groups. The D'Agostino and Pearson tests were used for the normality assessment.
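The standard-curve interpolation used to read protein concentrations off the ELISA calibration curve can be sketched as a piecewise-linear lookup; the standard concentrations and OD values below are illustrative, not values from the Abcam kits.

```python
def concentration_from_od(od, standards):
    """Interpolate an analyte concentration from a blank-corrected OD
    using a piecewise-linear standard curve.
    standards: list of (concentration, OD) pairs from the kit standards."""
    pts = sorted(standards, key=lambda p: p[1])  # order by OD
    for (c0, od0), (c1, od1) in zip(pts, pts[1:]):
        if od0 <= od <= od1:
            frac = (od - od0) / (od1 - od0)
            return c0 + frac * (c1 - c0)
    raise ValueError("OD outside the range of the standard curve")

# Illustrative standards (pg/mL, OD at 450 nm).
standards = [(0.0, 0.05), (50.0, 0.25), (100.0, 0.45), (200.0, 0.85)]
conc = concentration_from_od(0.35, standards)
```

Samples whose OD falls outside the standards' range cannot be interpolated reliably and would normally be diluted and re-run.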
In vitro study of NF-κB p65 (RelA) and TNFα in the SF-767 human glioblastoma cell line.

Cell Line and Culture Conditions

The human glioma cell line SF-767 was cultivated as monolayers in 75 cm² tissue culture flasks in Iscove's Modified Dulbecco's Medium (IMDM), supplemented with 10% fetal bovine serum (FBS), 1% glutamine, and a combination of 100 IU/mL penicillin and 100 µg/mL streptomycin. Cell cultures were subcultured three times weekly and kept at 37 °C in a humidified 5% CO2 environment. Using cell cultures at low passages, each assay with glioma cell lines was carried out separately in triplicate.

MTT Cellular Proliferation Assay

The antiproliferative impact of the treatment was assessed using the MTT assay (Roche Diagnostics GmbH, Basel, Switzerland). The yellow tetrazolium salt MTT [3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl tetrazolium bromide] can only be broken down into purple formazan crystals by metabolically active cells. Three replicates of 10,000 cells/well in 200 µL medium were used to seed the 96-well culture plates. Approximately 10 µL of MTT reagent was added to each well following each treatment, and the plates were then incubated at 37 °C for 4 h. A spectrophotometer set at λ = 595 nm was used to measure the optical density (OD) after the cells had been lysed with 100 µL of solubilization buffer. Results are given as percentages relative to the control. The mean values from the cell viability studies were statistically compared using the Student's t-test in Microsoft Excel with one-tailed distributions. Analysis of variance (ANOVA) and the t-test were used to examine the significance of differences between the study groups. Values with p < 0.05 were judged statistically significant. Results are presented as the mean ± standard deviation (SD) for all data. Each study was carried out in triplicate.
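The percent-viability calculation behind the MTT readout (treated OD relative to untreated control, both blank-corrected and averaged over triplicates) can be sketched as follows; the OD values are illustrative, not measurements from this study.

```python
def percent_viability(ods_treated, ods_control, od_blank=0.0):
    """Mean blank-corrected OD of treated wells expressed as a
    percentage of the mean blank-corrected OD of control wells."""
    mean_treated = sum(ods_treated) / len(ods_treated) - od_blank
    mean_control = sum(ods_control) / len(ods_control) - od_blank
    return 100.0 * mean_treated / mean_control

# Illustrative triplicate ODs at 595 nm for one drug dose vs. control.
viability = percent_viability([0.42, 0.40, 0.38], [0.80, 0.82, 0.78])
```

Repeating this per dose gives the dose-response curve from which a dose-dependent reduction in viability is read.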
Quantitative Expression Analysis by qRT-PCR

To isolate total RNA, SF-767 cells were treated with 50 µM, 100 µM, and 150 µM TMZ and celecoxib for 48 h in six-well plates. TRIzol was used to extract total RNA from the control and treated SF-767 cultures. Using a NanoDrop spectrophotometer, the RNA quantity was calculated at 260 nm absorbance. The RNA was washed in 1 mL of ethanol before being dissolved in 50 µL of water treated with 0.1% diethyl pyrocarbonate (DEPC) and stored at −80 °C until use. According to the instructions provided by the manufacturer, 1 µL of RNA was reverse transcribed into cDNA using the RevertAid™ first-strand synthesis kit (Thermo Scientific, Cat No. K1622). The forward and reverse primers (0.5 µM each) employed in the current study are as listed previously. The template cDNA (2 µL) was added to a final reaction volume of 20 µL. qRT-PCR was carried out using a StepOnePlus thermocycler from Applied Biosystems and SYBR Green PCR Master Mix from Thermo Fisher (catalogue number K0221). The transcript expression of the selected genes (NF-κB p65 (RelA) and TNFα) was normalized to the internal control GAPDH gene. All real-time PCR assays were carried out in triplicate, and the results are presented as the mean of three independent experiments to detect any significant differences between cells treated with TMZ, celecoxib, and untreated control cells. Each experiment was carried out at least three times, and the results are presented as the mean ± standard deviation. Statistical evaluations were performed using GraphPad Prism 9 (Prism 9.5.0) software (GraphPad Software Inc., La Jolla, CA, USA). Data analysis employed one-way analysis of variance (ANOVA). The significance criterion for differences between means was set at p < 0.05.
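The 2^−ΔΔCT (Livak) relative-quantification step used throughout the qRT-PCR analyses can be sketched as follows; the CT values are illustrative, not data from this study.

```python
def fold_change_2ddct(ct_target_treated, ct_ref_treated,
                      ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCT (Livak) method:
    ΔCT = CT(target) - CT(reference gene);
    ΔΔCT = ΔCT(treated) - ΔCT(control); fold change = 2^-ΔΔCT."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Illustrative mean CT values: after treatment the target gene
# crosses threshold two cycles later relative to the reference,
# i.e. roughly four-fold lower expression.
fc = fold_change_2ddct(26.0, 18.0, 24.0, 18.0)
```

A fold change below 1 indicates downregulation relative to the untreated control, which is the direction reported here for NF-κB p65 (RelA) and TNFα under celecoxib.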
The NF-κB p65 (RelA) sequencing data from GEPIA also revealed increased expression in GBM in transcripts per million, as demonstrated in Figure 1b. Similarly, higher levels of TNFα expression were seen in the following cancer types: BLCA, BRCA, CESC, CHOL, COAD, DLBC, ESCA, GBM, HNSC, KIRP, KIRC, LAML, LGG, LIHC, MESO, OV, PAAD, PCPG, PRAD, READ, SARC, STAD, TGCT, UCEC, and UCS, while lower levels of TNFα expression were seen in the remaining cancer types (Figure 1c). Similarly, TNFα showed differential expression among different cancers in a bar plot (Supplementary Figure S2). Although TNFα expression was increased in a number of malignancies, glioblastoma showed the greatest enhancement of TNFα expression. Then, using GEPIA, we examined the TNFα RNA sequencing data. The highest TNFα transcript expression levels per million were seen in GBM compared to matched normal tissues (Figure 1d). Additionally, GBM had the highest levels of NF-κB p65 (RelA) gene expression according to the TIMER database (Figure 1e). GBM also showed differential TNFα gene expression levels in the TIMER database (Figure 1f). These results demonstrate that the expression of NF-κB p65 (RelA) and TNFα in GBM is much higher than in normal tissues. As a result, it was worthwhile to investigate further the link between NF-κB p65 (RelA), TNFα and related genes in the GBM network, since it may have possible diagnostic value for GBM.
NF-κB p65 (RelA), TNFα and Survival in GBM

We assessed the prognostic significance of NF-κB p65 (RelA) and TNFα in cancer using GEPIA to determine whether the expression levels of these proteins are connected to the prognosis of cancer patients. Although NF-κB p65 (RelA) and TNFα expression levels varied depending on the type of tumor, we found that GBM exhibited an association between NF-κB p65 (RelA) and TNFα expression levels and overall survival time (OS). Furthermore, the results demonstrate that GBM with overexpression of NF-κB p65 (RelA) had a poor OS prognosis, while low levels of NF-κB p65 (RelA) were associated with a higher median survival, although not statistically significantly (Figure 2a). Similarly, low levels of TNFα also showed higher median survival but without statistical significance (Figure 2b). These genes were further investigated because of their trend towards poor survival with higher marker expression. Thus, we used global databases to compare and investigate the connection between these genes and GBM. NF-κB p65 (RelA) and TNFα overexpression were linked to poor survival outcomes in GBM patients according to data from the UALCAN database (Figure 2c,d), which, for the most part, coincided with the results from the GEPIA2 database.
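The Kaplan-Meier product-limit estimate behind the OS curves compared above can be sketched in a few lines; the survival times and event flags below are illustrative, not patient data from this study.

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns (time, S(time)) pairs at each distinct observation time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    curve, s, i = [], 1.0, 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]   # count observed deaths at time t
            removed += 1           # deaths and censorings leave the risk set
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
        curve.append((t, s))
        n_at_risk -= removed
    return curve

# Illustrative cohort: deaths at t=2 and t=5, one censoring at t=3.
curve = kaplan_meier([2, 3, 5], [1, 0, 1])
```

In the analyses described above, patients are first split into high- and low-expression groups at the median expression value, and the two estimated curves are then compared with a log-rank test.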
PPI Network and Functional Enrichment Analyses

The NF-κB p65 (RelA) and TNFα proteins showed functional networks in the PPI analysis, which were primarily enriched in various functions. Metascape was used to analyze the biological functions of the NF-κB p65 (RelA) and TNFα interacting genes. We discovered that these genes strongly influence response to stimuli, metabolism, biological regulation, the immune system, multicellular organismal processes, cellular component organization or biogenesis, and developmental processes (Figure 3a,b). The biological functions and gene interactions of NF-κB p65 (RelA) and TNFα were also assessed by GeneMANIA (Figure 3c,d), and the results were quite comparable to those of Metascape. This provided evidence of the molecular processes connected to the interaction between the NF-κB p65 (RelA) and TNFα genes. The STRING database was used to conduct protein-protein interaction enrichment analysis for each supplied gene list (Figure 3e). The resulting network contains the subset of proteins that physically interact with at least one other member of the list. The Molecular Complex Detection (MCODE) algorithm was used to discover densely connected network components when the network contained between 3 and 500 proteins. The MCODE networks for the specific genes, MCODE1 of p65 (RelA) and MCODE1 of TNFα, have been compiled and are displayed in Figure 3f,g. Each MCODE component was subjected to pathway and process enrichment analysis separately, and the three terms with the best p-values were kept as the functional descriptions of the associated components, as indicated in Table 1 underneath the relevant network plots in Figure 3f,g.
Correlation between Expression Levels and Immune Cell Infiltration Levels
We used the TIMER web server with the integration of EPIC, CELL, CIBERSORT, and QUANTISEQ to visualize the correlation between NF-κB p65 (RelA) and TNFα gene expression levels and immune infiltration levels in GBM. We found that the expression levels of TNFα were positively correlated with B cell, CD8+ T cell, CD4+ T cell, monocyte, macrophage, myeloid dendritic cell, and NK cell infiltration levels in GBM, whereas TNFα was negatively correlated with Treg cells (Figure 4a). Similarly, NF-κB p65 (RelA) was positively correlated with CD4+ T cells, Treg cells, myeloid dendritic cells, B cells, M0 macrophages, NK cells, and neutrophils, and negatively correlated with CD8+ T cells and M2 macrophages, as shown in Figure 4b.
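TIMER reports these associations as rank-correlation coefficients (typically Spearman's rho). A self-contained sketch of the rank correlation itself, applied to invented expression and infiltration values (no tie handling):

```python
def spearman_rho(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# Hypothetical TNFa expression vs. B-cell infiltration fraction
expr = [1.2, 3.4, 2.1, 5.6, 4.3]
infiltration = [0.05, 0.20, 0.11, 0.40, 0.28]
print(spearman_rho(expr, infiltration))  # 1.0 (perfectly monotone toy data)
```

A rho near +1 corresponds to the "positively correlated" entries above, and a rho near −1 to the negatively correlated ones.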
Consistency of mRNA Expression Profiling and Validation in a GBM Patient Sample
To evaluate the potential role of NF-κB p65 (RelA) and TNFα in glioblastoma, we quantified the expression of NF-κB p65 (RelA) and TNFα in 33 GBM samples. mRNA and protein expression of NF-κB p65 (RelA) and TNFα were significantly increased in glioblastoma biopsy samples. We examined the expression of the targeted genes in glioblastoma specimen sections within the tumor and in tumor-associated normal tissue (TANT). Tumor-associated normal tissue is obtained from the vicinity of the tumor site and serves as a comparison or control tissue for studying various aspects of tumor biology, including gene expression, signaling pathways, and cellular interactions; in the current study, it was obtained from the region adjacent to the tumor mass. All tissue samples were initially cut from four regions of the specimen, but only a selection with sufficient RNA quality and quantity was subjected to RT-PCR gene expression analysis. Normal Q-Q plots indicate the same distribution and the univariate normality of the dataset. NF-κB p65 (RelA) and TNFα exhibited increased expression in tumor tissue biopsy
samples (Figure 5a,b). These data are also consistent with the ELISA findings, with the indication of a univariate normality test. The fold change (FC) of each gene's expression was computed, and genes with |log2FC| > 2 and a significant p-value (adjusted by the false discovery rate) were identified (Figure 5c,d).

In vitro expression study of NF-κB p65 (RelA) and TNFα in the SF-767 glioblastoma cell line after treatment with temozolomide and celecoxib.
The Effect of Temozolomide and Celecoxib Treatment on Glioblastoma Cells
The effects of TMZ treatment on the glioblastoma SF-767 cell line were evaluated for 48 h at concentrations of 10, 50, 100, 150, and 200 µM. After exposure to 10 µM TMZ, the cells were inhibited by 33.1%. A higher TMZ concentration (200 µM) was more lethal, resulting in 89% of the GBM cells dying following the treatment. Similarly, after being treated for 48 h with 10 µM of celecoxib, 12.8% of the cells died, whereas after 48 h with 200 µM of celecoxib, 75% of the cells died (Figure 6a). The results showed that cell viability decreased with increasing concentration and duration of treatment. A higher dose of TMZ resulted in a higher cytotoxic effect in the MTT assay, as shown in Figure 6a.

Moreover, the expression of the inflammatory biomarkers NF-κB p65 (RelA) and TNFα was studied after treatment with temozolomide and celecoxib. A quantitative RT-PCR analysis was performed in the SF-767 cell line treated with temozolomide at three concentrations (50 µM, 100 µM, and 150 µM) to assess the mRNA expression levels of the inflammatory genes NF-κB p65 (RelA) and TNFα. Pro-inflammatory genes were significantly elevated in the stress groups after stimulation with IL-1β, a potent stimulator of inflammatory responses. The cells were treated for 48 h. Compared to the stress group, TMZ did not significantly reduce the expression of NF-κB p65 (RelA) and TNFα in the glioblastoma SF-767 cell line in a dose-dependent manner (Figure 6b,c). Furthermore, the mRNA expression levels of NF-κB p65 (RelA) and TNFα in the celecoxib-treated SF-767 cell line were studied at the same concentrations as TMZ. Compared to the stress group, celecoxib significantly reduced the expression of NF-κB p65 (RelA) and TNFα in the glioblastoma SF-767 cell line in a dose-dependent manner (Figure 6d,e).
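The inhibition percentages quoted in this section follow the standard MTT readout, inhibition = (1 − A_treated / A_control) × 100. A small sketch with invented absorbance (OD) values — only the formula, not the data, reflects the assay:

```python
# Percent inhibition derived from MTT absorbance readings.
# All absorbance values are hypothetical.

def percent_inhibition(a_treated, a_control):
    return (1.0 - a_treated / a_control) * 100.0

a_control = 1.20  # untreated wells (hypothetical OD570)
doses_um = [10, 50, 100, 150, 200]
a_treated = [0.80, 0.60, 0.42, 0.25, 0.13]

curve = {d: round(percent_inhibition(a, a_control), 1)
         for d, a in zip(doses_um, a_treated)}
print(curve)  # {10: 33.3, 50: 50.0, 100: 65.0, 150: 79.2, 200: 89.2}
```

With these invented readings the dose-response rises monotonically, mirroring the dose-dependent cytotoxicity described for TMZ and celecoxib in Figure 6a.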
Discussion
Glioblastoma (GBM) is an aggressive brain tumor with a more than 90% chance of recurrence. Determining biomarkers for GBM's early diagnosis and prognosis is therefore important. Our comprehensive integrated approach, combining bioinformatics and clinical sample analysis, identified the NF-κB pathway transcription factor NF-κB p65 (RelA) and its related cytokine TNFα as prospective targets in GBM. In this study, NF-κB p65 (RelA) and TNFα were found, from global databases, to be highly expressed in many tumor types, including GBM. A wealth of evidence indicates that NF-κB p65 (RelA) and TNFα play crucial roles in glioblastoma's ability to invade and infiltrate in vitro [37,38]. Therefore, we investigated a variety of datasets, including the Oncomine, GEPIA, and TIMER databases, to study the relationship between NF-κB p65 (RelA) and TNFα expression in GBM. Our bioinformatics analysis revealed that NF-κB p65 (RelA) expression levels were comparatively higher in GBM than in other tumors, while TNFα showed differential expression across GBM and other cancers. In GBM, various proteins and signaling pathways are dysregulated, which could lead to NF-κB p65 (RelA) activation. TNFα is an extremely potent NF-κB p65 (RelA) activator. In the central nervous system (CNS), astrocytes, microglia, and certain neurons release the pro-inflammatory cytokine TNFα. TNFα exerts its effects through two receptors, TNFα receptors 1 and 2 (TNFR1 and TNFR2, respectively) [39]. The majority of cells typically express TNFR1, whereas oligodendrocytes and immune cells, particularly microglia, express TNFR2. Additionally, it has been found that GBM and its associated endothelial cells express higher levels of TNFR1 compared to normal brain tissue and low-grade gliomas [40]. This suggests that TNFα may be a possible diagnostic marker for GBM through the NF-κB signaling cascade. The dysregulation of numerous signaling pathways or growth factors and the triggering
of a pro-inflammatory microenvironment in gliomas may lead to the activation of NF-κB p65 (RelA) [41,42]. High constitutive NF-κB p65 (RelA) activity is characteristic of GBM [43].

The impact of NF-κB p65 (RelA) and TNFα expression on the survival time of GBM patients was then assessed using the UALCAN and GEPIA databases. These results revealed that high NF-κB p65 (RelA) and TNFα expression levels were independent predictors of decreased OS in GBM. Consistent with previous studies, patients with GBM had shorter survival times due to upregulation of NF-κB p65 (RelA) and TNFα [44]. Constitutive NF-κB p65 (RelA) activation appears to promote the growth and spread of tumors through a range of mechanisms, including effects on metastasis, apoptosis, cell proliferation, angiogenesis, and metabolic reprogramming. It has been established that NF-κB p65 (RelA) stimulates the development of an inflammatory milieu that is conducive to the establishment of cancer [45]. Constitutive NF-κB p65 (RelA) activation thus promotes survival and development in GBM.
These findings imply that these two genes can serve as significant predictive markers for people with GBM [46]. Additionally, we used qRT-PCR to reanalyze NF-κB p65 (RelA) and TNFα gene expression levels and transcripts by calculating the relative fold change in gene expression between a control sample (TANT) and experimental samples (samples from GBM patients) through the ∆∆Ct normalization method, and we confirmed their higher levels. β-actin, also known as ACTB, is a commonly used reference gene for normalization in qRT-PCR. Owing to its ubiquitous and consistent expression, β-actin has been validated and extensively used as a reference gene in numerous studies, including glioblastoma research; its selection for normalization in glioblastoma qRT-PCR experiments is based on its consistent and reliable expression across samples [47]. Since the expression trend of the NF-κB p65 (RelA) and TNFα proteins was essentially consistent with that of the transcripts, we inferred that NF-κB p65 (RelA) and TNFα might be early diagnostic markers for GBM patients [48]. To better understand why elevated expression levels of NF-κB p65 (RelA) and TNFα are significant for the poor prognosis of GBM patients, the PPI network of NF-κB p65 (RelA) and TNFα was investigated with GeneMANIA, and the biological processes associated with the NF-κB p65 (RelA) and TNFα interaction genes were analyzed with Metascape. Several previous studies support the view that NF-κB p65 (RelA) and TNFα are biologically plausible candidates that play an important role in the inflammatory processes involved in various cancers, including GBM [49]. A poor prognosis can be associated with elevated levels of markers such as C-reactive protein (CRP) and interleukin-6 (IL-6), which indicate systemic inflammation and link it to tumor aggressiveness and decreased overall survival [50]. Secondly, the release of pro-inflammatory cytokines by immune
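The ∆∆Ct normalization named above reduces to the 2^(−∆∆Ct) formula: normalize the target gene's Ct to the reference gene (here β-actin/ACTB) in both tumor and TANT, then take the difference between the two. A minimal sketch — all Ct values are invented, not the study's measurements:

```python
import math

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative fold change by the 2^(-ddCt) method."""
    dct_sample = ct_target_sample - ct_ref_sample      # normalize to ACTB in tumor
    dct_control = ct_target_control - ct_ref_control   # normalize to ACTB in TANT
    ddct = dct_sample - dct_control
    return 2.0 ** (-ddct)

# Hypothetical Ct values: RELA in tumor vs tumor-associated normal tissue
fc = ddct_fold_change(ct_target_sample=22.0, ct_ref_sample=18.0,
                      ct_target_control=26.0, ct_ref_control=19.0)
print(fc, math.log2(fc), abs(math.log2(fc)) > 2)  # 8.0 3.0 True
```

On these toy Ct values the fold change is 8 (log2FC = 3), which would pass the |log2FC| > 2 threshold applied in the Results.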
cells involved in inflammation that are linked to tumors, such as macrophages and microglia, contributes to the development of GBM and to the disease's resistance to treatment. Inflammatory mechanisms such as the NF-κB pathway are activated during GBM growth and therapy. As a result, both systemic and tumor-specific inflammation are associated with a poor prognosis for GBM patients [51,52].

In the current study, we hypothesized that the biological functions of NF-κB p65 (RelA) and TNFα were connected with immunological processes, resulting in a poor prognosis with elevated NF-κB p65 (RelA) and TNFα expression levels in GBM. Based on this presumption, TIMER was employed to examine the correlation between NF-κB p65 (RelA) and TNFα expression levels and immune cell infiltration levels in GBM. Numerous studies highlight NF-κB p65 (RelA)- and TNFα-mediated exacerbation of inflammation in the tumor microenvironment [53,54].

The NF-κB p65 (RelA) family of pleiotropic transcription factors is sequestered in the cytoplasm of most normal cells by noncovalent interaction. Recent investigations have demonstrated that various tumor cells express constitutively activated NF-κB p65 (RelA) [55]. Interestingly, in glioblastoma, TNFα induces tumor cell motility and invasion via the activation of NF-κB [56]. As anticipated, in this study TNFα increased SF-767 cell invasion alongside other metabolic stimuli; the presence of LDL protein and its receptors increased the proliferative turnover of growing tumor cells and caused NF-κB p65 (RelA) activation. SF-767 cells revealed high-affinity LDL binding and maximum binding capacity [57]. Our results were categorically established in GBM SF-767 cells with NF-κB p65 (RelA) overexpression and silencing as a positive modulator of NF-κB signaling, enhancing the translation of the p65 transcript. Temozolomide (TMZ), an oral alkylating cytostatic medication, is frequently used to treat GBM; however, over 50 percent of individuals who use it do not
experience any benefits [58]. Therefore, the SF-767 glioblastoma cell line was used for in vitro analysis in the current study. According to the data, when the SF-767 cell line was exposed to 10 µM TMZ, the inhibition was 33.1%. A higher concentration of TMZ (200 µM) proved to be lethal in GBM cells, resulting in 89% cell death after the treatment. Due to the heterogeneity of the GBM tumor and its highly angiogenic and metastatic characteristics, combination therapies are now regarded as an essential component of anticancer therapy, and cancer monotherapy has become a rare chemotherapeutic treatment choice [59]. The standard treatment for GBM is temozolomide therapy combined with surgery and radiation therapy, but because this approach has minimal effect on patients' overall survival, it is crucial to create drugs that can maximize their advantages and prevent tumor resistance [60]. Even though TMZ has been successful in treating GBM, the combination of TMZ with celecoxib would be a workable strategy for GBM: celecoxib can inhibit NF-κB p65 activation, counter the pro-inflammatory milieu, and improve therapeutic outcomes. The synergistic effects of celecoxib and TMZ on the apoptosis of GBM cells indicate that celecoxib may improve the effectiveness of TMZ, the current standard treatment for GBM [61]. Combining celecoxib with conventional therapies such as radiation or chemotherapy may potentially target cancer stem cells [62]. Celecoxib additionally enhanced the effects of glucocorticoids in triggering apoptosis in GBM cells by suppressing cyclooxygenase-2 (COX-2), which resulted in Akt-mediated activation of NF-κB and subsequent apoptosis [63,64]. Previous studies thus provide evidence that combining celecoxib with existing treatment modalities, such as TMZ or glucocorticoids, can have synergistic effects in GBM cells. These findings support the potential benefits of celecoxib as an adjunctive therapy to enhance the effectiveness of currently used GBM treatments. However,
mounting evidence pointing to NSAIDs' wide variety of COX-independent targets, such as NF-κB, β-catenin, PPARδ, NAG-1, and BCL-2, suggests that various molecular pathways are implicated in the anticancer effect of these medications [65]. Understanding the regulation of NF-κB in cancer has led to the exploration of novel therapeutic approaches. Studies have demonstrated the inhibitory effects of celecoxib on the activation of the NF-κB signaling pathway, which is implicated in both inflammatory responses and the progression of tumors [13]. Celecoxib has exhibited potential anti-cancer effects in both preclinical and clinical studies by virtue of its capacity to modulate NF-κB activity. Furthermore, celecoxib has received approval for therapeutic use in colon carcinogenesis, rheumatoid arthritis, and various inflammatory disorders, and studies have demonstrated its ability to induce apoptosis and inhibit angiogenesis [66]. Celecoxib has also been investigated as an adjunct therapy for certain cancer types, including colorectal cancer and pancreatic cancer. Prior studies have demonstrated that the utilization of celecoxib alongside conventional chemotherapy protocols has the potential to augment treatment efficacy and enhance overall patient survival rates [67,68]. The potential of celecoxib to impact crucial cellular processes associated with tumor growth, angiogenesis, and metastasis lies in its ability to target NF-κB.
Moreover, the investigation of NF-κB regulation and its modulation in specific cancer types has provided insights into potential therapeutic strategies beyond traditional chemotherapy. Targeted therapies aimed at inhibiting NF-κB signaling have been explored as a means to overcome drug resistance and improve treatment outcomes in cancers such as lymphoma, multiple myeloma, and breast cancer [69,70]. These studies highlight the potential clinical significance of targeting NF-κB in specific cancer contexts. Further research is needed to establish celecoxib's clinical efficacy, determine optimal treatment strategies, and identify biomarkers for personalized patient selection. In this study, we offer evidence that celecoxib inhibits NF-κB activation while inhibiting the development of GBM cells. The effectiveness of TMZ and COX-2 inhibitors in treating GBM in vivo and in vitro has been demonstrated in earlier research, but the underlying molecular mechanism has not been clarified [71]. However, it has recently been found that the NSAIDs indomethacin and flurbiprofen suppress the growth of glioma cells [72,73]. Celecoxib, a medication used to treat inflammation, is now also used to treat cancer. There is growing evidence that, despite being a selective inhibitor of COX-2, it exerts anti-tumor effects on cancer cells that do not contain the COX-2 enzyme. Several researchers are now conducting Phase II clinical studies to determine whether celecoxib, alone or in conjunction with other drugs, is beneficial in treating glioblastoma [13]. Celecoxib and temozolomide were also used to treat a rat orthotopic glioma model, showing that the two medications work well together to treat gliomas [15]. Our research supports these in vitro findings, but mounting evidence points to NSAIDs' wide spectrum of COX-independent targets, such as NF-κB p65 (RelA) and TNFα, indicating that a number of molecular pathways may be involved in the anti-neoplastic action of these drugs. In the present research,
we provide evidence that celecoxib suppresses the growth of GBM cells by inhibiting NF-κB activation and its signaling pathway. Additionally, individuals with glioblastoma receiving temozolomide, dexamethasone, and cranial radiation therapy for peritumoral brain edema could take celecoxib without any danger [74]. Celecoxib use has increased as a result of these trials, offering a desirable anti-glioma treatment plan. However, previous studies have also highlighted side effects and limitations of celecoxib use, including cardiovascular risks, gastrointestinal effects, renal complications, allergic reactions, and drug interactions [75,76]. Additionally, a recent study revealed some inconsistencies, and contraindications regarding COX-2 inhibitors and GBM invasion have been reported [73]. To manage potential adverse effects and ensure patient well-being, adequate monitoring, appropriate dosing, and regular follow-up are important [77,78]. The study's findings could be strengthened by considering the potential influence of age, gender, ethnicity, and environmental factors on the effects of COX-2 inhibitors in glioblastoma. These factors may contribute to variations in treatment response and outcomes, and exploring their impact could provide valuable insights for personalized medicine approaches.

In summary, our findings showed that celecoxib exhibits inhibitory effects on NF-κB activation, which is associated with inflammation, and that it also hampers proliferation and triggers apoptosis in GBM cells. These findings highlight the potential of celecoxib as a therapeutic agent in the treatment of GBM.
Conclusions
The expression profiles of NF-κB p65 (RelA) and TNFα in GBM patients were studied using the TCGA database and biopsy samples from glioblastoma patients who underwent surgery. We found high expression of these genes in GBM patients. NF-κB is a ubiquitous transcription factor that regulates the response to a diverse range of stimuli. Our research contributes to the individualized prognostic management of glioblastoma patients and provides evidence for targeting NF-κB and TNF family members. With over 90 percent of recurrent glioblastomas responding poorly to a second line of chemotherapy, acquired resistance to chemotherapy is a severe consequence of temozolomide therapy. This study explored the inhibitory effect of celecoxib on the expression of NF-κB p65 (RelA) and TNFα in the GBM cell line in comparison with TMZ. Our findings imply that celecoxib reduces the expression of NF-κB linked to suppression of COX-2, hence reducing the proliferation of glioblastoma, whereas temozolomide therapy has a greater effect on cell viability. In the future, if these drugs are used in combination, they may show a synergistic effect against SF-767 cells, with celecoxib decreasing cellular proliferation and temozolomide decreasing cell viability.

Figure 1. Expression levels of NF-κB p65 (RelA) and TNFα in various human cancers. (a) Expression profiles of NF-κB p65 (RelA) in tumors and paired normal tissue samples from the TCGA database.

Figure 2. Comparisons of the effects of high and low expression levels of NF-κB p65 (RelA) and TNFα on the survival time of GBM patients using the GEPIA database. (a) Elevated expression levels.

Figure 3.
Protein-protein interaction and functional enrichment analysis. (a) Clustered enrichment ontology categories (GO and KEGG terms) across the input gene NF-κB p65 (RelA), colored by p-values, by Metascape. (b) Metascape also constructed biological processes from the histogram of TNFα. (c-e) Gene interaction of NF-κB p65/RelA and TNFα by GeneMANIA and STRING. (f) The main biological processes in MCODE1 of NF-κB p65 (RelA) involving the interacting genes are depicted using a cluster analysis from Metascape. In Metascape, MCODE complexes can be recognized automatically based on their IDs. Network representation of enriched biological pathways facilitates the connections between various biological processes. (g) Network of the enriched terms of TNFα that were entered into this system for analysis. A circle node represents each term, and the node size is directly proportional to the number of input proteins grouped into each term. The node's color denotes its cluster identity. GO terms with a similarity score >0.3 are connected by an edge, and the edge thickness represents the similarity score.
Figure 4. Relationships between NF-κB p65 (RelA) and TNFα expression levels and immune cell infiltration levels in GBM. Correlation between the abundance of immune cells and the expression of (a) TNFα and (b) NF-κB p65 (RelA).

J. Clin. Med. 2023, 12, x FOR PEER REVIEW 14 of 23

Figure 5. Quantification, expression, and verification of NF-κB p65 (RelA) and TNFα in glioblastoma patients. (a,b) Elevated expression levels of the two candidate genes in biopsy tissue of glioblastoma. Distributions of gene expression levels are displayed, with the statistical significance of differential expression evaluated using the t-test. *** p < 0.001. The values of three biological replicates are shown, indicating a univariate normality test. The graphs were plotted with the GraphPad Prism 9 (Prism 9.5.0) software. (c,d) An enzyme-linked immunosorbent assay validated the expression of DEGs in glioblastoma patients with univariate normalization using the t-test. ** p < 0.01 and * p < 0.05, respectively.
Figure 6. Assessment of cytotoxicity by MTT proliferation assay. The inhibition ratio of GBM cells after treatment with TMZ and celecoxib at different concentrations for 48 h. (a) Black and red curves: TMZ and celecoxib in a dose-dependent manner. Increasing the concentration of either of these drugs makes the cytotoxic response more potent. The quantitative analysis of MTT is represented as the mean ± SD of three independent experiments, compared to the untreated control. (b) Quantitative RT-PCR analysis of mRNA expression levels of the inflammatory marker NF-κB p65 (RelA) in the glioblastoma SF-767 cell line treated with TMZ at 50 µM, 100 µM, and 150 µM. The GAPDH

Table 1.
MCODE components of NF-κB p65 (RelA) and TNFα, ranked by p-value, as the functional descriptions of the corresponding component network plots.
Effects of Jigsaw Learning Method on Students' Self-Efficacy and Motivation to Learn

Jigsaw learning, as a cooperative learning method, can improve academic skills, social competence, learning behavior, and motivation to learn, according to the results of several studies. However, other studies report different findings regarding the effect of the jigsaw learning method on self-efficacy. The purpose of this study is to examine the effects of the jigsaw learning method on self-efficacy and motivation to learn in psychology students at the Faculty of Medicine, Universitas Lambung Mangkurat. The study used an experimental method with a one-group pretest-posttest design. The measurements taken before and after the use of the jigsaw learning method were compared using a paired samples t-test. The results showed a difference in students' self-efficacy and motivation to learn before and after the treatment; therefore, it can be said that the jigsaw learning method had significant effects on self-efficacy and motivation to learn. The application of the jigsaw learning model in a classroom with a large number of students is discussed in this study.

Introduction
A teacher has an essential role in making the university teaching and learning process effective and beneficial for the psychological condition of the students. Good teachers are expected to be aware of their teaching-learning process's effectiveness in improving their students' academic achievement and psychological condition, as well as helping them succeed in understanding the taught materials. One learning process that is considered useful in fostering students' independence in learning, and known for having many positive effects on the psychological condition of learners, is the cooperative learning approach.
Johnson (1986) stated that the cooperative learning method helps learners acquire critical thinking skills because it creates a situation in which learners must explain, discuss various perspectives, and develop a greater understanding of the material they learn. In cooperative learning, learners need elaborative thinking to exchange information. Previous studies, such as that of Moskowitz, Malvin, Schaeffer, and Schaps (1985), showed that cooperative learning had an impact on the development of academic and social competence. More recent studies concluded that cooperative learning with the jigsaw technique affected the academic achievement of students at the University of Nigeria (Mari and Gumel, 2014) and contributed to motivation and cognitive activities (Hänze and Berger, 2007). Although some previous studies showed positive impacts of cooperative learning on a person's behavior, cooperative learning did not necessarily bring a significant impact on one's psychological condition, namely perceived self-efficacy. On the contrary, Şengül and Katranci (2014) found that cooperative learning with the jigsaw method, applied to students in an experimental pre-test and post-test design, showed no significant difference in the level of perceived self-efficacy in mathematics. This result was also consistent with the study by Mari and Gumel (2014); in addition to examining the differences in student achievement between students taught with the jigsaw cooperative learning method and those taught with the traditional method, they also examined differences in self-efficacy. Their results on self-efficacy did not reveal any difference between the two learning methods.
Differences in the results of previous studies, especially regarding students' psychological conditions such as self-efficacy under the jigsaw learning technique, call for reexamination, particularly at the Department of Psychology, Faculty of Medicine, Lambung Mangkurat University, which is seeking an effective learning model for large classes of more than 30 students as a first step toward implementing a competency-based curriculum (CBC). The jigsaw learning method is a new method that has never been used in previous lecturing processes at the Department of Psychology, so a study testing the effectiveness of the jigsaw learning method is required as a foundation for further learning processes, especially since students' self-efficacy and motivation to learn cannot be separated. Bandura (1997) defined self-efficacy as a person's belief in his or her capacity to organize and implement measures to achieve set goals, and to assess their level and strength across activities and contexts. Self-efficacy, as defined by Friedman and Schustack (2008), is an important cognitive element described as the expectation or belief (hope) about how well a person can perform a behavior in a given situation. A positive self-efficacy is the belief of being able to perform the behavior. In the absence of self-efficacy, which is a highly situational belief, a person may have no desire to perform a behavior. This indicates that self-efficacy has a close relationship with one's motivation to learn. Pintrich and Schunk (1996) defined motivation as the process whereby goal-directed activity is instigated and sustained.

Method

Subjects for this study were 81 first-semester students in the Department of Psychology, Faculty of Medicine, Universitas Lambung Mangkurat, who took General Psychology in the first semester.
There were three variables in this study: (1) the jigsaw learning method; (2) self-efficacy, a person's belief in his or her ability to understand lessons and pass the exam; and (3) motivation to learn, the motivation to learn the material of General Psychology. Data were collected with self-efficacy and motivation-to-learn scales. The study used an experimental one-group pretest-posttest design. At the first meeting, we informed the students that there would be group discussions starting from the ninth meeting (after midterms). Students were divided into seven groups of 12 to 13 people per group. Before the treatment, at the ninth meeting, the subjects' self-efficacy and motivation to learn were measured. The treatment, applying the jigsaw model of cooperative learning, was then given to the groups of study subjects. After the jigsaw learning model had been applied for seven meetings, at the fifteenth meeting (the final class meeting) the subjects were given the scales again to measure their self-efficacy and motivation to learn. The subjects' self-efficacy and motivation-to-learn scores from the two measurements were compared and tested for differences. Data were analyzed using the paired samples t-test. The self-efficacy scale was constructed around the aspects of magnitude, generality, and strength based on Bandura (1997). The motivation-to-learn scale was based on the aspects stated by McCown, Driscoll, and Roop (1995): the desire and initiative to learn, involvement in assignment completion, and commitment to continue learning. Scale trials were conducted among 29 third-semester students in the Department of Psychology. At the beginning of this study, we divided students into home groups. One student from each home group was then selected to be a member of an expert group, namely one of the groups (one to seven) discussing the schools in psychology.
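The paper does not report which reliability statistic was computed during the scale trials, but for Likert-type scales such as these a common internal-consistency check is Cronbach's alpha. A minimal sketch in Python; the function name and all item scores below are hypothetical, not the study's data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of items; each item is a list of
    scores, one per respondent, with all items of equal length."""
    k = len(items)
    sum_item_vars = sum(pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return (k / (k - 1)) * (1 - sum_item_vars / pvariance(totals))

# Hypothetical responses of 5 trial participants to 3 Likert items
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)  # values near 1 indicate high internal consistency
```

Items that all rank respondents identically yield an alpha of exactly 1.0; in practice a value above roughly 0.7 is usually taken as acceptable for a trial scale.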
These expert groups discussed the schools of structuralism, functionalism, gestalt, psychoanalysis, behaviorism, cognitivism, and humanism, respectively. The division of the groups was made at the first meeting by random drawing. Each member of the expert groups was then asked to study the material and prepare an individual presentation, scheduled after the midterms. Members of the expert groups would present their knowledge of the material related to their expert group to their original home group. The students were asked to discuss actively and ask about the things important to them. After the discussion, the teacher provided feedback to build a common perception among the students and conducted a quiz as an evaluation of that day's learning process. The students were also given the opportunity to assess the members of the expert groups who explained the materials to them that day, as input for the teacher's assessment. These jigsaw learning procedures were carried out for seven meetings with different materials. At the last meeting (the fifteenth meeting), the 81 respondents' self-efficacy and motivation to learn were measured again with the same scales.

Results

The measurements of the subjects' self-efficacy and motivation to learn before and after the experiment were compared and tested for differences with the paired samples t-test. The analysis of the self-efficacy variable, comparing pre-test and post-test scores, yielded t = -3.170 with p = 0.002 (p < 0.05, one-tailed). These results indicate that the jigsaw learning method significantly increased self-efficacy. The analysis of the motivation-to-learn variable, comparing pre-test and post-test scores, yielded t = -2.158 with p = 0.034 (p < 0.05), indicating that the jigsaw learning method also significantly increased motivation to learn.
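The paired samples t-test used above compares each student's pre-test and post-test scores on the same scale; a negative t statistic indicates that post-test scores exceeded pre-test scores. A minimal sketch of the computation in Python (the eight score pairs below are invented for illustration, not the study's data):

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired-samples t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d holds the pre-minus-post differences."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical self-efficacy scores for 8 students
pre  = [52, 48, 55, 60, 47, 51, 58, 50]
post = [56, 50, 57, 63, 52, 54, 60, 55]
t, df = paired_t(pre, post)  # t is negative here because scores increased
```

In practice the p-value would be looked up from a t distribution with n - 1 degrees of freedom, e.g. via scipy.stats.ttest_rel.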
In addition to examining the effects of the jigsaw learning method, the researcher also studied the relationship between self-efficacy and motivation to learn. The data analysis showed a correlation of 0.624 between self-efficacy and motivation to learn, with a significance level below 0.05. A description of the findings is listed in Table 1.

Discussion

Studies on self-efficacy have been carried out widely, covering, for example, the difference between self-concept and self-efficacy (Bong and Skaalvik, 2003), self-efficacy as a mediator in the relationship between self-oriented perfectionism and academic procrastination (Seo, 2008), and the effects of competition on students' self-efficacy in vicarious learning (Chan and Lam, 2008). Studies on motivation have also been carried out, such as on the relationship between motivation and academic achievement (Eppler and Harju, 1997) and on academic motivation in terms of self-efficacy, task value, achievement goal orientations, and attributional beliefs (Bong, 2004). However, studies relating self-efficacy to the use of the jigsaw learning method have produced inconsistent results. The study conducted by Araban, Zainalipour, Saad, Javdan, Sezide, and Sajjadi (2012) showed that the jigsaw learning method benefited self-efficacy, while the studies conducted by Şengül and Katranci (2014) and Mari and Gumel (2014) showed the opposite. The present results add to the evidence that self-efficacy and motivation to learn can be significantly influenced (p < 0.05) by the jigsaw model of cooperative learning. The researcher also performed a correlation analysis on the variables of self-efficacy and motivation to learn, because the relationship between these variables cannot be separated in shaping student behavior.
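The correlation of 0.624 reported here is presumably a Pearson product-moment coefficient, computed from each student's pair of scale scores. Its computation can be sketched as follows (the score data are hypothetical):

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two
    equal-length lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical self-efficacy and motivation-to-learn scores for 8 students
efficacy   = [52, 48, 55, 60, 47, 51, 58, 50]
motivation = [40, 38, 42, 44, 36, 41, 43, 37]
r = pearson_r(efficacy, motivation)  # between -1 and 1; positive here
```

A value of 0.624 would indicate a moderately strong positive linear relationship: students with higher self-efficacy tend to report higher motivation to learn.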
This is in line with Hergenhahn (2010), who stated that "people who consider their capability pretty high will try harder, achieve more, and be more persistent in performing tasks compared to those who consider their capability low". The Pearson correlation test in this study indicated a relationship between the self-efficacy and motivation to learn of psychology students. The success of the self-efficacy and motivation-to-learn testing lay in following the correct procedures of the jigsaw learning model. However, one step, forming groups of 5-6 persons each, could not be met because of the number of psychology students in one class. This suggests that the jigsaw learning method can also be applied to a class with a large number of members in each discussion group. Another key to the success of the jigsaw learning process in affecting self-efficacy and motivation to learn was the quiz at the end of the lectures, as described by Aronson (2008). He held that the last step in the use of the jigsaw learning model is a quiz or test, needed to avoid making the students feel that their efforts were in vain. The students not only had discussions within the expert groups but also received feedback and evaluation after the discussion process was completed. The members of the expert groups were also assessed by the members of their home groups for their efforts in describing and explaining the schools in psychology. The selected students in the expert groups would try to understand and explain the materials to their home group properly, and the other students, as participants in the discussion, would also try to achieve a good evaluation of their learning outcomes. This effort is called motivation to learn, and it forms the basis for increasing their self-efficacy.
The modest average gains in self-efficacy (3.81 points) and motivation to learn (2.34 points) are likely related to a limitation of this study, namely the large number of students engaged in the jigsaw learning model. Larger classes, with too many members in each group, could interfere with the discussion activity; as a result, not all students in the home groups were actively involved in the discussion. Remeasurement of the self-efficacy and motivation-to-learn variables in further studies is required to see whether both variables remain stable after the application of the jigsaw learning model ends and the traditional learning model is reapplied.
Tissue Degeneration following Loss of Schistosoma mansoni cbp1 Is Associated with Increased Stem Cell Proliferation and Parasite Death In Vivo

Schistosomiasis is second only to malaria in terms of global impact among diseases caused by parasites. A striking feature of schistosomes is their ability to thrive in their hosts for decades. We have previously demonstrated that stem cells, called neoblasts, promote homeostatic tissue maintenance in adult schistosomes and suggested that these cells likely contribute to parasite longevity. Whether these schistosome neoblasts have functions independent of homeostatic tissue maintenance, for example in processes such as tissue regeneration following injury, remains unexplored. Here we characterize the schistosome CBP/p300 homolog, Sm-cbp1. We found that depleting cbp1 transcript levels with RNA interference (RNAi) resulted in increased neoblast proliferation and cell death, eventually leading to organ degeneration. Based on these observations, we speculated that this increased rate of neoblast proliferation may be a response to mitigate tissue damage due to increased cell death. We therefore tested whether mechanical injury was sufficient to stimulate neoblast proliferation. We found that mechanical injury induced both cell death and neoblast proliferation at wound sites, suggesting that schistosome neoblasts are capable of mounting proliferative responses to injury. Furthermore, we observed that the health of cbp1(RNAi) parasites progressively declined during the course of our in vitro experiments. To determine the fate of cbp1(RNAi) parasites in the context of a mammalian host, we coupled RNAi with an established technique to transplant schistosomes into the mesenteric veins of uninfected mice. We found that transplanted cbp1(RNAi) parasites were cleared from the vasculature of recipient mice and were incapable of inducing measurable pathology in their recipient hosts.
Together, our data suggest that injury is sufficient to induce neoblast proliferation and that cbp1 is essential for parasite survival in vivo. These studies present a new methodology to study schistosome gene function in vivo and highlight a potential role for schistosome neoblasts in promoting tissue repair following injury.

Introduction

Schistosomes infect over 200 million people and are a major cause of morbidity in the developing world. The primary driver of this morbidity is the prodigious egg production of these parasites, which can lay several hundred eggs every day while living in the vasculature of their hosts [1]. A large fraction of these eggs are swept into the circulation and become lodged in host organs (such as the liver and bladder), leading to inflammatory responses that can compromise organ function [2]. The pathological consequences of schistosome egg production are compounded by the fact that schistosomes can survive and produce eggs for decades inside their human hosts [1,3]. Understanding the developmental forces that promote parasite longevity is essential for understanding the chronic nature of this disease. Schistosomes possess a population of somatic stem cells similar to the neoblasts found in free-living flatworms (e.g., freshwater planarians) [3,4]. In schistosomes, these neoblast-like cells appear to represent the only proliferative somatic cell type [4] and support the homeostatic renewal of tissues such as the intestine [4] and tegument [5]. Together, these data suggest that schistosome neoblasts are likely critical for long-term parasite survival in their hosts. What is not clear is whether neoblasts serve other important functions in these parasites. In free-living planarians, neoblasts are essential for both homeostatic tissue maintenance and tissue regeneration [6,7]. Following amputation, there is a burst in planarian neoblast proliferation, which fuels the regeneration of damaged and missing tissues [8,9].
Unlike planarians, schistosomes live exclusively in the vasculature of mammalian hosts and are unlikely to face the same types of mechanical insults (e.g., amputation) that planarians do [3]. Therefore, whether schistosome neoblasts are capable of interpreting injury signals and modulating their behavior to repair damage is not clear. However, since schistosomes are likely subjected to a myriad of immunological and chemical insults inside their mammalian host, it is possible that neoblasts could possess the capacity to respond to various types of injury. Thus, understanding how parasites respond to injury, and the role of neoblasts in tissue repair, would provide important new insights into the mechanisms that support parasite longevity in vivo. During the course of a systematic effort to identify factors with the potential to regulate schistosome neoblast function we characterized Sm-cbp1 (for brevity, we will refer to this gene as cbp1), a gene that encodes a homolog of the mammalian CBP/p300 family of proteins [10]. In mammals, these proteins serve as transcriptional co-activators that possess histone acetyltransferase (HAT) activity [11]. In schistosomes, cbp1 was previously demonstrated to act as a transcriptional co-activator in vitro [10] and suggested to regulate genes important for schistosome egg production via its HAT activity [12]. Here we show that abrogation of cbp1 function leads to simultaneous increases in cell death and neoblast proliferation. Based on our observation that physical injury similarly induces parasite cell death and neoblast proliferation, we suggest that increases in neoblast proliferation following cbp1(RNAi) is a strategy by the parasite to cope with cell death-mediated tissue damage. In addition, we report a novel application of existing techniques to examine adult schistosome gene function in vivo and show that cbp1 is essential for parasite survival in mice. 
These data suggest an important function for cbp1 in parasite survival and highlight a potential role for neoblasts in regenerative processes in schistosomes.

Depletion of cbp1 levels by RNA interference (RNAi) results in increased neoblast proliferation

Using whole-mount in situ hybridization (WISH), we found that cbp1 was expressed in adult parasites in a variety of cells throughout the worm's parenchyma and in cells within the male testes and female ovaries (Fig 1A and 1B). To characterize how broadly cbp1 was expressed in the parenchyma, we performed fluorescence in situ hybridization (FISH) with two markers of somatic cells residing in the parenchyma: Histone H2B to mark neoblasts [4] and tsp-2 to label tegument-associated cells [5]. In addition to being expressed in both Histone H2B+ and tsp-2+ cells, we weakly detected cbp1 transcripts in most cells within the schistosome parenchyma (Fig 1C and 1D). While we cannot conclude that cbp1 is expressed in every cell in the worm, our data suggest this gene is expressed in a large number of schistosome cell types. To explore a role for cbp1 in regulating schistosome stem cells, we performed RNAi experiments. In comparison to controls, depletion of cbp1 mRNA levels (S1A and S1B Fig) led to a dramatic (Fig 1E and 1F) and statistically significant (Fig 1G and 1H) increase in the number of neoblasts that incorporated the thymidine analog EdU. Similar increases in cell proliferation were observed with dsRNAs targeting two distinct regions of the cbp1 gene, indicating these effects are specific to the reduction of cbp1 levels (S1A and S1C Fig) and not due to off-target effects. To explore this observation further, we also performed WISH with the neoblast markers Histone H2B and fgfrA [4] (Fig 1I and 1J) and FISH with Histone H2B (Fig 1K). Similar to our observations with EdU incorporation, we noted an increase in the number of cells expressing neoblast markers (Fig 1I-1K).
Together, these data suggest that loss of cbp1 increases the number of proliferative neoblasts. Two simple stem cell behaviors could explain our observations following cbp1(RNAi). First, loss of cbp1 could block the ability of neoblasts to differentiate, effectively locking the cells in a proliferative state. This type of behavior is observed following perturbations that block planarian neoblast differentiation [13]. Alternatively, the cells could maintain the capacity for differentiation while the size of the stem cell pool is expanded via an increased rate of cell proliferation. To distinguish between these possibilities, we performed WISH for the neoblast differentiation progeny marker tsp-2. Previously we demonstrated that tsp-2 is expressed in a tegument-associated cell population that is the primary differentiation progeny of schistosome neoblasts [5]. Since tsp-2+ cells are short-lived and rapidly renewed by neoblasts [5], they are a sensitive measure of the capacity of neoblasts to differentiate. Consistent with neoblasts in cbp1(RNAi) parasites maintaining the ability to differentiate, we observed substantial increases in the number of tsp-2+ cells in cbp1(RNAi) parasites (Fig 1L). Together, these data suggest that loss of cbp1 expands the size of the neoblast pool, resulting in an increased rate of production of at least one differentiated cell type.

Local increases in neoblast proliferation accompany degeneration of the esophageal gland

The schistosome esophageal glands are located anterior to the intestine (Fig 2A) and are thought to secrete factors that aid in the digestion of blood cells [14,15].
By both EdU labeling (S1C Fig) and FISH for Histone H2B (Fig 2B), we noted a focus of proliferative neoblasts in the vicinity of the esophageal glands in cbp1(RNAi) animals. We explored this observation more closely by double FISH for Histone H2B and the esophageal gland marker meg-4 [16,17]. Consistent with our prediction, at D11 of RNAi treatment, masses of neoblasts were observed surrounding the esophageal gland of cbp1(RNAi) parasites (Fig 2C). In some cases we observed "holes" in the esophageal gland that were occupied by Histone H2B+ neoblasts (Fig 2C, top cbp1(RNAi) panels). In the most severe cases, the esophageal glands were degenerated and only small numbers of meg-4+ cells remained (Fig 2C, bottom cbp1(RNAi) panels). To explore the degeneration of the esophageal glands in more detail, we performed time course analyses examining the expression of meg-4 by WISH. We observed a progressive degeneration of the esophageal gland in cbp1(RNAi) parasites, and by D18 cbp1(RNAi) parasites possessed few traces of meg-4+ gland cells (Fig 2D and 2E). We next explored the relationship between neoblast proliferation and the degeneration of the esophageal glands in cbp1(RNAi) parasites. In principle, the observed masses of neoblasts (Fig 2B and 2C) could either be a cause of esophageal gland degeneration, an effect of this degeneration, or unrelated to the disappearance of the gland. Given how prominent the masses of proliferative neoblasts surrounding the gland are (Fig 2B and 2C), we believe the last of these possibilities is unlikely. Therefore, to determine if neoblast proliferation is a cause or an effect of gland degeneration, we treated parasites with γ-irradiation, which rapidly depletes neoblasts [4], and examined meg-4 expression by FISH. In control(RNAi) parasites, neoblast depletion had no observable effect on the morphology of the esophageal gland at D11 (Fig 2F).
In contrast to control(RNAi) parasites, both irradiated and unirradiated cbp1(RNAi) parasites displayed extensive degeneration of the esophageal glands (Fig 2F), suggesting that neoblast overproliferation is not likely a direct cause of gland loss. Although we observed substantial gland degeneration in both irradiated and unirradiated cbp1(RNAi) parasites, we noted more scattered meg-4+ cells in unirradiated cbp1(RNAi) parasites, where neoblasts were present (arrowheads, Fig 2F). Based on this observation, we speculate that many of these remaining meg-4+ cells in unirradiated cbp1(RNAi) parasites represent newly born differentiation progeny of the neoblasts.

cbp1(RNAi) results in elevations of cell death

Our data indicated that between D8 and D14 a large fraction of parasites had esophageal glands in intermediate stages of degeneration (Fig 2D and 2E). To determine if programmed cell death was playing a role in this degeneration, we developed a whole-mount assay for terminal deoxynucleotidyl transferase dUTP nick-end labeling (TUNEL). TUNEL is a methodology to detect double-stranded breaks in the DNA of cells undergoing programmed cell death [18], and it has been used successfully to detect apoptosis both in free-living flatworms [19] and in sectioned adult female schistosomes [20]. Using this assay, we determined that at D10, 28% of cbp1(RNAi) parasites had large clusters of TUNEL+ cells within their esophageal glands (Fig 3A-3C). When glands were visualized with the lectin PNA [21], large pockets of TUNEL+ cells were observed neither in cbp1(RNAi) parasites with largely intact glands nor in parasites with severely degenerated glands. Rather, the presence of large numbers of TUNEL+ cells was restricted to glands that appeared to be in the early to intermediate stages of degeneration. These data suggest that programmed cell death is a likely driver of esophageal gland cell loss.
In the esophageal glands of cbp1(RNAi) parasites, we noted elevations in cell proliferation and cell death at roughly similar time points after beginning dsRNA treatment (Figs 2C and 3C). Since we also noted increases in cell proliferation throughout the bodies of cbp1(RNAi) parasites (Fig 1F-1H), we explored whether cell death was similarly elevated in the trunks and tails of cbp1(RNAi) parasites (Fig 3A). Although we did not note measurable changes by D4 of RNAi, at both D10 and D14 we observed statistically significant increases in TUNEL+ cells in cbp1(RNAi) parasites (Fig 3D-3F); by D14, cbp1 RNAi treatment on average resulted in 4.6- and 4.8-fold elevations in TUNEL+ nuclei in the trunks and tails of male parasites, respectively (Fig 3D and 3E). Interestingly, at both D10 and D14 the levels of cell death varied considerably among individual cbp1(RNAi) parasites: some cbp1(RNAi) parasites possessed levels of TUNEL+ nuclei comparable to controls, whereas in other parasites the number of TUNEL+ nuclei was dramatically elevated (Fig 3D and 3E). This observation mirrors what we observed in the esophageal glands, where large numbers of dying cells were present only in the subset of parasites whose glands were in the process of degenerating (Fig 3C). Therefore, the elevations in cell death observed in the trunks and tails may similarly reflect the sudden degeneration of one or more tissue types in cbp1(RNAi) worms. Unfortunately, given the paucity of cell type-specific markers compatible with TUNEL staining, it is presently not possible to determine whether this elevated rate of cell death was restricted to a specific cell or tissue type or whether all tissues were undergoing similar levels of cell death. Nevertheless, our data suggest that, in addition to being required for preventing cell death and degeneration of the esophageal gland cells, cbp1 is important for maintaining normal levels of cell death in other tissues within the parasite.
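The fold elevations quoted above are simply ratios of group means: a 4.6-fold elevation means the average TUNEL+ count in cbp1(RNAi) parasites was 4.6 times the control average. A sketch of the arithmetic with invented counts (not the paper's data):

```python
from statistics import mean

def fold_change(treated, control):
    """Fold elevation: ratio of the treated group mean to the control mean."""
    return mean(treated) / mean(control)

# Hypothetical TUNEL+ nuclei counts per parasite trunk
control_counts   = [10, 12, 8, 10]   # mean = 10
cbp1_rnai_counts = [46, 50, 40, 48]  # mean = 46
fc = fold_change(cbp1_rnai_counts, control_counts)  # 4.6 for these numbers
```

Because fold change summarizes only the means, the considerable parasite-to-parasite variability the authors describe would be reported separately, e.g. with the per-parasite scatter shown in Fig 3D and 3E.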
Parasite injury induces cell death and subsequent elevations in cell proliferation

In diverse organisms (e.g., Hydra [22], Drosophila [23], planarians [19]), tissue injury induces apoptosis and precedes increases in stem cell proliferation [24]. Therefore, one attractive model to explain the simultaneous elevations of both neoblast proliferation and cell death observed in cbp1(RNAi) parasites could be that cbp1 is required for the survival of various cell types in the worm (e.g., esophageal gland cells) and that death of these cells induces neoblast proliferation. Alternatively, cbp1 could be acting in some cells (e.g., esophageal gland cells) to promote cell survival and acting independently in neoblasts to repress proliferation. To indirectly distinguish between these possibilities, and to examine whether tissue injury can induce both cell death and neoblast proliferation in schistosomes, we physically injured male parasites. For these experiments, parasites were immobilized on an agarose pad and poked with a sharpened tungsten needle (Fig 4A). Consistent with this injury regime inducing tissue damage and subsequent cell death in the worm, we noted substantial numbers of TUNEL+ nuclei at wound sites 4 hours post-injury (Fig 4B). We next examined injured parasites with neoblast and cell proliferation markers 48-72 hours following injury. Consistent with injury inducing neoblast proliferation, we noted accumulations of EdU-incorporating cells (Fig 4C) and Histone H2B+ neoblasts (Fig 4D) surrounding wound sites at both 48 and 72 hours post-injury. Similarly, by immunofluorescence we noted increases in cells positive for the M-phase-specific marker Phospho-Histone H3 at sites adjacent to wounds (Fig 4E).
Interestingly, examination of parasites at 48 hours post-injury by TUNEL staining found that, although increases in cell death could often still be detected at the wound site, rates of cell death were depressed in the tissues immediately adjacent to wounds relative to the rest of the parasite (Fig 4E). This suggests that injury may repress physiological rates of cell death in tissues near wound sites; such repression may serve as a mechanism to preserve the function of tissues undergoing repair. Taken together, our data suggest that injury, and perhaps cell death, is capable of stimulating neoblast proliferation. Furthermore, these data suggest that schistosomes may be capable of utilizing neoblast-mediated tissue renewal to fuel tissue repair following injury.

cbp1 is essential for schistosome survival in vivo

Presumably due to elevations in cell death and declining tissue function, we observed that cbp1(RNAi) parasites became progressively sicker during in vitro culture (Fig 5A, S1 Movie). By D8, male and female cbp1(RNAi) parasites became unpaired and lost the ability to attach to the surface of the tissue culture dish (Fig 5A). By D15, parasite movement became uncoordinated, and often the heads of male worms curled ventrally (Fig 5A, S1 Movie). At D19, movement in cbp1(RNAi) parasites was limited to irregular and jerky motions (S1 Movie). The progressive decline in the vitality of the parasites was not likely due to elevations in cell proliferation, since irradiated cbp1(RNAi) parasites were indistinguishable from unirradiated cbp1(RNAi) parasites with regard to male-female pairing and attachment to the substrate (S1 Movie). Given the complexity of the schistosome lifecycle and the lack of robust transgenic tools, few studies to date have examined adult schistosome gene function in the context of a mammalian host [25].
To explore whether cbp1 is essential for parasite survival in vivo, we coupled in vitro RNAi treatment with a procedure pioneered by Cioli in the 1970s for the surgical transplantation of schistosomes into the mesenteric veins of rodent hosts [26]. For these experiments, 4- to 5-week-old parasites were recovered from mice, treated for 4 days with control or cbp1 dsRNA in vitro, and then surgically transplanted into the mesenteric veins of recipient mice (Fig 5B). At D26 post-transplantation, we euthanized the mice, performed hepatic portal vein perfusion, and measured both the percent recovery of transplanted parasites and the extent of schistosome-induced host pathology. In mice that received control(RNAi) worms, we noted hepatosplenomegaly consistent with the transplanted parasites establishing a productive infection (Fig 5C). Following hepatic portal vein perfusion, we recovered about 70% of the male control(RNAi) parasites originally transplanted (Fig 5D). In contrast to controls, mice receiving cbp1(RNAi) parasites did not display hepatosplenomegaly (Fig 5C), and we failed to recover any male parasites (Fig 5D). We also noted obvious signs of egg-induced liver pathology in control(RNAi) recipient mice (Fig 5E); no evidence of egg-induced granuloma formation was observed in cbp1(RNAi) recipient mice (Fig 5E). Examination of histological sections from the livers of control and cbp1(RNAi) recipient mice confirmed that control parasites were capable of generating egg-induced pathology, whereas no egg-induced inflammation was observed in cbp1(RNAi) recipient mice (Fig 5F and 5G). Although we detected no signs of egg-induced inflammation, we did note large masses located at the periphery of the livers of cbp1(RNAi) recipient mice (Fig 5E, arrowhead). Examination of these livers in histological sections revealed these masses to be cbp1(RNAi) parasites trapped in the livers of these mice (Fig 5H).
Observing these sections in more detail, we identified worms at several stages of deterioration: some parasites were relatively intact with an uninterrupted tegument (Fig 5H, left panel), whereas others were severely degenerated, with virtually no organized schistosome tissues (Fig 5H, right panel). The composition of host cells surrounding the parasite, and the apparent maturity of the immunological response to the worms, correlated with the structural integrity of the worms. More intact worms were surrounded by large numbers of neutrophils and lymphocytes (indicative of an early host response), whereas more degenerated worms were found in lesions encased in fibroblasts (indicative of a mature host response to the parasites). These data suggest that cbp1(RNAi) parasites are incapable of establishing an infection. Based on what we observe in vitro (Fig 5A and S1 Movie), we hypothesize that within 4-5 days following transplantation these parasites lose the ability to attach to the host endothelium and are washed into the liver. In the liver, the health of the parasites continues to decline and they become susceptible to killing by the host immune system, perhaps in a fashion similar to schistosomes treated with praziquantel in vivo [27]. Based on these data, we suggest that cbp1 is essential for schistosome survival in vivo.

Discussion

Aside from supporting new cell birth during the physiological turnover of tissues (e.g., the tegument [5]), we know relatively little about the roles that neoblasts play in the biology of adult schistosomes. Here, we report that reductions in cbp1 levels result in simultaneous elevations of both cell proliferation and cell death. The esophageal glands were emblematic of this: apoptosis-driven cell death was accompanied by massive accumulations of proliferative neoblasts.
These observations suggested that neoblasts might be equipped to respond to lost or damaged tissues, an observation we confirmed by demonstrating that physical wounding induced proliferative neoblasts to accumulate around wound sites. Based on these data, we suggest a model in which reduction of cbp1 levels leads to cell death and tissue loss throughout the parasite (Fig 6). This cell loss is (directly or indirectly) sensed by neoblasts, resulting in an increased rate of neoblast proliferation. Since we observe large increases in the number of cells expressing the neoblast progeny marker tsp-2, it is likely that the neoblasts then differentiate to restore lost cells. Because cbp1 levels remain depressed due to the effects of RNAi, these newly differentiated cells die, inducing more neoblast proliferation. Tissue degeneration and the inability of neoblasts to restore tissue function eventually result in parasite death. While physical injury induces schistosome neoblast proliferation, the precise roles that apoptosis and other types of cell death (e.g., necrosis) play in this process are not known. In the cnidarian Hydra, programmed cell death releases Wnt molecules that are required to induce stem cell proliferation and regeneration following amputation [22]. In Drosophila, genetic induction of apoptosis stimulates proliferation of intestinal stem cells [23]. In planarians, injury induces apoptosis, although the requirement for cell death in fueling regeneration is not clear [19]. Therefore, dying cells in schistosomes may directly signal to induce neoblast proliferation.

[Figure legend fragment: animals at 48 hours post-injury. Mitotic neoblasts 48 hours post-injury are clustered at wound sites (n = 28/29 parasites), whereas the number of TUNEL+ cells is reduced in tissues adjacent to wound sites (n = 24/26 male parasites). Arrowhead indicates approximate site of injury. Scale bars: 100 μm. D and E are tiled images from multiple confocal stacks.]
[Fig 5 legend fragment: Quantification of the percent recovery of control and cbp1 RNAi-treated parasites from mice. Each dot represents percent recovery from an individual mouse. Two separate sets of transplantations were performed, with n = 5 mice for controls and n = 8 mice for cbp1(RNAi); p < 0.0001, t-test. Representative livers from mice transplanted with control or cbp1 RNAi-treated parasites. Livers from mice that received control(RNAi) parasites were enlarged and contained large numbers of granulomas. Livers from mice receiving cbp1(RNAi) parasites were normal sized and contained very few granulomas. A few large granuloma-like structures were often found at the periphery of livers from mice that received cbp1(RNAi) worms (arrowhead in inset). Plot depicting the number of schistosome eggs per liver section from mice transplanted with control or cbp1 RNAi-treated parasites. Each dot represents the mean number of eggs counted from two liver sections from an individual mouse; n = 4 livers for both control and cbp1 RNAi treatment groups. H&E staining of liver tissue from mice transplanted with control or cbp1 RNAi-treated worms. Arrowheads point to eggs inside granulomas. Large granuloma-like masses in livers of mice from the cbp1(RNAi) treatment group correspond to worms at various stages of degeneration. Left panel shows a male worm with a clearly identifiable tegument and intestine (labeled teg and gut in inset, respectively) surrounded by neutrophils and lymphocytes. As panels move to the right, worms appear to become structurally compromised and lesions contain more host fibroblasts, suggesting these lesions possess a more mature immune response to the parasites. Scale bars: A, E: 1 mm; G-H: 100 μm.]

Alternatively, a myriad of other factors (e.g., loss of tissue integrity and/or loss of cell-cell contacts) may stimulate neoblast proliferation. As tools to study schistosome cell death mature, it should be possible in the future to determine precisely how apoptosis influences neoblast behavior.
Mammalian cbp1 homologs serve as transcriptional co-activators linking transcription factors to the core transcriptional machinery [11]. These mammalian cbp1 relatives also possess acetyltransferase activity and can acetylate a variety of substrates, including histones and non-histone proteins [11]. Whether these activities of cbp1 are important for maintaining schistosome cellular viability is not presently clear. However, previous studies have shown that pharmacological inhibition of histone deacetylase activity induces apoptosis in larval schistosomes [28,29]. Thus, not unexpectedly, maintaining normal chromatin structure is likely important for schistosome cellular survival. Since cbp1 possesses histone acetyltransferase activity in vitro [10], the cell death induced by cbp1 depletion may be due to alterations in the chromatin landscape of certain cell types. Further exploration of chromatin-modifying enzymes may represent fertile ground for the development of novel therapeutics. Here, we combined a previously described method for the surgical transplantation of schistosomes with RNA interference to demonstrate that cbp1 is required for parasite survival in vivo. Not only do these studies validate the potential to target cbp1 therapeutically, they also provide a novel methodology to explore the functions of schistosome genes in vivo. A potentially useful application of this approach is for studies of schistosome reproduction. Since schistosome reproduction ceases within one week of in vitro culture [20], this approach could help identify genes required for the development and maintenance of the schistosome reproductive system. One potential limitation of this approach is the persistence of the effects of RNAi. Although the effects of RNAi have been reported to last for several weeks in larval schistosomes in vitro [30], how this translates to older parasites in vivo is not known.
However, as tools to manipulate the schistosome genome (i.e., transgenic expression and genome editing) continue to mature, we suggest that surgical transplantation could become an invaluable tool to explore gene function in vivo. Our observation that injury is met with increases in neoblast proliferation indicates that schistosomes may possess the capacity to regenerate following certain types of injury in vivo. The regenerative potential of schistosomes has not been extensively characterized, and conflicting reports exist. Senft and Weller reported that schistosomes amputated during recovery from mice were capable of regenerating new tails in vitro [31]. However, this conflicts with another account in which in vitro-cultured worms were capable of rapidly healing wounds but incapable of regeneration [32]. Thus, the ability of schistosomes to perform whole-body regeneration (i.e., regenerating new heads and/or tails) is unresolved and may be a function of culture conditions and the nature of the injury. What is less controversial is the ability of schistosomes to repair tissues following in vivo exposure to sublethal doses of the anthelminthic drug praziquantel [33]. Thus, future studies exploring roles for neoblasts in tissue repair, following a variety of injuries (e.g., amputation and drug treatment) and in a variety of culture conditions, are necessary and could have important implications for understanding the longevity and resilience of these parasites in vivo.

Parasite Acquisition and Culture

Adult S. mansoni (6-8 weeks post-infection) were obtained from infected female mice by hepatic portal vein perfusion with 37°C DMEM (Sigma-Aldrich, St. Louis, MO) plus 10% serum (either Fetal Calf Serum or Horse Serum) and heparin. Parasites were cultured as previously described [5]. Unless otherwise noted, all experiments were performed with male parasites.

Molecular Biology

cDNAs used for in situ hybridization and RNA interference were cloned as previously described [34].
Quantitative PCR analyses were performed as previously described [5]. Oligonucleotide sequences are listed in S1 Table.

RNA interference, parasite labeling, and imaging

EdU labeling, whole-mount in situ hybridization, and fluorescence in situ hybridization analyses were performed as previously described [4,5]. For RNAi experiments, 5-10 freshly perfused male parasites (either as single worms or paired with females) were treated with 30 μg/ml dsRNA for 4 days in Basch Media 169 [35]. dsRNA was generated by in vitro transcription [4] and was replaced every day. As a negative control for RNAi, we used a non-specific dsRNA containing two bacterial genes [4]. Sequences used for dsRNA synthesis are listed in S2 Fig. For irradiation of RNAi-treated parasites, worms were exposed to 100 Gy of gamma irradiation using a J.L. Shepherd Mark I-30 Cs-137 source. Lectin labeling was performed as previously described [21]. For TUNEL labeling, parasites were fixed for 4 hours in 4% formaldehyde in PBS + 0.3% Triton X-100 (PBSTx), dehydrated in methanol, and stored at -20°C. Parasites were subsequently rehydrated with PBSTx, permeabilized with 20 μg/ml Proteinase K (Invitrogen, Carlsbad, CA) in PBSTx for 45 min, and post-fixed with 4% formaldehyde in PBSTx. Following fixation, parasites were processed for TUNEL labeling using the In situ BrdU-Red DNA Fragmentation (TUNEL) Assay Kit (Abcam). For this procedure, post-fixed worms were briefly incubated in the kit-provided "wash" buffer, incubated in "DNA labeling solution" (2 to 3 male worms per 50 μl) for 4 hours at 37°C, rinsed twice in PBSTx, blocked with "FISH Block" (0.1 M Tris pH 7.5, 0.15 M NaCl, and 0.1% Tween-20 with 5% Horse Serum and 0.5% Roche Western Blocking Reagent [36]), and incubated overnight in Anti-BrdU-Red antibody (1:20) in "rinse buffer". After several PBSTx washes, worms were either mounted on slides in Vectashield (Vector Labs, Burlingame, CA) or further processed for immunofluorescence or lectin labeling.
For immunofluorescence, permeabilized worms were blocked in FISH Block and incubated overnight at 4°C in Anti-Phospho-Histone H3 (Ser10) antibody (Rabbit mAb, D2C8, Cell Signaling) diluted 1:1000 in FISH Block. Following 6 x 1 hour washes in PBSTx, worms were incubated overnight at 4°C in a Goat anti-Mouse IgG secondary antibody conjugated to AlexaFluor 488 diluted in FISH Block (Thermo Fisher). Following several washes in PBSTx, parasites were mounted on slides in Vectashield. Confocal imaging of fluorescently labeled samples and brightfield imaging (i.e., whole-mount in situ hybridizations and histological sections) were performed using a Zeiss LSM700 Laser Scanning Confocal Microscope or a Zeiss AxioZoom V16 equipped with a transmitted-light base and a Zeiss AxioCam 105 Color camera, respectively. All images of fluorescently labeled samples represent maximum intensity projections. To perform counts of EdU+ and TUNEL+ cells, cells were manually counted in maximum intensity projections derived from confocal stacks; to normalize between samples, cell counts were divided by the total volume of the stack in μm³. All plots and statistical analyses were performed using GraphPad Prism.

Worm Injury

For injury, worms were gently pipetted onto the surface of a 35 mm Petri dish filled with solidified 4% agarose diluted in H₂O. After removal of excess liquid, worms were perforated with a sharpened tungsten needle. The impaled parasites were then carefully removed from the needle into fresh media using a pipette tip. As a control, "mock" injured parasites were similarly transferred to Petri dishes but were not injured; we observed no changes in cell death or cell proliferation in these parasites.

Surgical Transplantation of schistosomes

Methods for surgical transplantation of schistosomes are based on a procedure originally developed for hamsters [26].
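The per-volume normalization of cell counts described above is simple arithmetic; a minimal sketch is given below. The function name and all numbers are illustrative, not values from the study.

```python
# Minimal sketch of the cell-count normalization described above:
# raw EdU+ or TUNEL+ counts are divided by the confocal-stack volume (µm^3).
# The function name and all numbers are hypothetical illustrations.

def cells_per_cubic_micron(cell_count, x_um, y_um, z_um):
    """Return labeled-cell density (cells per µm^3) for a confocal stack."""
    volume_um3 = x_um * y_um * z_um
    return cell_count / volume_um3

# Example: 150 labeled cells counted in a 200 µm x 200 µm x 50 µm stack
density = cells_per_cubic_micron(150, 200.0, 200.0, 50.0)
```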
Four to five days prior to surgery, parasites 4-5 weeks post-infection were recovered from mice and treated with 30 μg/ml dsRNA for 4 days in Basch Media 169 [35] as previously described [4]. Media and dsRNA were changed daily. Before mice were anesthetized, 8 male parasites (either paired or unpaired with females, see below) were drawn into a 1 ml syringe, the syringe was fitted with a custom 25G extra-thin-wall hypodermic needle (Cadence, Cranston, RI), the air and all but ~300 μL of media were purged from the needle, and the syringe was placed needle-down in a test tube to settle the parasites to the bottom of the syringe. We attempted to inject male/female worm pairs, but it was not always clear if females were present in the gynecophoral canal. Therefore, each injection also included a few unpaired female parasites to ensure maximal potential for mating. Once the syringe was loaded with parasites, young male Swiss Webster mice (~25-30 g) were anesthetized with isoflurane using a vaporizer system equipped with both an induction chamber and a nose cone. The abdomens of anesthetized mice were shaved and the area was sterilized with three alternating scrubs of betadine and ethanol. A single longitudinal incision (~1.5 cm) centered on the navel was made to expose the intestines. A sterile piece of gauze with a 2 cm slit in the center was dampened with sterile saline and placed over the incision. The intestines were gently fed through the gauze to expose the large vein running along the cecum. The intestines were kept damp throughout the entire procedure with sterile saline. Making sure the bevel of the needle remained facing down, the worms were injected into the cecal vein. To avoid hemorrhage, a small piece of hemostatic gauze (Blood Stop) was placed over the injection site prior to removing the needle. As the needle was removed, gentle pressure was applied to the injection site.
Once bleeding stopped (~1-2 minutes), the hemostatic gauze was removed and the intestines were returned to the abdominal cavity. The cavity was filled with sterile saline, and the abdominal muscles and skin were sutured (Maxon Absorbable Sutures, Taper Point, Size 4-0, Needle V-20, ½ Circle). Following wound closure, mice received a single subcutaneous dose of buprenorphine for pain (30 μl of 1 mg/ml) and were allowed to recover on a warm heating pad. After transplant, needles were flushed with media to determine how many parasites had been injected into each mouse. Mice were group housed, and individual mice were tracked by marking their tails with a permanent marker. On day 26 post-transplantation, mice were sacrificed and perfused to recover parasites. Male and female parasites were counted, and livers were removed and fixed for 30-40 hours in 4% formaldehyde in PBS. The percent parasite recovery was determined by dividing the number of male parasites recovered following perfusion by the number of male worms transplanted. Counting male parasites was the most informative metric, since the initial number of female parasites was not accurately quantified (see above). Livers from individual mice were sectioned and processed for Haematoxylin and Eosin staining by the UT Southwestern Molecular Pathology Core.

Supporting Information

S1 Fig. cbp1(RNAi) treatment specifically reduces cbp1 transcript levels. (A) Cartoon of the cbp1 cDNA (top) and cDNA regions (in bp) targeted by two independent RNAi constructs (pJNC9.1 and pJC259.1). pJNC9.1 contains a cDNA fragment that spans from 2057 bp to 3060 bp of the cbp1 cDNA. pJC259.1 contains a cDNA fragment that spans from 4838 bp to 5015 bp and 5621 bp to 5839 bp of the cbp1 cDNA; this cDNA appears to be alternatively spliced relative to the cbp1 gene model. Full-length sequences of these cDNA fragments are found in S2 Fig. (B) Expression of cbp1 in control and cbp1(RNAi) parasites relative to a proteasome subunit (Smp_056500) as measured by qPCR.
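The percent-recovery metric used above (male worms recovered at perfusion divided by male worms transplanted) can be sketched as follows. The counts are hypothetical, not the study's data.

```python
# Hedged sketch of the percent-recovery calculation: only male worms are
# scored, since the number of transplanted females was not accurately known.
# All counts below are hypothetical.

def percent_recovery(males_recovered, males_transplanted):
    """Percent of transplanted male worms recovered at perfusion."""
    if males_transplanted <= 0:
        raise ValueError("at least one male worm must be transplanted")
    return 100.0 * males_recovered / males_transplanted

# Example: 6 of 8 transplanted male worms recovered
recovery = percent_recovery(6, 8)
```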
cbp1(RNAi) treatment using dsRNA produced from pJNC9.1 results in a statistically significant reduction in cbp1 mRNA levels (p < 0.025, t-test, n = 3 biological replicates from male parasites with their heads and testes removed). Similar levels of knockdown were observed with pJC259.1. Error bars represent 95% confidence intervals. (C) EdU labeling in control and cbp1(RNAi) parasites treated with dsRNA generated from pJC259.1 at D13 of RNAi. cbp1(RNAi) using dsRNA produced from pJC259.1 resulted in elevations in cell proliferation similar to RNAi treatment using dsRNA from pJNC9.1. Parasites were pulsed with EdU overnight prior to fixation. Arrowhead indicates approximate position of esophageal gland where we often noted large numbers of proliferative neoblasts.
Photometric behavior of Ryugu's NIR spectral parameters

Context. JAXA's Hayabusa2 mission rendezvoused with the asteroid Ryugu for 1.5 years to clarify the carbonaceous asteroids' record of Solar System origin and evolution.
Aims. We studied the photometric behavior of the spectral parameters characterizing the near-infrared (NIR) spectra of Ryugu provided by the Hayabusa2/NIRS3 instrument, that is to say the 1.9 µm reflectance, the 2.7 and 2.8 µm band depths (ascribed to phyllosilicates), and the NIR slope.
Methods. For each parameter, we applied the following empirical approach: (1) retrieval of the equigonal albedo by applying the Akimov disk function (this step was only performed for the reflectance photometric correction); (2) retrieval of the median spectral parameter value at each phase angle bin; and (3) retrieval of the phase function by a linear fit.
Results. Ryugu's phase function shows a steepness similar to that of Ceres, in agreement with the common taxonomy of the two asteroids. Band depths decrease with increasing phase angle: this trend is opposite to that observed on other asteroids explored by space missions and is ascribed to the very dark albedo. NIR and visible phase reddening are similar, contrary to other asteroids, where visible phase reddening is larger: this could be due to surface darkness or to particle smoothness. Albedo and band depths are globally uncorrelated, but locally anticorrelated. A correlation between darkening and reddening is observed.

Introduction

Ryugu is the near-Earth, C-type asteroid selected as the target of the JAXA/Hayabusa2 sample return mission (Watanabe et al. 2017; Tachibana 2021) to clarify the C-type asteroids' record of Solar System origin and evolution, as well as the link between these asteroids and the carbonaceous chondrites (i.e., the most primitive meteorite group).
Hayabusa2 rendezvoused with Ryugu for about 1.5 yr (i.e., from June 2018 to November 2019), observed the asteroid by means of an orbiter and two rovers, and performed two touchdown operations (TD-1 on 21 February 2019 and TD-2 on 11 July 2019) with related sampling, as well as an artificial impact experiment (5 April 2019, Arakawa et al. 2020). The mission determined that Ryugu is a rubble-pile asteroid, as suggested by its small size (mean radius ∼450 m, Watanabe et al. 2019; and equatorial radius ∼500 m, Sugita et al. 2019), low density (i.e., 1.19 ± 0.03 g cm−3, Watanabe et al. 2019), top shape with an equatorial ridge (Watanabe et al. 2019), and high number density of boulders larger than 20 m (Sugita et al. 2019). Similar properties have also been found for the asteroid Bennu, which is the target of the NASA/OSIRIS-REx sample return mission (Lauretta et al. 2015, 2021), suggesting a rubble-pile origin for this asteroid as well (e.g., Scheeres et al. 2019). Ryugu's optical as well as visible and near-infrared (NIR) spectral properties have been studied with the Optical Navigation Camera (ONC, Kameda et al. 2017) and the Near-Infrared Spectrometer (NIRS3, Iwata et al. 2017). Ryugu is one of the darkest planetary objects observed so far by space missions, with a geometric albedo of ∼0.043 (Sugita et al. 2019). It is weakly hydrated, as suggested by the shallow O-H stretching bands centered at 2.72 and 2.8 µm, respectively (Kitazato et al. 2019; Galiano et al. 2020). [A185, page 1 of 6. Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.] Ryugu's surface optical and physical properties have also been studied by analyzing its photometric behavior, that is the
reflectance trend as a function of illumination and observation angles. Tatsumi et al. (2020) studied Ryugu's global photometric properties by applying the Hapke model to combined ONC and ground-based data. They obtained an average geometric albedo of 0.04 and an average reflectance factor of 0.018 in the v band (centered at 0.55 µm). Moreover, they retrieved a visible phase reddening (i.e., an increase in the visible spectral slope with phase angle) of (2.0 ± 0.3) × 10−3 µm−1 deg−1 and found a general correlation between darkening and reddening (with a few exceptions). Pilorget et al. (2021) applied the Hapke model to NIRS3 data and found that Ryugu's photometric properties in the NIR range are quite constant across the surface. However, darker areas associated with a slightly deeper 2.72 µm band, as well as a few brighter areas, were observed. In this work, we study the photometric properties of Ryugu by applying a statistical, semiempirical model, as has already been done for other small bodies, such as Vesta (Longobardo et al. 2014), Lutetia (Longobardo et al. 2016), 67P/Churyumov-Gerasimenko (Longobardo et al. 2017), and Ceres (Longobardo et al. 2019). This approach provides information complementary to the Hapke model, such as the photometric behavior of spectral parameters (spectral slope and band depths) and the related scientific implications. Section 2 introduces the NIRS3 instrument and the dataset considered for this work. The studied spectral parameters are defined in Sect. 3. The photometric model is explained in Sect. 4, with results given in Sect. 5 and discussed in Sect. 6. Finally, conclusions are summarized in Sect. 7.

Data

We used the data provided by the NIRS3 point spectrometer (Iwata et al. 2017) acquired between the two touchdowns.
After the second touchdown, a new radiometric calibration was performed; therefore, we limited our analysis to the spectra calibrated with the same instrument transfer function. The NIRS3 spectral range is 1.8-3.2 µm, with a spectral resolution of 18 nm and a field of view of 0.1°. The spatial resolution in the considered dataset varies from a few to about 10 m pixel−1, while the phase angle range is 15-40°. The phase interval considered for studying small bodies' photometry is generally wider, especially when the study is based on a statistical analysis of the dataset. Therefore, we redefined the photometric parameters used to compare small bodies, as described in Sect. 4. All the spectra were corrected for the thermal contribution by applying the procedure described by Kitazato et al. (2019).

Tools

The photometric correction was applied to the radiance factor at 1.9 µm (hereafter referred to as reflectance), to the hydration band depths at 2.72 µm (hereafter 2.7 µm) and 2.8 µm, and to the infrared spectral slope between 1.9 and 2.5 µm. All these spectral parameters are important descriptors of the regolith composition, granulometry, and weathering; therefore, their photometric correction is a fundamental data reduction operation. Definitions of these parameters are described by Galiano et al. (2020) and summarized below. The reflectance at 1.9 µm was calculated as the average over the three NIRS3 spectral bands closest to the considered wavelength, so as to minimize signal oscillations and therefore maximize the signal-to-noise ratio. To define the band shoulders, we smoothed the spectra by replacing the reflectance at each wavelength with the reflectance averaged over the three closest NIRS3 spectral bands. Then, for both bands we defined the left and right shoulders as the maximum reflectance in the ranges 2.50-2.65 µm and 2.85-2.95 µm, respectively. The spectral continuum is the straight line connecting the two shoulders.
The band center was defined as the band minimum after continuum removal: it was calculated in the range 2.70-2.75 µm for the 2.7 µm band and in the range 2.80-2.85 µm for the 2.8 µm band. Band depths, BD, were defined by adopting the definition of Clark & Roush (1984), that is,

BD = 1 − R_c / R_cont,

where R_c and R_cont are the reflectance and the continuum at the band center, respectively. The spectral slope S_IR was also defined on smoothed spectra and calculated as

S_IR = (R_2.5 − R_1.9) / [R_1.9 (2.5 − 1.9)],

where R_λ is the reflectance at wavelength λ (in µm); S_IR is expressed in µm−1.

Preliminary operations

We first verified the absence of systematic effects, which may affect the retrieved photometric functions, by investigating three issues: (1) residuals of the thermal emission removal procedure, which could affect the spectra longward of 2.5 µm (Kitazato et al. 2019) and therefore the retrieval of the band depths (Galiano et al. 2020); (2) the influence of temperature on the spectral slope, which could be due to inaccuracies in the surface temperature retrieval and in the removal of the thermal contribution; and (3) bad spectra collected by NIRS3 just after TD-1 (Kitazato, priv. comm.). Concerning the first issue, we applied the procedure of Galiano et al. (2020) to evaluate the thermal emission removal residuals. To this end, the reflectance behavior was studied as a function of temperature at two wavelengths, one inside (i.e., 1.9 µm) and one outside (i.e., 3.0 µm) the thermal emission region. In fact, the reflectance ratio R_3.0/R_1.9 has a slight dependence on temperature (Fig. 1), suggesting that an overcorrection for the thermal contribution may be present. Nevertheless, the reflectance variation is no larger than 3% (i.e., between 1.03 and 1.06), which corresponds to an albedo variation of about 0.001. This variation can be considered negligible for the following reasons: (a) it is much lower than the reflectance spread observed on Ryugu (Galiano et al.
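The spectral-parameter definitions above (three-point smoothing, shoulder search, straight-line continuum, band depth, and NIR slope) can be sketched as follows. The wavelength grid mimics the 18 nm NIRS3 sampling, but the spectrum itself is synthetic, and the normalization of the slope by R_1.9 is an assumption rather than a statement of the paper's exact formula.

```python
import numpy as np

# Sketch of the NIRS3 spectral parameters summarized above. Window limits and
# 18 nm sampling follow the text; the spectrum and the slope normalization by
# the 1.9 um reflectance are assumptions for illustration.

def smooth3(r):
    """Replace each interior point by the mean of itself and its neighbors."""
    out = r.copy()
    out[1:-1] = (r[:-2] + r[1:-1] + r[2:]) / 3.0
    return out

def band_depth(wl, r, left_win, right_win, center_win):
    """Clark & Roush band depth, BD = 1 - Rc/Rcont, at the band minimum."""
    r = smooth3(r)
    def win(lo, hi):
        return (wl >= lo) & (wl <= hi)
    il = np.argmax(np.where(win(*left_win), r, -np.inf))   # left shoulder
    ir = np.argmax(np.where(win(*right_win), r, -np.inf))  # right shoulder
    # straight-line continuum connecting the two shoulders
    cont = np.interp(wl, [wl[il], wl[ir]], [r[il], r[ir]])
    ic = np.argmin(np.where(win(*center_win), r / cont, np.inf))
    return 1.0 - r[ic] / cont[ic]

def nir_slope(wl, r):
    """(R2.5 - R1.9) / [R1.9 * (2.5 - 1.9)] in um^-1 (assumed normalization)."""
    r = smooth3(r)
    r19 = r[np.argmin(np.abs(wl - 1.9))]
    r25 = r[np.argmin(np.abs(wl - 2.5))]
    return (r25 - r19) / (r19 * (2.5 - 1.9))

# Synthetic flat spectrum with a shallow Gaussian band near 2.72 um
wl = np.arange(1.8, 3.2, 0.018)
r = 0.02 * (1.0 - 0.1 * np.exp(-((wl - 2.72) / 0.02) ** 2))
bd27 = band_depth(wl, r, (2.50, 2.65), (2.85, 2.95), (2.70, 2.75))
s_ir = nir_slope(wl, r)
```

The smoothing shallows the synthetic band, so the recovered depth is a few percent rather than the injected 10%, as expected for a feature narrower than three spectral channels.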
2020); (b) it is similar to or lower than the uncertainties introduced by the photometric correction procedure (Sect. 4.2); and (c) it is much lower than the reflectance variation with observation angles. In conclusion, if an overcorrection of the thermal contribution is present in the NIRS3 spectra, it does not affect the results. Concerning the second issue, Riu et al. (2021) found that the spectral slope between 1.9 and 2.5 µm increases slightly with temperature (i.e., <1% per degree). This variation is much smaller than the spectral slope variation with phase angle (Sect. 5). In addition, even if it were significant, it would introduce a trend opposite to that observed (a spectral slope increase with phase angle). Therefore, the phase behavior observed in the spectral slope (Sect. 5) is not affected by this temperature dependence. Concerning the third issue, the statistical analysis adopted and explained in the next subsection automatically excludes all the bad spectra, which are not statistically significant.

Retrieval of photometric behaviors

The method used to retrieve the photometric behavior of all the spectral parameters introduced in Sect. 3 was applied in the following three steps: 1. The removal of the topography influence (i.e., the influence of incidence and emission) by applying the Akimov disk function (Shkuratov et al. 1999), defined as

D(α, β, γ) = cos(α/2) cos[(π/(π − α))(γ − α/2)] (cos β)^(α/(π−α)) / cos γ,

where

γ = arctan[(cos i − cos e cos α)/(cos e sin α)]

is the photometric longitude and

β = arccos(cos e / cos γ)

is the photometric latitude, with α the phase angle, i the incidence angle, and e the emission angle. In previous studies on the photometry of small, dark bodies (Longobardo et al. 2014, 2016, 2017, 2019), several disk functions were tested, and the Akimov disk function was selected as the best one in all cases. For this reason, we considered only this disk function and verified its goodness a posteriori.
Because the disk function is a multiplying term of the reflectance, this step was only applied to the reflectance, R, for the photometric correction: in this case we obtained the equigonal albedo R/D. We did not apply this step to parameters defined as reflectance ratios (band depths and spectral slope). 2. The second step involved the retrieval of the median spectral parameter behavior as a function of phase angle (phase angle bins of 1° were considered). In the case of the 1.9 µm reflectance, we considered the phase behavior of the equigonal albedo. 3. The last step involved the phase function retrieval by a least squares fit.

Photometric parameters

To compare the photometric behavior of small bodies, Longobardo et al. (2016, 2017) defined two spectral parameters: R30, that is the radiance factor in the visible range at a phase angle of 30°, and PCS (phase curve slope), that is the phase function steepness between 20° and 60° phase angles. We note that PCS = 1 − R60/R20, where R60 and R20 are the radiance factors in the visible range at 60° and 20° phase angles, respectively. In this case, we adapted these definitions to the infrared spectral range and to the available phase angle range. Therefore, our photometric parameters were redefined as follows: R30 is the NIR reflectance at a 30° phase angle and PCS is the phase curve steepness between 15° and 40° phase angles, that is, PCS = 1 − R40/R15, where R40 and R15 are the radiance factors in the NIR range at 40° and 15° phase angles, respectively. The photometric parameters defined above can be calculated for only three other asteroids explored by space missions, that is Eros (Clark et al. 2002), Vesta (Longobardo et al. 2014), and Ceres (Longobardo et al. 2019). A comparison between the photometric parameters of the four asteroids was finally performed. Figure 2 shows the 1.9 µm equigonal albedo obtained by applying the Akimov disk function as a function of the incidence angle.
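Step (1) above, the division of the measured radiance factor by the Akimov disk function, can be sketched as follows. The photometric longitude and latitude expressions are those reconstructed from the text (Shkuratov et al. 1999); all angle values and the 0.02 radiance factor are illustrative.

```python
import numpy as np

# Hedged sketch of the parameterless Akimov disk function and the equigonal
# albedo R/D. Angles are in radians; numerical inputs are illustrative.

def akimov_disk(alpha, inc, emi):
    """Akimov disk function for phase angle alpha, incidence inc, emission emi."""
    # photometric longitude gamma and latitude beta (Shkuratov et al. 1999)
    gamma = np.arctan2(np.cos(inc) - np.cos(emi) * np.cos(alpha),
                       np.cos(emi) * np.sin(alpha))
    beta = np.arccos(np.clip(np.cos(emi) / np.cos(gamma), -1.0, 1.0))
    nu = alpha / (np.pi - alpha)
    return (np.cos(alpha / 2.0)
            * np.cos(np.pi / (np.pi - alpha) * (gamma - alpha / 2.0))
            * np.cos(beta) ** nu
            / np.cos(gamma))

# Equigonal albedo: measured radiance factor divided by the disk function
alpha, inc, emi = np.radians(30.0), np.radians(30.0), np.radians(0.0)
equigonal_albedo = 0.02 / akimov_disk(alpha, inc, emi)
```

At zero phase with equal incidence and emission angles, the disk function reduces to unity, which is a quick sanity check on the implementation.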
A slight decreasing trend is still observable, but residual variations are lower than 3%, that is, less than 0.001, which, as discussed in Sect. 4.1, is considered insignificant. The R/D variation with the emission angle is even lower. Based on this and on photometric studies of other dark, small bodies (Longobardo et al. 2017, 2019), we did not test other disk functions and adopted the Akimov one.

Results

The phase function is shown in Fig. 3. Due to the narrow phase angle range considered, we calculated the phase function by applying a linear fit. The phase function slope obtained is −(4.1 ± 0.1) × 10^−4 deg^−1. In Table 1, we show the photometric parameters of Ryugu, Ceres (Longobardo et al. 2019), Eros (Clark et al. 2002), and Vesta (Longobardo et al. 2014), as defined in Sect. 4.3 (the uncertainties on R30 are lower than 0.01). The PCS of the four asteroids was calculated in the same phase angle range, that is, between 15° and 40°. The NIR spectral slope as a function of the phase angle is shown in Fig. 5. The slope of the linear fit modeling this trend is the phase reddening. We applied a linear fit for an easy comparison with other small bodies. In fact, the NIR slope behavior seems to be better reproduced by a second-order polynomial. We ascribe this to the small phase angle range considered. However, the R^2 value of the linear fit is 0.8, which is better than the R^2 obtained for other bodies (e.g., 67P/Churyumov-Gerasimenko, Longobardo et al. 2017).

A185, page 3 of 6 A&A 666, A185 (2022)
Fig. 3. Equigonal albedo at 1.9 µm as a function of the phase angle (error bars are not shown for clarity, but are included within the symbols). The straight line is Ryugu's retrieved phase function.
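The photometric correction and fitting steps described above (disk-function division, 1° median binning, linear phase function, PCS) can be sketched numerically. The following is a minimal Python illustration with synthetic values: the baseline albedo of 0.045 and the principal-plane observation geometry are assumptions for the demo, not NIRS3 data.

```python
import numpy as np

def akimov_disk_function(alpha, i, e):
    """Akimov disk function D(alpha, beta, gamma) (Shkuratov et al. 1999).
    alpha: phase angle; i, e: incidence and emission angles (radians)."""
    # Photometric longitude and latitude from the scattering geometry
    gamma = np.arctan2(np.cos(i) - np.cos(e) * np.cos(alpha),
                       np.cos(e) * np.sin(alpha))
    beta = np.arccos(np.clip(np.cos(e) / np.cos(gamma), -1.0, 1.0))
    return (np.cos(alpha / 2.0)
            * np.cos(np.pi / (np.pi - alpha) * (gamma - alpha / 2.0))
            * np.cos(beta) ** (alpha / (np.pi - alpha))
            / np.cos(gamma))

# Synthetic "observations": an equigonal albedo decreasing linearly with
# phase, multiplied by the disk function for an assumed geometry.
rng = np.random.default_rng(0)
alpha_deg = rng.uniform(15.0, 40.0, 500)            # phase angles (deg)
i_rad = np.radians(0.6 * alpha_deg)                 # assumed incidence
e_rad = np.radians(0.4 * alpha_deg)                 # assumed emission
alpha_rad = np.radians(alpha_deg)
R_eq_true = 0.045 - 4.1e-4 * alpha_deg              # slope from Sect. 5
D = akimov_disk_function(alpha_rad, i_rad, e_rad)
R = R_eq_true * D                                   # "measured" reflectance

# Step 1: divide out the disk function -> equigonal albedo R/D
R_eq = R / D

# Step 2: median equigonal albedo in 1-degree phase-angle bins
edges = np.arange(15.0, 41.0, 1.0)
centers = 0.5 * (edges[:-1] + edges[1:])
medians = np.array([np.median(R_eq[(alpha_deg >= lo) & (alpha_deg < hi)])
                    for lo, hi in zip(edges[:-1], edges[1:])])

# Step 3: linear phase function (least squares), then PCS = 1 - R40/R15
slope, intercept = np.polyfit(centers, medians, 1)
R15, R40 = np.polyval([slope, intercept], [15.0, 40.0])
pcs = 1.0 - R40 / R15
print(f"phase-function slope = {slope:.2e} deg^-1, PCS = {pcs:.2f}")
```

Because the synthetic reflectance is built with the same disk function that is divided out, the fit recovers the input slope exactly; with real data, the residual R/D variation with incidence angle (Fig. 2) is the a posteriori check of the disk function.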
NIR phase functions: Comparative analysis

Even though the comparison of photometric parameters shown in Table 1 involves only four asteroids, we can clearly discern two families: (1) C-type asteroids (Ceres and Ryugu), which are characterized by a low albedo and a higher PCS, and (2) bright asteroids (Vesta and Eros), with a lower PCS. Therefore, Ryugu's photometric behavior is what is expected for a dark, C-type body, where multiple scattering (i.e., the main factor responsible for phase function flattening) plays a negligible role.

Band depths

The decrease in band depth with increasing phase angle observed on Ryugu had never been observed before on other dark bodies explored by space missions. The band depth behavior as a function of the phase angle for different small bodies is summarized in Table 2 (the band depths considered are the 0.9 and 1.9 µm bands for Vesta, due to pyroxenes; the 2.7 and 3.1 µm bands for Ceres, due to ammoniated phyllosilicates; the 3.2 µm band for 67P/Churyumov-Gerasimenko, due to organics and ammonium salts; and the 2.7 and 2.8 µm bands for Ryugu, due to phyllosilicates). In the case of Vesta, the band depth increases with phase more steeply with increasing opaque content (i.e., with decreasing albedo): in Vesta's bright terrains, the band depth's increasing rate with phase is lower than in dark terrains. Longobardo et al. (2014) ascribed this to the more important role of multiple scattering in bright terrains, which redistributes radiation at different phase angles and flattens the phase curve. A different explanation should be considered for dark bodies (i.e., albedo lower than 0.01) because, in this case, the role of multiple scattering is negligible. Moreover, the band depth versus phase trend shows the opposite behavior with respect to bright asteroids, because it is steeper for larger albedos. Longobardo et al. (2017) ascribed the band depth's photometric behavior on Churyumov-Gerasimenko to the surface darkness: when the albedo decreases, it suppresses not only the band, but also its photometric behavior, that is to say, the band depth becomes phase-independent. When the albedo decreases further (as is the case for Ryugu), we could observe a band depth decreasing with phase due to the combination of the following effects: (a) I/F decreases more steeply with the phase angle; and (b) darkening tends to suppress absorption bands (for an equal band carrier abundance), as is generally observed on planetary surfaces. This would generate the observed photometric trend. While this interpretation is not definitive, it is consistent with the photometric behavior of the Murchison meteorite, which is darker than Ryugu (its albedo being 0.03) and still shows a band depth decreasing trend with phase (Cloutis et al. 2018). This interpretation is independent of the composition of the band carrier (being related only to the average albedo) and describes a general behavior; it does not invoke particular surface physical properties, such as grain size and roughness.

Albedo versus band depth

Pilorget et al. (2021) conclude that the 2.7 and 2.8 µm bands are deeper in dark regions. We related the photometrically corrected band depths to the infrared albedo, as shown in Fig. 6. We are still able to observe a band deepening at decreasing albedo, but this is very slight (less than 2%), lower than that observed by Pilorget et al. (2021), and limited to albedo values lower than 0.03. However, the two conclusions are not contradictory because of the different approaches used by the two works. In fact, Pilorget et al. (2021) considered mean band depth values (instead of median ones, as we do) and studied the local correlation between albedo and band depths, instead of a global one. In particular, the areas selected by Pilorget et al.
(2021) are among the darkest ones on Ryugu, and this amplifies the observed band deepening with decreasing albedo. Therefore, while we observe that the band depth is quite independent of the albedo, a local anticorrelation is not excluded.

Infrared slope

From the infrared slope phase function (Fig. 5), we retrieved a phase reddening of (2.1 ± 0.3) × 10^−3 µm^−1 deg^−1. This value is very similar to that found by Tatsumi et al. (2020) on ONC data in the visible range (Table 3). Ceres and 67P instead show a visible phase reddening about 3 times the infrared one (Table 3). In the case of Bennu, the visible phase reddening is very similar to that of Ryugu, but the average phase reddening from the visible to the NIR is about 4 times lower. This suggests that Bennu shows a phase reddening decrease from the visible to the NIR, as Ceres and 67P do, and differently from Ryugu (Table 3). We discuss three possible explanations for this result, two optical and one physical:

Multiple scattering. On 67P, the larger, even if weak, multiple scattering in the NIR (evidenced by the higher albedo, Capaccioni et al. 2015) may produce a phase function shallowing (i.e., a decreasing phase reddening) at longer wavelengths, while the multiple scattering role on Ryugu would be the same (i.e., almost null) in the two spectral intervals (similar albedos). This interpretation has two weaknesses: (a) 67P's PCS is quite constant from 0.5 to 2 µm (Longobardo et al. 2017); and (b) Ceres and Bennu show a similar albedo (e.g., Raponi et al. 2021; Barucci et al. 2020), but different phase reddening values in the two spectral intervals.

Spectral slope. Phase reddening could follow spectral slope variations. Ryugu's visible and infrared spectral slopes are similar, whereas on the other three asteroids the visible slope is larger (in absolute value) than the infrared one. However, in the case of Bennu, the slopes are negative (i.e., a blue spectrum, Barucci et al. 2020), which is different from Ceres and 67P.
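As a minimal numerical sketch (with synthetic spectral slopes, not the actual NIRS3 measurements), the phase reddening is simply the slope of a least squares linear fit of the spectral slope against the phase angle:

```python
import numpy as np

# Synthetic NIR spectral slopes (um^-1) over the observed phase-angle range,
# built around the retrieved phase reddening of ~2.1e-3 um^-1 deg^-1.
alpha = np.linspace(15.0, 40.0, 26)        # phase angles (deg)
true_reddening = 2.1e-3                     # um^-1 deg^-1 (retrieved value)
rng = np.random.default_rng(1)
s_ir = 0.01 + true_reddening * alpha + rng.normal(0.0, 1e-3, alpha.size)

# Phase reddening = slope of the linear fit of spectral slope vs phase angle
reddening, offset = np.polyfit(alpha, s_ir, 1)

# R^2 of the linear fit, used to compare with other bodies
pred = np.polyval([reddening, offset], alpha)
ss_res = np.sum((s_ir - pred) ** 2)
ss_tot = np.sum((s_ir - s_ir.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"phase reddening = {reddening:.2e} um^-1 deg^-1, R^2 = {r2:.2f}")
```

The narrow phase-angle range is what makes a linear fit adequate here; over a wider range, a second-order polynomial (as noted in Sect. 5) would model the curvature better.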
Sub-µm roughness. At the scale of the particle surface, submicron roughness can explain the monotonic phase reddening (Schröder et al. 2014). The different interaction of visible and infrared light with particle roughness can produce a different phase reddening response. In the case of Ryugu, the response is the same in the two spectral intervals, which means that visible and infrared light would "see" the same microroughness. This would mean that submicrometric roughness (a) is absent, that is, Ryugu particles are smooth, or (b) is present, but does not play a role in light scattering because the internal scattering is suppressed by the low albedo. Therefore, Ryugu behaves even in the visible range as Ceres and 67P do in the infrared, where the role of roughness is minimized. According to this explanation, Bennu's regolith particles should have a roughness different from Ryugu's. However, given their similar formation processes (Michel et al. 2020), this is unlikely. Nevertheless, we should consider that the obtained phase reddening values could be affected by the narrow phase angle range considered, in the case of Ryugu, and by a segment jump at 0.66 µm, in the case of Bennu. Our preliminary conclusion is that the weak phase reddening observed on Ryugu from the visible to the NIR could be due to its low albedo, while the analysis of returned samples will clarify the role of the regolith's physical properties.

Albedo versus infrared slope

Photometrically corrected NIR slopes are generally steeper in dark areas and flatter in brighter ones. This confirms the well-known relation between surface darkening and reddening (e.g., Galiano et al. 2020), ascribed to thermal metamorphism and space weathering.

Conclusions

We studied the photometric behavior of the infrared parameters describing the Ryugu spectra. To this end, we applied a semi-empirical approach based on a statistical analysis of the Hayabusa2/NIRS3 dataset.
The phase function at 1.9 µm was modeled by combining the Akimov disk function with a linear fit (due to the narrow phase angle interval). From a comparison between the asteroids' NIR phase functions, we identified two families: C-type (Ryugu and Ceres) and bright (Eros and Vesta) asteroids. The 2.72 and 2.8 µm band depths decrease with increasing phase angle, which is different from the behavior observed on other asteroids. This has been ascribed to Ryugu's dark albedo, which mainly suppresses the bands at higher phase angles. Near-infrared and visible phase reddening are similar, contrary to other small bodies, which are characterized by a larger phase reddening in the visible range. This could be ascribed to the surface darkness, though particle smoothness can play a role: the analysis of returned Ryugu (and Bennu) samples can clarify this issue. Albedo and photometrically corrected band depths are globally uncorrelated. Nevertheless, a local anticorrelation is possible, while a correspondence between darkening and reddening is observed due to thermal metamorphism and space weathering.
STAT3 signaling drives EZH2 transcriptional activation and mediates poor prognosis in gastric cancer

Background

STAT3 signaling plays a pivotal role in tumorigenesis through EZH2 epigenetic modification, which in turn enhances STAT3 activity by increasing tyrosine phosphorylation of STAT3. Here, another possible feedback mechanism and the clinical significance of EZH2 and STAT3 were investigated in gastric cancer (GC).

Methods

STAT3, p-STAT3 (Tyr 705), and EZH2 expression were examined in 63 GC specimens with matched normal tissues by IHC staining. EZH2 and STAT3 were also identified in five GC cell lines using RT-PCR and western blot analyses. p-STAT3 protein was detected by western blotting. To investigate whether EZH2 expression was directly regulated by STAT3, EZH2 expression was further examined using siRNA against STAT3 or IL-6 stimulation, together with dual luciferase reporter analyses, electrophoretic mobility shift assay (EMSA), and chromatin immunoprecipitation (ChIP) assays. The clinical significance of STAT3, p-STAT3, and EZH2 expression was evaluated by multivariate Cox regression and Kaplan-Meier analyses.

Results

Hyperactivation of STAT3 and elevated p-STAT3 and EZH2 expression were observed in GC cells and tissues. STAT3 signaling was correlated with EZH2 expression in GC (R = 0.373, P = 0.003), consistent with our data showing that STAT3, as a transcription factor, enhanced EZH2 transcriptional activity by binding the corresponding promoter region (−214 to −206). STAT3 was an independent signature of poor survival (P = 0.002). Patients with STAT3+/EZH2+ or p-STAT3+/EZH2+ had a worse outcome than the others (P < 0.001). In addition, high levels of STAT3 and EZH2 were associated with advanced TNM staging (P = 0.017). Moreover, treatment with a combination of siSTAT3 and the EZH2-specific inhibitor 3-deazaneplanocin A (DZNep) increased the apoptotic ratio of cells, supporting the targeting of the STAT3-EZH2 interplay in GC treatment.
Conclusions

Our results indicate that STAT3 mediates EZH2 upregulation, which is associated with advanced TNM stage and poor prognosis, suggesting that the combination of STAT3 knockdown with an EZH2 inhibitor might be a novel therapy in GC treatment. Collectively, STAT3, p-STAT3, and EZH2 expression may support precision medicine for GC patients.

Electronic supplementary material The online version of this article (doi:10.1186/s12943-016-0561-z) contains supplementary material, which is available to authorized users.

Background

Although the prevalence of gastric cancer (GC) has gradually decreased, it still accounts for a large portion of cancer-related deaths in China [1]. One of the most informative prognostic factors is the tumor stage, which involves both the depth of invasion and the extent of metastasis. The size and histologic type of a tumor may also be useful factors in prognostication [2]. Despite the complexity of gastric tumorigenesis, several molecular studies have identified novel prognostic biomarkers. Consequently, many efforts have been made to identify and validate novel biomarkers that are useful not only for predicting prognosis and patient survival, but also for predicting the tumor response to specific anticancer drugs [3-5]. Signal transducer and activator of transcription 3 (STAT3) and enhancer of zeste homologue 2 (EZH2) are potential molecular biomarkers of tumor progression and mainly serve as predictors of poor outcome [6-9]. Many recent studies have suggested that inflammation plays an important role in the development of GC. Aberrant IL-6/STAT3 signaling in cancer cells has emerged as a major mechanism of cancer initiation and development [10,11]. IL-6 induces STAT3 activation, leading to cell proliferation and malignancy [9,12,13]. Upon activation, STAT3 is largely involved in carcinogenesis [13,14]. Judd et al. reported that mice with STAT3 hyperactivation developed GC in association with chronic gastritis [15].
However, it remains unclear how constitutively activated STAT3 contributes to GC development. IL-6/STAT3 signaling plays an important role in regulating epigenetic aberrations during tumorigenesis, especially the expression of certain key epigenetic enzymes, such as EZH2 [16]. EZH2, also called histone lysine methyltransferase (HKMT), was cloned as a gene belonging to the polycomb group of genes, which epigenetically silence the expression of some tumor suppressor genes (TSGs) [17]. It has been shown to be abundantly expressed in various malignancies with poor prognosis, including gastric, prostate, breast, and bladder cancers, and hematologic malignancies [6,18-22]. Knockdown of EZH2 by siRNA has been demonstrated to inhibit breast cancer cell proliferation, whereas pharmacological inhibition of EZH2 results in the apoptosis of breast cancer cells, but not normal cells [23]. Recently, EZH2 was shown to bind to and methylate STAT3, leading to enhanced STAT3 activity through increased tyrosine phosphorylation of STAT3 [24,25]. A specific EZH2 inhibitor reverses the silencing of polycomb target genes and diminishes STAT3 activity. EZH2 has been shown to directly interact with and regulate the activity of the following DNA methyltransferases (DNMTs): DNMT1, DNMT3a, and DNMT3b [26,27]. DNMTs transfer a methyl group from S-adenosylmethionine to the 5′-position of cytosine in CpG dinucleotides present in gene promoters, thereby maintaining a consistent pattern of epigenetic silencing of TSGs in cancer cells [28]. Although the genes methylated in cancer cells are packaged along with nucleosomes containing 3Me H3K27-marked genes, which are silenced in cancer, they have been shown to be independent of promoter DNA methylation, thus highlighting that 3Me H3K27 could potentially be an independent mechanism for silencing TSGs [29].
DNA methylation and transcriptional silencing of cancer genes have been shown to persist despite the depletion of EZH2 [30], suggesting that simultaneously inhibiting EZH2 would be more effective in reversing 3Me H3K27 and DNA methylation [31]. 3-Deazaneplanocin A (DZNep) has been reported to decrease the expression levels of the polycomb repressive complex 2 (PRC2) in cancer cells, where loss of the 3Me H3K27 mark derepresses epigenetically silenced targets [32-36]. Moreover, DZNep is a novel inhibitor of the histone methyltransferase EZH2 [32,35,37,38]. Previous studies have shown that increased EZH2 is involved in the pathology of gastrointestinal inflammation and associated cancers [39]. Moreover, EZH2 serves as an anti-apoptotic factor in GC development during IL-6/STAT3 activation. Taken together, it is tempting to speculate that EZH2 may be a target of STAT3 and mediate the functions of IL-6/STAT3 signaling. Until now, the potential interaction between STAT3 signaling and EZH2 during GC development has not been reported. Hence, we explored the relationship between STAT3 and EZH2, as well as other clinicopathological features, in GC.

Results

Co-expression of STAT3 and EZH2 in GC cell lines and primary GCs

IL-6/STAT3 signaling plays a critical role in carcinogenesis by regulating various genes. EZH2, a protein that epigenetically silences tumor suppressor genes, was induced by IL-6 stimulation. To determine the relationship between STAT3 and EZH2, we first analyzed the mRNA and protein expression levels of STAT3 and EZH2 in five GC cell lines using RT-PCR and real-time PCR (Fig. 1a, Additional file 1: Figures S5 and S6) and western blot analyses (Fig. 1b), respectively. All five GC cell lines expressed high STAT3 and EZH2 mRNA and protein levels. Both mRNA and protein levels were higher in SGC7901 cells than in AGS, BGC823, MGC803, and MKN45 cells.
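The paper does not spell out its real-time PCR quantification; a standard approach for such relative mRNA comparisons is the Livak 2^(-ΔΔCt) method, sketched below with hypothetical Ct values (the use of GAPDH as the reference gene is an assumption for the example):

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Livak 2^(-ddCt) relative quantification.
    ct_target / ct_ref: mean Ct of the gene of interest / reference gene
    in the sample; *_cal: the same values in the calibrator sample."""
    d_ct_sample = ct_target - ct_ref        # normalize to reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_cal
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: EZH2 vs GAPDH in a GC cell line and in a
# calibrator sample; an earlier (lower) Ct means higher expression.
fold = relative_expression(ct_target=22.0, ct_ref=18.0,
                           ct_target_cal=25.0, ct_ref_cal=18.0)
print(f"EZH2 relative expression: {fold:.1f}-fold")  # 2^3 = 8.0-fold
```

The method assumes roughly equal amplification efficiencies for target and reference genes; otherwise an efficiency-corrected model is needed.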
Western blot analyses revealed two immunoreactive signals, an 85-kDa band for STAT3 and an 86-kDa band for EZH2. The levels of STAT3, p-STAT3, and EZH2 expression in GC tissues and their corresponding non-cancerous gastric mucosa were analyzed by western blot. The levels of STAT3 status and EZH2 expression in GC were higher than those in normal tissues (Fig. 1c). IHC staining showed a higher level of STAT3 expression in GC tissues (43/63, 68.2%) than in the corresponding normal tissues (24/63, 38.1%, P = 0.003; Table 1 and Fig. 1d). Intense nuclear staining was observed for EZH2 in GC (Fig. 1d). As shown in Table 1, the protein expression of EZH2 in the nuclei of GC tissues was significantly higher (47/63, 74.6%) than in normal tissues (21/63, 33.3%, P = 0.001). A close correlation between STAT3 and EZH2 was observed in the cohort by χ²-test (Spearman rank correlation coefficient = 0.373, P = 0.003; Fig. 1d; Additional file 1: Table S2).

Fig. 1 caption: Hyperactivation of STAT3, p-STAT3 and EZH2 was associated with poor survival in the GC cohort. (a) Differential expression of STAT3 and EZH2 mRNA detected in GC cells using RT-PCR. (b) Differential expression of STAT3, p-STAT3 and EZH2 in GC cells using western blotting. (c) Co-expression of STAT3 status and EZH2 protein detected in GC and matched normal tissues using western blotting. (d) High or low levels of STAT3 status and EZH2 expression in GC and adjacent normal tissues using IHC staining; p-STAT3 and EZH2 showed a focal or diffuse pattern in the nuclei (200× magnification). Kaplan-Meier analyses show the effect of STAT3 (e), EZH2 (f), p-STAT3 expression (g), or the combination of STAT3 and EZH2 expression (h) on overall survival.

Co-expression of STAT3 and EZH2 correlated with poor survival in GC patients

A significantly elevated expression of STAT3 and EZH2 was noted in 43 (68.2%) and 47 (74.6%) cases, respectively.
Furthermore, 82.5% (52/63) of the patients were EZH2+ and/or STAT3+, of whom 35 (55.6%) showed combined positivity for STAT3 and EZH2 (Additional file 1: Table S2). Notably, more than half the patients belonged to the high-expression group. Further supporting these results, the overall survival (OS) rate was significantly correlated with STAT3 protein expression. In this cohort, the 5-year OS rate was 31.6%. As expected, the OS was significantly higher in the STAT3− group than in the STAT3+ group (P = 0.025, log-rank test; Fig. 1e). The OS of the p-STAT3− group was also better than that of the p-STAT3+ group (P = 0.002, log-rank test; Fig. 1g). Furthermore, OS was better in the STAT3−/EZH2− group than in the other groups. In particular, the 5-year OS rate was significantly lower in STAT3+/EZH2+ patients (22.3%) than in STAT3−/EZH2− patients (52.4%, P < 0.001). Additionally, patients with EZH2+/STAT3+ expression had a poorer prognosis than those with EZH2− and/or STAT3− expression (P = 0.007, log-rank test; Fig. 1h). Similar results were obtained for the combination of p-STAT3 and EZH2 expression (Additional file 1: Figure S3). Therefore, these results indicate that STAT3 and EZH2 are important markers for predicting a poor prognosis in GC patients undergoing resection. Thus, our findings highlight the value of EZH2 as a predictor of survival, which could be more significant when considered in conjunction with STAT3 in GC.

Co-expression of STAT3 and EZH2 correlated with TNM stage in GC

In addition, most GC samples exhibited co-expression of STAT3 and EZH2 in the cohort (Fig. 1c). Furthermore, the association of STAT3 and EZH2 immunohistochemical expression levels with clinicopathological features was evaluated in the 63 GC samples (Table 2).
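Associations of this kind (marker positivity versus a clinicopathological category) are typically assessed with a Pearson chi-square test on a contingency table. The per-cell counts behind Table 2 are not reported in the text, so the following self-contained sketch uses hypothetical counts:

```python
import numpy as np

def chi2_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]] (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: rows = STAT3+ / STAT3-, cols = TNM III-IV / TNM I-II
table = np.array([[30, 13],
                  [ 8, 12]])
chi2 = chi2_2x2(table)
# Compare against the 0.05 critical value for 1 degree of freedom (3.84)
print(f"chi2 = {chi2:.2f}, significant at 0.05: {chi2 > 3.84}")
```

In practice, `scipy.stats.chi2_contingency` returns the same kind of statistic together with a p-value; the closed-form version above is shown only to make the arithmetic explicit.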
STAT3 expression in GC tissues was found to be significantly associated with patients' age (P = 0.024), TNM stage (P = 0.0001), and lymph node metastasis (P = 0.016), and EZH2 expression was positively correlated with patients' gender (P = 0.043) and TNM stage (P = 0.002). The present study showed that hyperactivation of STAT3 and EZH2 in GC tissues was significantly associated with advanced TNM stage. Notably, 40% of GC tissues corresponding to TNM stages I and II, and 65.7% of GC tissues corresponding to TNM stages III and IV, showed a significantly high expression of both STAT3 and EZH2 (Additional file 1: Table S3, P = 0.017).

STAT3 signaling enhances EZH2 promoter activity in GC cells

Given the co-expression of STAT3 and EZH2 in GC, we investigated whether STAT3 could regulate the expression of EZH2; thus, we analyzed EZH2 expression at both the mRNA and protein levels in SGC7901 cells transfected with three pairs of siSTAT3 primers or a scrambled negative control siRNA. Interestingly, STAT3 siRNAs decreased the levels of STAT3 and EZH2 expression (Fig. 2a and b, Additional file 1: Figure S4). High levels of STAT3 and EZH2 were induced by IL-6 stimulation (Fig. 2c); subsequently, when STAT3 was silenced by siRNA after IL-6 addition, the luciferase reporter activity was reduced to the background level (Fig. 2e). Our results indicated that EZH2 is a potential target gene of STAT3 signaling. We performed transient expression studies to explore the effect of STAT3 signaling on EZH2 promoter activity. The level of EZH2 promoter activity in siSTAT3-treated SGC7901 cells was found to be significantly lower than in the untreated control. The relative activity of the EZH2 promoter was decreased by siSTAT3 (P = 0.0248), but it was increased 6.18-fold by IL-6 stimulation (P = 0.0035; Fig. 2e). When siRNA for STAT3 was combined with IL-6 addition, a clearly decreased activity of the EZH2 promoter luciferase reporter was detected compared with IL-6 stimulation alone in SGC7901 cells.
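Dual-luciferase readouts such as these are normalized well by well against the Renilla control signal and then expressed relative to the untreated condition. The actual plate values are not reported, so this sketch uses hypothetical raw luminescence counts:

```python
import numpy as np

# Hypothetical raw luminescence (triplicate wells): firefly reports EZH2
# promoter activity, Renilla is the co-transfected normalization control.
firefly = {"control": np.array([12000., 11500., 12500.]),
           "IL-6":    np.array([70000., 74000., 78500.]),
           "siSTAT3": np.array([ 6500.,  7000.,  6000.])}
renilla = {"control": np.array([ 9800., 10100.,  9900.]),
           "IL-6":    np.array([10000., 10400.,  9700.]),
           "siSTAT3": np.array([10200.,  9900., 10050.])}

# Normalize each well by its Renilla signal, average the replicates,
# then express promoter activity relative to the untreated control.
ratios = {k: (firefly[k] / renilla[k]).mean() for k in firefly}
relative = {k: ratios[k] / ratios["control"] for k in ratios}
for cond, fold in relative.items():
    print(f"{cond}: {fold:.2f}-fold vs control")
```

The per-well Renilla division removes transfection-efficiency and cell-number differences before conditions are compared, which is why the fold change is computed on the ratio rather than on the raw firefly counts.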
Our study highlights the potential interplay whereby STAT3 signaling promotes EZH2 expression in GC cells. We also performed a detailed analysis of the EZH2 promoter in the NCBI database and identified three conserved STAT3-binding sites in the main promoter region of the EZH2 gene (Additional file 1: Figure S1). STAT3 binds to two known sequence types, HIS and GAS, to exert its anti-apoptotic and oncogenic effects. These sites contain the canonical STAT3-binding motifs TTC(N)2-4GAA or TT(N)4-6AA [40]. Hence, we determined that STAT3-responsive elements are present in the EZH2 promoter at positions −346 to +52, corresponding to the consensus STAT3-binding site TT(N)4-6AA. Corroborating these findings, the results of our study demonstrated a significant decrease in luciferase activity for the shorter EZH2 promoter construct (−436 to +52), as compared to the full-length EZH2 promoter (−1702 to +52; Fig. 2d, P = 0.024; Additional file 1: Figure S1), indicating that the promoter region between −436 and +52 is critical for EZH2 promoter activation in response to STAT3. This fragment contains the three STAT3-binding motifs described above. Subsequently, we performed ChIP-PCR analysis using SGC7901 cells to determine the precise consensus sequences for EZH2 promoter activation and to further investigate the role of the promoter fragment −436 to +52 containing the three STAT3 motifs. Furthermore, to confirm that STAT3 bound to the specific EZH2 promoter region (−436 to +52), we used a ChIP-PCR procedure comprising 33 PCR cycles, optimized to achieve amplification of DNA that had been precipitated with STAT3. With or without IL-6 stimulation, the enrichment of STAT3 binding to EZH2 promoter fragments was decreased after knockdown of STAT3, as shown by PCR and quantitative real-time PCR analyses (Fig.
2f, Additional file 1: Figure S7), coinciding with the downregulation of EZH2 expression at the mRNA and protein levels (Fig. 2a and b). Our results demonstrated that STAT3 is recruited to the EZH2 promoter region at the three main STAT3-binding motifs, indicating that STAT3-binding enrichment is required for EZH2 transcription. Three STAT3 cis-element sequences, STAT3-1, STAT3-2 and STAT3-3, were located at different regions of the EZH2 main promoter. To further confirm the effect of the STAT3 fragments on the transcriptional activity of the EZH2 promoter, four vectors were constructed: vector p373 (Fragment 1), containing the three STAT3-binding fragments; vector p222 (Fragment 2), lacking the STAT3-1 fragment; vector p163 (Fragment 3), lacking both the STAT3-1 and STAT3-2 fragments; and vector p131 (Fragment 4), which has no STAT3 fragment (Additional file 1: Figure S2).

Fig. 2 caption (panels d-g): (d) Luciferase reporter analysis of EZH2 promoter regions containing STAT3-binding sites (Regions 1 and 3) or not (Region 2); luciferase activity was normalized to Renilla luciferase activity and expressed relative to the activity of the untreated group; higher EZH2 activity was detected in Region 1 and Region 3, which contained STAT3-binding sites (P < 0.001). (e) Dual luciferase reporter analysis of the EZH2 promoter; the construct with the full-length EZH2 promoter (−1702/+52) was inactivated by siSTAT3 treatment with or without IL-6 stimulation. (f) The specific region (−436/+48) of the EZH2 promoter was detected by ChIP-PCR; STAT3 mediated fold-enrichment of the STAT3-binding regions of the EZH2 promoter. The binding activity was increased by IL-6 stimulation compared with the untreated group (P = 0.0059); upon knockdown of STAT3 using siRNA, the binding activity was decreased with or without IL-6 (P = 0.0043). (g) Nuclear extracts from SGC7901 cells were incubated with the biotin-labeled wild-type probe before the mutant or cold probes were added for 20 min.

There was no obvious difference between Fragment 1 and Fragment 2 in EZH2 luciferase activity (P = 0.094; Additional file 1: Figure S2).
This means that the second STAT3 motif is important for the regulation of EZH2. These data narrowed the enhancer activity to the −222 bp to −163 bp region, and suggested that this STAT3-binding region contains a cis-acting element that interacts with STAT3 to induce transcription. To investigate which STAT3 motif region binds the EZH2 promoter, an EMSA was performed using synthetic 26-bp oligonucleotides containing the STAT3-binding fragment (Additional file 1: Table S1). We examined the binding activity of the nuclear extract to candidate nucleotide sequences in order to identify the STAT3-responsive element. Several possible permutations of the STAT3-binding site were also systematically synthesized and tested for their ability to bind activated STAT3. The results showed that STAT3 enhanced the binding activity of nuclear extracts to the probe containing the STAT3 motif. The synthesized mutant sequences showed little or no binding to activated STAT3. Further experiments with the wild-type probe without biotin modification, which "cold-competed" in the EMSA assay, demonstrated a significant decrease in the STAT3 target-binding capacity (Fig. 2g). This indicated that the STAT3-binding site located in the −214 bp to −206 bp region plays an important role in the transcriptional activity of the EZH2 gene.

Anti-apoptotic activity of STAT3-mediated EZH2 in GC cells

Aberrant expression of STAT3 is known to contribute to malignancy [12]. In the present study, inhibition of STAT3 signaling by siRNA significantly increased caspase-3/9 positivity in SGC7901 cells (Fig. 3e and f), suggesting that STAT3 contributes to an anti-apoptotic effect in GC cells (Fig. 3d). We further evaluated the intracellular changes in GC cells treated with the EZH2-specific inhibitor DZNep. We observed that DZNep reduced the expression of EZH2, which resulted in increased caspase-3/9 activity (Fig. 3e and f), leading to apoptosis of SGC7901 cells (Fig. 3a, b and c).
Single-agent DZNep treatment did induce G1/G0 phase arrest in our study (Fig. 3g, P = 0.0038). Thus, our findings demonstrate that EZH2 is a downstream target gene of STAT3 signaling and plays an important role in its anti-apoptotic effect in GC cells. In addition, our clinical data showed that hyperactivation of STAT3 and EZH2 occurred in GCs. Further supporting this clinical observation, knockdown experiments involving the treatment of GC cells with STAT3 siRNA in the presence of DZNep demonstrated an increased apoptotic rate, as well as enhanced caspase-3/9 activity (Fig. 3e and f, P = 0.003 and P = 0.027, respectively), which in turn resulted in the downregulation of EZH2 at both the mRNA and protein levels. We next examined cell cycle variation and apoptosis in cells treated with siSTAT3 or DZNep alone or in combination. As shown in Fig. 3g, DZNep alone induced apoptosis and G1/G0 phase arrest; the combination of STAT3 siRNA and DZNep induced more apoptosis than either treatment alone (P < 0.05; Fig. 3c). It will be important for future trials to measure high-resolution DNA methylation and histone modifications in order to correlate clinical responses with candidate epigenetic changes.

Discussion

Several studies have suggested that STAT3 and EZH2 are closely associated with cell proliferation, invasion, and metastasis [37-40]; our findings demonstrate that co-expression of STAT3 and EZH2 in GC is associated with poor prognosis. In particular, the activation of EZH2 and STAT3 is significantly correlated with TNM stage and patient survival, suggesting that the combination of STAT3 and EZH2 expression could indicate clinical TNM stage and predict disease outcome. STAT3 is positively correlated with EZH2 expression in GC cells and tissues. Knockdown of STAT3 resulted in downregulation of EZH2 at the mRNA and protein levels. We next determined whether STAT3 could be a transcriptional regulator of the EZH2 gene.
We transfected SGC7901 GC cells with a reporter vector encoding luciferase under the control of the EZH2 promoter. Knockdown of STAT3 by siRNA decreased EZH2 promoter activity, an effect that was abrogated by mutation of the STAT3 DNA-binding site in the EZH2 promoter. We next carried out a ChIP assay to assess whether STAT3 could directly bind to the EZH2 promoter. STAT3 protein binding at the EZH2 promoter was significantly enriched in SGC7901 cells and was increased 6.18-fold with IL-6 stimulation, consistent with studies showing that EZH2 protein expression is induced by IL-6 in multiple myeloma cell lines [39]. As expected, transfecting cells with siSTAT3 significantly decreased luciferase reporter activity. Upon further study, we found three STAT3 cis-element-binding sites in the EZH2 promoter. Deletion analysis showed that the second STAT3-binding site was active in the luciferase reporter assay following IL-6 stimulation, and mutation of this STAT3-binding motif decreased its ability to bind STAT3. This study demonstrates that STAT3 directly regulates EZH2 expression by binding to the EZH2 promoter, which is consistent with the results of a study in CRC [41]. Furthermore, down-regulation of STAT3 promoted apoptosis through the suppression of EZH2, with activation of caspase-3/9. Our results further strengthen previous studies that demonstrated an anti-apoptotic effect of EZH2 and STAT3 over-expression via the Akt/Bad/Bcl-xL apoptotic pathway [8,42]. The present study is the first to demonstrate an underlying mechanism involving the regulation of EZH2 by STAT3 and to propose the existence of a positive functional loop between STAT3 and EZH2. This functional relationship between EZH2 and STAT3 could be one mechanism by which STAT3 regulates cell proliferation.
Indeed, DZNep, an EZH2-specific inhibitor, increased apoptosis of GC cells, and combined treatment with DZNep and siSTAT3 further increased apoptotic rates. In conclusion, our study identified the EZH2 protein as an important molecule downstream of STAT3 that mediates an anti-apoptotic effect in concert with STAT3 (Fig. 4). Our findings suggest that EZH2 possesses anti-apoptotic activity in gastric tumorigenesis following STAT3 activation. Further studies are required to elucidate the detailed functional roles of these molecules in order to exploit them as candidates for new therapeutic targets.

Conclusion

We found that both mRNA and protein expression levels of EZH2 were decreased by knockdown of STAT3 with siRNA in GC cells. Moreover, the transcription factor STAT3 induced EZH2 activation by binding to a specific Stat3 motif of the EZH2 promoter (-214 to -206). High levels of EZH2, STAT3, and p-STAT3 expression were significantly associated with poor prognosis in GC patients. Furthermore, combined EZH2 and STAT3 or p-STAT3 expression was associated with worse clinical outcome, suggesting that the panel of EZH2 and STAT3 could serve as a potential molecular prognostic signature. Our study also found that treatment of GC cells with STAT3 siRNA and/or the EZH2-specific inhibitor DZNep enhanced the down-regulation of EZH2 expression and increased apoptosis. Thus, the combination of siSTAT3 and EZH2 inhibitors could contribute to potential epigenetic therapy for GC patients.

Patients and tissue specimens

The study was reviewed and approved by the Hospital Bioethics Committee, and patient consent was obtained prior to the initiation of the study.
The prospective study group comprised 63 patients with primary gastric adenocarcinoma who underwent gastrectomy between January and December 2008 at the Department of Gastroenterology Surgery, Surgical Oncology Laboratory, Beijing Cancer Hospital and People's Hospital. The inclusion criteria were as follows: (a) the patient had no concurrent diseases precluding the administration of systemic chemotherapy, and (b) the patient had not received preoperative radiotherapy. All patients were followed up prospectively for a maximum period of 66 months. Tissue samples of GC as well as adjacent non-cancerous (normal-appearing) gastric tissues were fixed in 10% neutral formalin, processed for paraffin sections, and used for histopathology and immunohistochemistry (IHC) studies. Clinicopathological information was obtained from medical charts, and histopathological examination was performed according to the 6th edition of the American Joint Committee on Cancer (AJCC) staging system [41,43]. All available H&E-stained slides of the surgical specimens were reviewed.

Immunohistochemical analysis

Formalin-fixed, paraffin-embedded sections (4 μm thick) were collected for IHC experiments. STAT3 and EZH2 were detected using rabbit polyclonal antibodies. Briefly, sections were incubated with rabbit anti-STAT3 and anti-EZH2 antibodies (1:200) overnight at 4°C. Normal goat serum was used as a negative control. After washing, tissue sections were treated with biotinylated anti-rabbit secondary antibody (Santa Cruz, CA), incubated with streptavidin-horseradish peroxidase complex (Dako, Carpinteria, CA), immersed in 3,3'-diaminobenzidine, counterstained with 10% Mayer's hematoxylin, dehydrated, and mounted. Both nuclear and cytoplasmic staining was observed for STAT3, whereas only nuclear staining was seen for EZH2.
The intensity of immunoreactivity was assessed for EZH2 and STAT3 as follows: high expression, ≥ 50% of cells showing intense immunoreactivity; low expression, < 50% of cells showing intense immunoreactivity. The mean percentage of positive tumor cells was determined in at least five areas under high-power field microscopy. Immunopositivity was independently evaluated by two senior pathologists.

RNA extraction, reverse transcription PCR, and real-time PCR analyses

Total RNA was extracted from cells using Trizol reagent (Invitrogen, Life Technologies) according to the manufacturer's instructions. The extracted RNA was pretreated with RNase-free DNase, and 5 μg of RNA from each sample was used for cDNA synthesis primed with random hexamers. PCR amplification of STAT3 or EZH2 cDNA using gene-specific primers (Additional file 1: Table S1) was performed under the following conditions: initial denaturation at 95°C for 5 min; 30 cycles of denaturation at 95°C for 30 s, primer annealing at 60°C for 30 s, and primer extension at 72°C for 30 s; and a final extension at 72°C for 5 min. β-actin was used as an internal control. Gene expression was further quantified on an ABI 7500 Fast Real-time PCR System (Applied Biosystems, Carlsbad, CA, USA) with SYBR Green (TransGen Biotech Co., Ltd., Beijing, China); 1 μl of cDNA and 1 μl of primers were mixed to a final volume of 12 μl. The qPCR conditions were, briefly, pre-denaturation at 95°C for 20 s, followed by 40 cycles at 95°C for 3 s and extension at 60°C for 30 s. β-actin served as the endogenous control, and each sample was run in triplicate.

Flow cytometry analysis for apoptosis

SGC7901 cells were treated with DZNep and/or siSTAT3 for 48 h, then harvested, counted (1 × 10^6 cells), and re-suspended in 100 μl of phosphate-buffered saline (PBS).
Afterward, 5 μl of Annexin V (1 μg/ml) (Beckman Coulter, Fullerton, CA) was added and incubated at room temperature for 15 min; 10 μl of propidium iodide (PI, 1 μg/ml) was then added and incubated for an additional 5 min at room temperature in the dark. Finally, the apoptosis rate was measured by flow cytometry (FCM) on an Epics-XL-MCL flow cytometer (Beckman Coulter, USA).

Flow cytometry analysis for cell cycle

Cell cycle analysis was performed by flow cytometry using a Cell Cycle Detection kit according to the manufacturer's instructions (KeyGEN Biotech, Nanjing, China). SGC7901 cells were treated with DZNep (500 μM) and/or siSTAT3 for 48 h. Subsequently, the cells were analyzed on a FACSCalibur (BD Biosciences, Franklin Lakes, NJ, USA), and cell-cycle profiles were analyzed using WinMDI v2.9 software (Scripps Research Institute, La Jolla, CA, USA).

Promoter and luciferase reporter assays

Luciferase assays were performed using the Dual Luciferase Reporter Assay System (Promega, Madison, WI, USA). Promoter constructs were generated by cloning the region of the human EZH2 promoter from −1702 to +52 between the SacI and XhoI restriction sites of the pGL3-Basic vector (Promega). After cloning and confirmation of the EZH2 promoter sequence, the construct was named EZH2-promoter-Luc (Additional file 1: Table S1). SGC7901 cells were co-transfected with 800 ng of EZH2-promoter-Luc and 6 ng of the Renilla luciferase plasmid pRL-TK using Lipofectamine 2000 transfection reagent (Life Technologies), with or without IL-6 treatment (1000 U/ml). Cells were also transfected with 300 ng of siSTAT3 to inhibit STAT3 signaling (Life Technologies). At 24 h after transfection, the cells were washed, lysed, and evaluated sequentially for firefly and Renilla luciferase activities (Promega) using a BD Monolight 3010 luminometer (BD Biosciences) or a Lumat luminometer (LB 9507, Germany).
The results were normalized to Renilla luciferase activity and expressed relative to the activity of the untreated cell group transfected with EZH2-promoter-Luc. Promoter activity is reported as mean ± SD.

Chromatin immunoprecipitation (ChIP) and quantitative real-time PCR assay

SGC7901 parental cells, or cells with STAT3 knockdown, were stimulated with or without IL-6 (1000 U/ml), fixed with 1% formaldehyde, lysed for 10 min at 37°C, and sonicated to obtain sheared DNA fragments of approximately 200-1000 bp. The chromatin was then incubated and precipitated with anti-STAT3 antibody or IgG (Santa Cruz), after which DNA-protein immunocomplexes were collected using protein A/G-agarose beads (Pierce Biotechnology, Rockford, IL, USA) and treated with RNase A (Sigma) and proteinase K (Sigma). Primers were designed to amplify the Stat3-enriched promoter region of the EZH2 gene. Real-time PCR was performed on the STAT3, rabbit IgG, and input samples from SGC7901 cells. ChIP DNA enrichment was quantified using SYBR Green Master Mix reagents on an ABI PRISM 7900HT sequence detection system (pre-denaturation at 95°C for 5 min, followed by 40 cycles of 95°C for 10 s, 60°C for 10 s, and 72°C for 30 s). Ready-to-use primers were employed for studying transcriptional regulation of EZH2 at or around its transcriptional start site (TSS). The enrichment of Stat3 motif binding at or around the TSS of EZH2 could be reliably detected and quantified by ChIP real-time PCR, or the PCR products were run on a 2% agarose gel in 1× TBE buffer.

Electrophoretic mobility shift assay (EMSA)

Nuclear extracts for EMSA were prepared using the Applygen protocol (Applygen Technologies Inc., Beijing, China). For the mobility shift assay of STAT3-DNA binding activity, a nucleotide sequence corresponding to the 5′-flanking region of human EZH2, containing three conserved STAT3-binding motifs, was used (Additional file 1: Figure S1).
The EMSA probe contained the second conserved STAT3-binding motif (Additional file 1: Table S1). DNA probes for EMSA were synthesized as oligonucleotides (Sangong, Shanghai; Additional file 1: Table S1). The "hot probe" was generated by labeling the 5′-end with biotin. In addition, a "cold probe" or "mutation probe" lacking 5′-biotin labeling was employed in competitive EMSA to assess the involvement of STAT3 (Additional file 1: Table S1). SGC7901 cells were pre-treated with or without STAT3 siRNA, and nuclear protein was extracted as described previously. EMSA was carried out with a Gel Shift Assay System (Promega) in accordance with the manufacturer's recommendations. Nuclear protein (10 μg) was pre-incubated in a final volume of 15 μl of buffer containing 10 mM Tris-HCl (pH 7.5), 1 mM MgCl2, 50 mM NaCl, 0.5 mM EDTA, 4% glycerol, 0.5 mM DTT, and 0.5 mg of poly(dI:dC) for 10 min; the biotin-labeled probe was then added, and samples were incubated for 20 min at room temperature. The protein-DNA complexes were electrophoresed on a 7% acrylamide gel and analyzed by autoradiography.

Apoptosis and caspase assay

Annexin V-FITC/PI analysis (BD Biosciences, Franklin Lakes, NJ, USA) was used to measure apoptosis induction in siSTAT3-transfected groups with or without treatment with the EZH2 inhibitor DZNep. Harvested cells were washed twice in buffer and re-suspended at a concentration of 5 × 10^5 SGC7901 cells in 1 ml of buffer containing at least 40 mM Ca2+. Cells were then added to a tube containing 5 ml of fluorescent Annexin V/PI. Fluorescence was quantified on a Becton Dickinson FACScan flow cytometer (BD Biosciences) for at least 10,000 events. After transfection with siSTAT3 (or scrambled siRNA), cells were cultured for 12 h in 6-cm dishes containing serum-free medium with or without DZNep.
The cells were washed with 1× PBS and re-suspended in lysis buffer. Caspase-3/9 activity was assessed using a Colorimetric CaspACE™ Assay System (Promega). The lysate was mixed with Z-DEVD-pNA or Z-LEHD-pNA in microplates according to the manufacturer's protocol, and the plates were read at OD 405 nm.

Statistical analyses

Data were analyzed using SPSS (version 16.0; SPSS Inc., Chicago, IL). Statistical significance was evaluated using chi-square (χ2) and Mann-Whitney U tests. Spearman rank correlation coefficients and Fisher's exact tests were used to assess associations between clinicopathological variables. Kaplan-Meier survival analysis, followed by the log-rank test for pair-wise comparisons, was used to analyze the influence of STAT3 and EZH2 protein expression on overall survival of GC patients. A P-value of < 0.05 was considered significant, and exact two-sided P values are reported.
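As an illustration of the chi-square test of independence named above, the following pure-Python sketch compares apoptotic versus non-apoptotic cell counts between two hypothetical treatment groups (the counts are invented for illustration and are not the study's data); the 1-degree-of-freedom p-value uses the identity P(χ² > x) = erfc(√(x/2)):

```python
import math

def chi_square_2x2(table):
    """Chi-square test of independence for a 2x2 contingency table
    (no continuity correction). Returns (statistic, p_value)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    # For 1 degree of freedom: P(chi2 > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: [apoptotic, non-apoptotic] cells per group
treated = [30, 70]   # e.g., siSTAT3 + DZNep (illustrative numbers)
control = [10, 90]   # e.g., untreated (illustrative numbers)
stat, p = chi_square_2x2([treated, control])
print(f"chi2 = {stat:.2f}, p = {p:.2e}")
```

For these illustrative counts the statistic is 12.5 and the p-value falls well below the 0.05 significance threshold used in the study.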
Potential Risk Factors for Cutaneous Squamous Cell Carcinoma Include Oral Contraceptives: Results of a Nested Case-Control Study

Recently, a population-based case-control study observed a 60% increased odds ratio (OR) for cutaneous squamous cell carcinoma (SCC) among women who had ever used oral contraceptives (OCs) compared with non-users (95% confidence interval (CI) = 1.0-2.5). To further characterize the putative association between OC use and SCC risk, we conducted a nested case-control study using a large retrospective cohort of 111,521 Kaiser Permanente Northern California members. Multivariable conditional logistic regression was used to estimate ORs and CIs, adjusting for known and hypothesized SCC risk factors. Pre-diagnostic OC use was associated with a statistically significant increased OR for SCC in univariate analysis (OR = 2.4, CI = 1.2-4.8), with borderline statistical significance in multivariable analysis (OR = 2.0, CI = 0.91-4.5). Given the high incidence of SCC in the general population and the prevalent use of OCs among women in the United States, there is a need for more large, carefully designed epidemiologic studies to determine whether the observed association between OC use and SCC can be replicated and to better understand the etiologic basis of an association if one exists.

Introduction

Cutaneous squamous cell carcinoma (SCC) is the second most common malignancy in the United States [1], and its incidence is steadily rising [2,3]. Although surgical excision is often curative, cutaneous SCCs can metastasize and become fatal, especially in immunocompromised patients [4,5]. Treatment can cause significant disfigurement and morbidity and accounts for high health-care expenditure [6]. Known risk factors associated with SCC can be divided into host-related and environmental factors.
Host-related factors include innate pigmentary characteristics [7,8], sun sensitivity [9], precursor lesions [10], genetic predisposition (xeroderma pigmentosum, epidermodysplasia verruciformis) [11], and immunologic factors [12]. Environmental factors include exposure to physical agents, the best characterized of which is ultraviolet light [13-15]. Both UVA and UVB light initiate and promote carcinogenesis [16] and immunosuppression [17]. Chronic cumulative exposure to ultraviolet radiation is an established risk factor for SCC [7,9,18]. Other physical agents include ionizing radiation [18]; chemical agents (polycyclic aromatic hydrocarbons, arsenic, and nitrosamines) [11], medications used for immunosuppression, chronic inflammation (ulcers, sinus tracts), trauma (burns and scars), and viruses (certain types of human papillomavirus) [20,21] also have been implicated. Epidemiological studies suggest that individuals with cutaneous SCC are more likely to develop other malignancies than individuals with no history of non-melanoma skin cancer (NMSC) [22-24]. Additionally, patients with a history of NMSC show an increased incidence of and mortality from leukemia, non-Hodgkin's lymphoma, and cancers of the lung, bladder, breast, testis, salivary gland, small intestine, and pharynx [25]. According to several recent epidemiological studies, a history of NMSC may be associated with 20 to 30 percent increased mortality from another type of cancer [23,25]. Understanding exposures that predispose individuals to cutaneous SCCs may also shed light on risk factors for these other types of malignancies. Recently, a population-based case-control study reported a statistically significant increased risk estimate for SCC among oral contraceptive (OC) users compared with non-users [26].
Using an established cohort of Kaiser Permanente Northern California (KPNC) members with data on self-reported pre-diagnostic characteristics, we performed a nested case-control study to further examine the putative association between OC use and SCC in the context of other environmental and host-related risk factors.

Study Population

The source population consisted of members of KPNC who had completed at least one Multiphasic Health Checkup (MHC) between July 1964 and August 1973. The MHC, initiated in 1964, was a voluntary, comprehensive health evaluation that included a detailed self-administered MHC questionnaire (MHCQ), a standardized physical examination, and a group of specialty examinations [27]. The MHC has been used for numerous risk factor studies [28-30]. This study focused on the program's medical center in Oakland, which provided health care to many of the area's adult subscribers [31] and had computer-stored surgical pathology records starting in 1974. These pathology records were coded by histology using Systematized Nomenclature of Human and Veterinary Medicine (SNOMED) codes [32]. Tumor registry records, mostly complete from 1969, allowed us to identify patients with prior histologically confirmed cancers (other than NMSCs) within at least five years of their initial SCC diagnosis. These patients (~19%) were excluded from the selection of patients with SCC in order to minimize the immediate effects of surveillance bias, treatment effects, disease-induced immunosuppression, and the possibility of residual cancer. However, patients with more distant self-reported, physician-diagnosed cancers, ascertained from their earliest MHCQ, were included in the selection of SCC cases, and this history was analyzed as a covariate in our statistical model. We examined the histology codes of the 111,521 MHC cohort members enrolled in Oakland between 1964 and 1973 and identified individuals who had a histology code for a cutaneous SCC during follow-up from 1974 through 1995.
For each case, we randomly selected up to five controls who were members at the time of case diagnosis and who were matched for age at the time of examination (±2 years), gender, residential postal zip code, and year of health checkup (±5 years). Excluded from the analysis (cases and controls) were 157 (4.5%) non-Caucasians, 12 (0.34%) participants diagnosed with genital SCCs, and 82 (2.4%) participants with missing values on the MHCQ for smoking status, occupational exposures, history of cancer, birth control pill use (if female), eye color, or aspirin use. A total of 392 (76%) SCC cases had two or more matched controls. The final analysis dataset consisted of 516 cases and 1,690 controls. This study was approved by the Institutional Review Board of KPNC. Finally, we ascertained self-reported regular use of three classes of medications postulated a priori to be associated with SCC: (1) aspirin ("six tablets or more of aspirin including Empirin, Anacin, or Bufferin almost every day"), hypothesized to decrease SCC risk [33,34]; (2) "cortisone type medication," hypothesized to increase SCC risk due to immunosuppression [35]; and (3) oral contraceptives (birth control pills), hypothesized to increase SCC risk due to their association with anogenital SCCs [36].

Statistical Analysis

Odds ratios (ORs) were used as the measure of association for binary outcome variables and were computed using conditional logistic regression [37]. The normal theory approximation using Wald's method was used to determine the 95% confidence intervals (CIs) for the OR estimates (ORs, hereafter referred to as risk) [38]. All variables were included in the multivariable model except for occupational exposures not associated with SCC risk in the univariate model. Analyses of oral contraceptive use were limited to women. Statistical analyses were performed using SAS, version 9.1 (SAS Institute Inc., Cary, NC).

Results

The 516 participants with SCC included 321 men and 195 women.
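The Wald-method confidence interval described in the Statistical Analysis section can be sketched in a few lines of Python. The counts below are invented for illustration (they are not the study's data): for a 2x2 table of exposed/unexposed cases and controls, OR = ad/bc, and the 95% CI is exp(ln OR ± 1.96·SE) with SE = √(1/a + 1/b + 1/c + 1/d). Note that this is the simple unmatched estimator; the study itself used conditional logistic regression to respect the matched design.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald-type confidence interval.
    a, b: exposed/unexposed cases; c, d: exposed/unexposed controls.
    z = 1.96 gives a 95% interval under the normal approximation."""
    or_ = (a * d) / (b * c)
    # Standard error of ln(OR) from the four cell counts
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Illustrative counts only (hypothetical OC-exposed/unexposed cases and controls)
or_, lo, hi = odds_ratio_wald_ci(a=10, b=5, c=20, d=40)
print(f"OR = {or_:.1f}, 95% CI = {lo:.2f}-{hi:.2f}")
```

An interval that excludes 1.0, as in the univariate result above, corresponds to statistical significance at the 0.05 level; an interval spanning 1.0 does not.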
The mean age was 71.4 years (standard deviation 11.3, range 33-97). The majority (62%) of SCCs were diagnosed on the head and neck, consistent with previous reports [3,39]. The remaining tumors were located on the upper extremities (11%), trunk (9%), lower extremities (9%), and other non-genital skin (10%).

Host Factors Associated with SCC

Marital Status. Risk for SCC was increased among women who were currently or ever married, but CIs spanned unity. Education. Women who completed education beyond high school were at increased risk for SCC compared with those with less education. Although positive in direction, the effect was diminished among men. Body Mass Index. Risk for SCC was not increased for any category of body mass index. Eye Color. Blue/gray eye color was associated with an increased risk for SCC. Participants who were classified as having "other" eye color also had a higher risk for SCC; we assumed that the "other" category was more likely to represent light-colored eyes, such as hazel. Also, among women, but not men, green eye color was associated with an increased risk for SCC. Personal History of Cancer. Men, but not women, who self-reported a history of cancer had an increased risk of SCC.

Environmental Factors Associated with SCC

Cigarettes. There was no significant association of SCC with cigarette smoking history among men, whether comparing never smokers with former or current smokers. A borderline increased risk for SCC was observed for women who were former, but not current, smokers. UV Exposure. Although the MHCQ did not specifically inquire about sun exposure habits, it obtained information on occupational UV exposure and time spent in leisure activities or exercise, which we reasoned would account for the majority of exposure to UV. Men who spent two or more hours a day in leisure activities had a higher risk for SCC than those who spent less than two hours, and the risk estimate increased with increasing time spent in leisure activities.
An increased risk for SCC was observed among women who spent 5-7 hours a day in leisure activities, but the effect was not statistically significant at other levels of exposure. Except for a borderline significant effect for women who exercised 2-4 hours a day, this variable was not associated with an increased risk for SCC when controlling for other variables in the model. Among men, occupational exposure to UV radiation was the strongest predictor of SCC risk in our multivariable model. Oral Contraceptive Use. We found a statistically significant increased risk for SCC in regular users of OCs in the univariate model (Table 1). In the multivariable model, the OR changed little, although the confidence interval widened, yielding an association that was no longer statistically significant. Aspirin and Cortisone. Women who consumed more than six aspirin tablets a day had a decreased risk for SCC, but CIs were wide. Cortisone-type medications were not associated with SCC risk. Other Occupational Exposures. Exposure to dust (asbestos, cement, or grains) was associated with decreased SCC risk among men. Only two women in our study reported this exposure, and neither was diagnosed with SCC. The remaining nine occupational exposure categories were not associated with future SCC.

Discussion

We observed a borderline statistically significant association between oral contraceptive use and subsequent SCC risk among women in our cohort. If not merely due to chance or study bias, a possible explanation is that oral contraceptives, which contain estrogen (ethinylestradiol) and/or progesterone (progestin or synthetic progesterone-like compounds), alter the serum estradiol/progesterone ratio, which may influence the oncogenic potential of cutaneous squamous cells [40,41]. Estrogen receptors are present in normal skin [42].
However, SCCs are not believed to express significant amounts of sex hormones, suggesting that an association between OC use and SCC may be mediated through non-sex-hormone pathways [43]. One possible pathway is p53, as estrogen appears to inhibit the actions of this tumor suppressor [44]. Inactivation of the p53 gene is believed to play a pivotal gatekeeper role in SCC carcinogenesis [45-47]. Risk also may be indirectly increased through interactions with polymorphisms in nucleotide excision repair (NER) genes, including Xeroderma pigmentosum group D (XPD) [26]. Another possibility is that women who use birth control pills may have certain lifestyle factors, such as increased sexual activity, that make them more likely to harbor HPV. Infection with some HPV types has been implicated in the pathogenesis of cutaneous SCCs in immunocompetent hosts [1,21,48-52]. An association between birth control use, HPV risk, and cervical SCC has been reported [36,53-55]; a similar association could hold for cutaneous SCC. To the best of our knowledge, only one published paper to date has studied the association between OC use and SCC [26]. Overall, OC users had an adjusted odds ratio (OR) of 1.6 for SCC (CI = 1.0-2.5). ORs also were higher among women who last used OCs ≥ 25 years before diagnosis (OR = 2.1, CI = 1.2-3.7), and within-group ORs increased with duration of use (OR for ≤ 2 years, 1.7, CI = 0.9-3.5; OR for 3-6 years, 2.6, CI = 1.0-6.5; OR for ≥ 7 years, 2.7, CI = 0.9-8.5; P trend = 0.01). Our results support these previously published findings. The epidemiology of SCCs has been difficult to characterize because conventional national registries, such as the Surveillance, Epidemiology, and End Results (SEER) program [56], exclude NMSCs.
The unique advantages of the Kaiser Permanente Northern California (KPNC) setting are that it closely mirrors the surrounding population, serving nearly one-third of the insured population of Northern California, and that it houses an electronic database capturing information on all pathology specimens received for examination, allowing for thorough and accurate capture of incident SCCs. Recall bias was not a concern in our study because OC use was recorded prior to the diagnosis of SCC. One potential limitation of this study is residual confounding due to indirect measurement of sun exposure, a known risk factor for SCCs [7,9,18]. We used occupational UV and time spent in leisure activities or exercise as surrogate markers for sun exposure, reasoning that sun exposure comes from two primary sources: time spent in the sun for leisure/exercise and time spent in the sun related to one's occupation. Among men, and to a lesser degree women, the strength of association with leisure time activities increased with time spent, supporting the assumption that time spent in leisure activities is correlated with UV exposure. However, leisure time activity may have been an inexact surrogate measure of sun exposure, as the prompting examples ("Hobby, TV, etc.") given for the question on the MHCQ were vague and may have been interpreted as activities that did not involve sun exposure. Nor did the question differentiate between sun-related and non-sun-related leisure time activities. Similarly, the MHCQ did not specify the type or duration of exercise, or whether the exercise was performed indoors versus outdoors. Surprisingly, time spent in exercise was slightly protective among men. Although this may seem counterintuitive, it is conceivable that exercise conveys health benefits that may offset the negative effect of UV exposure.
Furthermore, some participants may have underreported occupational sun exposure since they might not have known that "ultraviolet radiation" was a surrogate term for "sun exposure." While our surrogate measures of sun exposure were inexact, there is no inherent reason to believe that sun exposure is associated with OC use, so this is unlikely to have affected the risk estimates for this variable. Attenuation of risk estimates may have occurred if some control subjects had SCC diagnosed outside of the KPNC system. This is unlikely because KPNC is a comprehensive healthcare system and members would have to pay out-of-pocket for services received outside the health plan. Also, the exposures that we studied were obtained at a single point in time and were not measured over the entire study follow-up period. The possibility remains that women who take oral contraceptives may be more likely to interact with the healthcare system, and the increased risk for SCC may be due to detection bias; however, most SCC is diagnosed years after discontinuing OC use. Women with a history of OC use may have differentially received greater screening for cervical cancer. In our analysis, we adjusted for level of education, a factor believed to influence screening behavior. In the current study, OC use was crudely measured as "ever/never" exposed and did not include information on dose, duration of OC use, pill composition, or serum estradiol and progesterone levels. The potency and overall dose of OCs have changed over time. Our results reflect the use of earlier formulations of OCs, when hormone doses were considerably higher, and may no longer hold for present-day OCs. Nonetheless, this study is an important addition to the literature because the use of OCs was recorded prior to skin cancer diagnosis.
Although KPNC is generally reflective of the broader population in Northern California [57,58], there are some differences that may have introduced uncontrolled factors regarding ethnicity or behavior. In addition, individuals who self-selected for the MHCQ may have differed from the larger KPNC population on these as well as other distinguishing characteristics that were not directly measured and could have introduced selection and/or detection bias. However, these factors probably did not affect the overall validity of our case-versus-control comparisons. Occupational exposure to dust from asbestos, cement, or grain was included as a potential confounding factor in our model based on our hypothesis that occupational dust may coat the skin and form a physical barrier to UV light. Previous reports of occupational risk and keratinocyte tumors have not noted this specific association [59]. Exposure to agricultural dusts has been associated with decreased lung cancer risk [60], suggesting a possible anticancer effect independent of an interaction with UV light. However, a healthy-worker survival effect and reduced smoking among farmers may be a more plausible explanation for the reduced risk observed among men in our study [61]. A limitation of our analysis was that the MHCQ grouped all three types of dust into one category, and the association of each with SCC might differ. The MHCQ also did not differentiate between types of cancer in the self-reported personal history of cancer question. Thus, it was not possible to determine whether the association between prior self-reported cancer and SCC risk was due to a personal history of NMSC versus other cancers. NMSC is the most common cancer in the United States [4], and a history of NMSC is a major risk factor for new primary SCCs [62]. In our study, risk was higher only among men who self-reported a history of cancer, which may reflect a higher incidence of NMSC in this group [63,64].
Although our main-effects multivariable analysis was adjusted for self-reported personal history of cancer, it is possible that the resulting estimate for OC use was affected by residual confounding. We did not detect a consistent association between smoking and SCC risk, as has been reported by some studies [65][66][67][68]. Our finding may reflect a different study population, which may be susceptible to different gene-environment interactions. Alternatively, it may be due to our simple smoking history classification (i.e., never, former, current), which did not account for pack-years smoked, filtered versus unfiltered cigarettes, and other detailed smoking information. However, the possibility remains that smoking does not increase SCC risk, as was observed in a large occupational cohort study [69]. Given the uncertainty of our smoking variable, we cannot rule out residual confounding in our observed association between OC use and SCC. The association of SCC with innate pigmentary factors, such as light eye color, is well established [7,8] and is supported by our data. Our results indicate that environmental exposures used as surrogate markers for UV exposure (occupational UV in the past year and time spent in leisure activities) also were highly correlated with SCC risk, as expected. Similarly, our finding of an association between higher education and SCC risk has been previously reported [19,70,71]. Education level may affect SCC risk through socioeconomic status, leading to differences in lifestyle and health-seeking behavior. Individuals with more education may have higher socioeconomic status, allowing them to take mid-winter vacations in sunny locations, leading to higher SCC risk due to more frequent episodic sunburns. Alternatively, higher education may lead to detection bias if more educated individuals are more likely to seek health care. Further studies on the mechanisms underlying the association between education and SCC risk are needed.
In summary, we observed a borderline statistically significant increase in SCC risk with use of oral contraceptives, similar to that reported in a recent case-control study. On the present evidence, it is not possible to definitively answer the question of how OC use influences SCC risk, if such an association exists, or to favor any specific hypothesis. If confirmed in future studies, these results will lead to new insight into the etiology of SCCs.
Transient Cell Cycle Induction in Cardiomyocytes to Treat Ischemic Heart Failure

The regenerative capacity of the heart after myocardial infarction (MI) is limited. Our previous study showed that ectopic introduction of Cdk1/CyclinB1 and Cdk4/CyclinD1 complexes (4F) promotes cardiomyocyte proliferation in 15-20% of infected cardiomyocytes in vitro and in vivo and improves cardiac function after MI. Here, we aim to identify the necessary reprogramming stages during the forced cardiomyocyte proliferation with 4F on a single-cell basis. We also aim to begin the first preclinical testing to introduce 4F gene therapy as a candidate for the treatment of ischemia-induced heart failure. Temporal bulk and single-cell RNAseq and further biochemical validations of mature hiPS-CMs treated with either LacZ or 4F adenoviruses revealed full cell cycle reprogramming in 15% of the cardiomyocyte population 48 h post-infection with 4F, which was associated with sarcomere disassembly and metabolic reprogramming. Transient overexpression of 4F, specifically in cardiomyocytes, was achieved using a polycistronic non-integrating lentivirus (NIL) encoding the 4F, each factor driven by a TNNT2 promoter (TNNT2-4F-NIL). TNNT2-4F-NIL or control virus was injected intramyocardially one week after MI in rats or pigs. TNNT2-4F-NIL-treated animals showed significant improvement in left ventricular ejection fraction and scar size compared with the control virus-treated animals four weeks post-injection. In conclusion, the present study provides a mechanistic demonstration of the process of forced cardiomyocyte proliferation and advances the clinical feasibility of this approach by minimizing the oncogenic potential of the cell cycle factors using a novel transient and cardiomyocyte-specific viral construct.

rat injection, echocardiography; M.S. and K.M.K.: echocardiography and MRI analyses; Y.G., Y.H., and Y.N.: mouse surgery, modRNA, and viral injection.
L.M., P.K.L., and B.G.H.: metabolic analysis; K.C. and R.T.: bioinformatics analyses; B.M.A. and J.S.: electrophysiology analysis; H.R.J., A.S., Z.I., A.M.S., and S.H.: histology and analyses including staining, imaging, and quantifications. A.S.E.: MRI imaging quantification; modRNA design and supervision of rat and pig

Introduction

The mammalian cell cycle involves numerous feedback loops that either permit or prevent cell division depending on the cell type 1,2. Although fetal myocytes proliferate to achieve cardiac growth and tissue-specific stem cells undergo cytokinesis postnatally, differentiated cells typically become post-mitotic and permanently exit the cell cycle 3. As a result, the regenerative capacity of most organs, including the heart, is limited. The ability to control proliferation in the postnatal heart would represent a powerful approach to promote cardiac repair after infarction. Recently, we took a combinatorial approach to screen for factors and conditions that could recapitulate the fetal state of cardiomyocyte division. We found that ectopic introduction of the Cdk1/CyclinB1 and the Cdk4/CyclinD1 complexes (4F, i.e., four factors) promoted cell division in at least 15% of mouse and human cardiomyocytes in vitro 4. Rigorous assessment of cardiomyocyte cytokinesis in vivo using the Cre-recombinase-dependent Mosaic Analysis with Double Markers (MADM) 5 lineage-tracing system revealed similar efficiency in mouse hearts, leading to cardiac regeneration upon delivery of the 4F one week after myocardial infarction 4. Moreover, in vitro and in vivo results show that myocytes undergo only one round of division after transduction with these cell cycle factors because the overexpression of the 4F in cardiomyocytes is self-limiting through proteasome-mediated degradation of the protein products 4.
Interestingly, a recent study showed that AAV-mediated expression of microRNA-199a in infarcted pig hearts could initially stimulate cardiac repair through induction of cardiomyocyte proliferation; however, subsequent persistent and uncontrolled expression of the microRNA resulted in sudden arrhythmic death of most of the treated pigs 6. Our approach is currently one of the most robust methods for inducing cardiomyocyte proliferation; however, the timing, dosage, and specificity of this therapy's expression must be tightly controlled. Therefore, there is a need to transiently and specifically express 4F in cardiomyocytes to induce one cycle of proliferation and to avoid any adverse effects in the heart, or in other tissues should the construct escape. Uncertainty regarding the mechanisms underlying the functional improvement observed with cell-based therapies is a significant limitation of these approaches and has led to skepticism about their clinical applicability (reviewed in 7,8). Therefore, prior to starting any translational efforts, we first sought to gain mechanistic insights into the cell cycle reprogramming at a single-cell transcriptomics level in a temporal manner following overexpression of the 4F in cardiomyocytes. We were able to identify the main reprogramming steps needed for the cardiomyocytes to complete the cell cycle. These findings provide an essential foundation that enables one to ascribe subsequent cardiac functional improvements to the generation of new cardiomyocytes. We then provided the first proof-of-concept evidence for this approach's clinical applicability by minimizing any oncogenic potential of the cell cycle factors using a novel transient and cardiomyocyte-specific viral construct in rat and pig models of heart failure.
Results

Temporal bulk and single-cell RNAseq of mature hiPS-CMs reveals that cell cycle reprogramming is associated with sarcomere disassembly and metabolic reprogramming during forced cell cycle induction

Previously, we reported that 4F induces at least 15% of mouse and human cardiomyocytes in vitro and in vivo to undergo proliferation within the first 48 h post-infection 4. This high percentage of proliferating cells within the bulk population provided confidence that temporal bulk RNAseq could identify the significant transcriptional reprogramming events during the progress of cardiomyocyte proliferation. Therefore, to investigate the transcriptional changes during cell cycle progression, we conducted temporal bulk RNAseq on 60-day-old mature hiPS-CMs treated with either LacZ (control) or 4F adenovirus for 24, 48, and 72 h (Figs. 1 & 2). These data suggest that cardiomyocytes need to withdraw from their primary function of contraction in order to enter the cell cycle. These transcriptomic changes were functionally confirmed using time-lapse impedance contractility tracing. We found that cardiomyocyte contractile force declined significantly during proliferation (48 h post-infection), which coincided with the appearance of arrhythmic episodes; normal contractile force and rhythm returned to baseline levels 84 h post-viral infection (Fig. 2c). Furthermore, during the G2/M phase, especially during anaphase and cytokinesis, the sarcomeric structures were disrupted, as shown by troponin-T immunostaining of the sarcomeres (Fig. 2d). The impedance-based force measurements indicate the contractile force generated by the monolayer sheet of cardiomyocytes, which could be a mixed signal from proliferating and non-proliferating cardiomyocytes. Therefore, we investigated the implications of sarcomeric disassembly for the electrophysiological properties and ion currents of cardiomyocytes during proliferation.
The decrease of the Scn5a transcript suggested a possible change in the resulting I_Na, but the transient nature of the Scn5a decrease, along with the knowledge that transcript levels do not necessarily predict levels of functioning protein, motivated us to assess I_Na directly. Mean I_Na density tended to increase compared to LacZ (Extended Data Fig. 1a), but the difference between LacZ and the 4F 48 or 72 h time points was not significant. Cell capacitance was also not different (Extended Data Fig. 1b). We also assessed the voltage-dependence of steady-state inactivation because there is a developmental shift of the inactivation midpoint (V0.5) whereby V0.5 becomes more negative with embryonic heart maturation 9. As with current density, there was no significant difference caused by 4F (Extended Data Fig. 1c), though there is a trend towards a more negative V0.5 (Extended Data Fig. 1d). These data suggest that 4F does not necessarily reduce excitability, and there is an unexpected tendency towards larger currents with more mature properties in cells treated with 4F for 48 and 72 h compared to control cardiomyocytes. To further investigate the transcriptional modifications during cardiomyocyte proliferation at a single-cell level, we conducted temporal single-cell RNAseq of 60-day-old mature hiPS-CMs infected with either LacZ (control) or 4F adenovirus for 24, 48, and 72 h. Gene expression data were collected from 7000 cells/condition, as summarized in the UMAP plots (Fig. 3a). All cells were positive for cardiac markers (TNNT2, TNNC1, and MYH7) (Fig. 3b). Consistent with these transcriptomic changes, hiPS-CMs had markedly lower oxygen consumption and extracellular acidification rates 48 h after 4F infection, suggesting lower oxidative phosphorylation and glycolysis in proliferating myocytes (Extended Data Fig. 3a).
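The inactivation midpoint V0.5 discussed above is conventionally extracted by fitting a Boltzmann function to normalized channel-availability data. A minimal sketch of such a fit, using hypothetical illustrative data rather than the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Steady-state inactivation: fraction of channels available at prepulse voltage v (mV)."""
    return 1.0 / (1.0 + np.exp((v - v_half) / k))

# Hypothetical normalized I_Na availability data (illustrative, not from the study)
voltages = np.array([-120, -110, -100, -90, -80, -70, -60, -50, -40], dtype=float)  # mV
availability = np.array([1.00, 0.99, 0.95, 0.80, 0.50, 0.20, 0.05, 0.01, 0.00])

# Fit the Boltzmann to recover the midpoint V0.5 and slope factor k
popt, _ = curve_fit(boltzmann, voltages, availability, p0=(-80.0, 6.0))
v_half, k = popt
print(f"V0.5 = {v_half:.1f} mV, slope factor k = {k:.1f} mV")
```

A developmental shift towards maturation would then appear as a more negative fitted `v_half` between conditions.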
Furthermore, in 4F-overexpressing hiPS-CMs, stable isotope tracing experiments using 13C6-glucose demonstrated significantly higher enrichment of 13C in intermediates or end products of NAD synthesis (NAD+), the hexosamine biosynthetic pathway (UDP-HexNAc), phosphatidylethanolamine synthesis (CDP-ethanolamine), and pyrimidine nucleobase biosynthesis (uracil) (Extended Data Fig. 3b). These data indicate that proliferating cardiomyocytes diminish catabolic pathway activity and augment biosynthetic pathway activity before or during cell cycle progression.

Transient expression of the 4F using modified RNA showed low infection efficiency

The insights gained into 4F-mediated cardiomyocyte cell cycle reprogramming encouraged us to standardize the approach to treat heart failure. Our previous studies using adenoviruses provided proof of principle for the efficacy of our approach 4; however, the use of adenovirus is not clinically applicable because of the high prevalence of immune responses in humans. From our work with adenovirus, we noticed that most of the cardiomyocyte proliferation events occur within the first 48-72 h after introducing the virus, after which cardiomyocytes shut down the protein expression of the 4F through activation of the proteasome system 4. However, this was not the case in other cell types, such as neurons, where the expression and proliferation capacity last for over seven days (data not shown, as it is outside the scope of the manuscript). Furthermore, the induction of long-term cell cycle activity in the heart may become oncogenic and have deleterious effects on the heart 6. Therefore, we tested the transient expression of the 4F using modified RNA (modRNA) delivery and its ability to induce cardiomyocyte proliferation in vivo using the recently optimized method for delivering modRNA to the heart in citrate sucrose buffer 10.
modRNAs were injected into the myocardium of C57BL/6 mice in citrate sucrose buffer; we then assessed the cardiomyocyte expression of nuclear-GFP, CDK1, CCNB, CDK4, and CCND 48 h after injection (Extended Data Fig. 4a-d). The infection efficiency was very low (<0.01%) (Extended Data Fig. 4d). To obtain proof-of-principle for the efficacy of the transient expression approach in inducing proper cardiomyocyte cytokinesis, we injected the 4F+GFP modRNAs into the hearts of MADM lineage-tracing transgenic mice crossed with tamoxifen-induced alpha-MHC-Cre. Knowing that only a few cells would be informative, given the small number of recombined cells expected and the low infection efficiency of the modRNA, we identified the site of injection by nuclear GFP expression. We found significant induction of phospho-histone H3 (pHH3) in cardiomyocytes at the site of the 4F injection compared to control (Extended Data Fig. 4e), suggesting that the modRNAs can induce cell division in vivo. We also observed very few MADM recombination events that led to single-colored cells (10-15/heart) that co-localized with pHH3-positive nuclei, suggesting true cytokinesis in this setting (Extended Data Fig. 4f). These data suggest that transient expression of the 4F may be sufficient to induce cell division but will require more efficient delivery and multiple injections, as the infection efficiency and the spread of the modRNA at the injection site are very limited.

Characterization of a cardiomyocyte-specific non-integrating lentivirus system for gene therapy

The low in vivo infection efficiency of modRNA motivated us to develop and optimize an alternate strategy for 4F delivery and transient expression using a non-integrating lentivirus (NIL). NIL is known for its high infection efficiency and transient expression of the encoded protein, which is limited to 2-3 days, followed by a significant decline in expression [10][11][12][13].
Several ongoing clinical trials are aiming to treat various diseases with the lentivirus gene therapy approach 14. This rapid degradation kinetics makes NIL the optimal delivery vehicle for the 4F. Additionally, when using NIL, it is important to target only cardiomyocytes by controlling the 4F expression with the cardiac-specific troponin-T promoter (TNNT2), which we previously optimized 15.

Polycistronic TNNT2-4F-NIL induced cardiomyocyte proliferation in vitro and in vivo

Each cell cycle factor of the 4F was cloned into a lentivirus backbone under the TNNT2 promoter (4F separate lentiviruses); in addition, all 4F were cloned into one polycistronic lentivirus backbone with each factor driven by a TNNT2 promoter (4F polycistronic lentivirus) (Extended Data Fig. 6a). First, we assessed each cell cycle factor's protein expression four days post-infection in hiPS-CMs using western blot. TNNT2-4Fpolycistronic-NIL was significantly more efficient in inducing higher protein expression of all the cell cycle factors compared to the TNNT2-4Fseparate-NIL (Extended Data Fig. 6b-c). Furthermore, TNNT2-4Fpolycistronic-NIL showed 80-100% infection efficiency as assessed by immunofluorescence (Extended Data Fig. 6d-e). Four days post-infection with TNNT2-4Fpolycistronic-NIL, we found that 15-20% of the cardiomyocytes were positive for 5-ethynyl-2´-deoxyuridine (EdU), which marks new DNA synthesis, and for histone H3 phosphorylation (pHH3), which marks cells in the G2/M phase (Fig. 4a-b). Furthermore, the total cell number was increased by 20-30% (Fig. 4b). Assessments 10 days post-infection showed that the cell number increase and EdU labeling of the divided nuclei persisted; however, pHH3 was abolished, indicating the transient nature of the cell cycle induction in the cardiomyocytes and the likelihood that TNNT2-4Fpolycistronic-NIL induced only one cycle of proliferation.
To test the efficacy of the TNNT2-4Fpolycistronic-NIL in inducing cardiomyocyte proliferation in vivo, we used a cardiomyocyte cytokinesis lineage-tracing animal model (inducible a-MHC-Cre::MADM lineage tracing) 4,5,16,17. In these lineage-tracing mice, cardiomyocytes that undergo cytokinesis produce daughter cells that are either red, green, yellow (red+green), or colorless, based on allelic recombination of fluorescent reporters; if the cardiomyocytes fail to divide, they remain double-colored (i.e., yellow), or colorless if no recombination occurs. Thus, the presence of single-colored red or green cells definitively indicates cells that have undergone cell division, although dividing cells are underrepresented by single-colored cells because double-colored (yellow) or colorless cells also could have divided. Animals were subjected to a 60-min occlusion of the left anterior descending artery followed by reperfusion; one week later, TNNT2-4Fpolycistronic-NIL or LacZ-NIL (control) was injected intramyocardially (Fig. 4c). Tamoxifen injection was carried out as described in 4, starting 48 h after the virus injection and continuing for three days to initiate recombination events. Mice were sacrificed one week after the viral injections, and hearts were sectioned to enumerate the cytokinesis events. All surgeries, imaging, and microscopy analyses were blinded to treatment, and animals were decoded after all data were analyzed. The analysis was done on ten different sections from each heart across the whole myocardium. After intramyocardial injection of TNNT2-4Fpolycistronic-NIL, at least 15% of the recombinant cardiomyocytes were single-colored, compared to <1% in hearts injected with control virus (Fig. 4d-e).

TNNT2-4Fpolycistronic-NIL improves cardiac function in a rat model of heart failure

Before starting in vivo functional studies, we sought to validate that TNNT2-NIL drives cardiomyocyte-specific expression of the 4F in vivo.
Therefore, TNNT2-4Fpolycistronic-NIL or GFP-NIL control virus was injected intramyocardially, and the rats were sacrificed one week post-injection. Western blotting and immunostaining confirmed the expression of 4F in the rat hearts six days after injection (Extended Data Fig. 7a-d). Furthermore, RNA expression of the overexpressed human CDK1, CDK4, CCNB, and CCND in rat hearts was detected only in the heart and not in the other organs six days post-viral injection (Extended Data Fig. 8a). We then tested the effects of TNNT2-4Fpolycistronic-NIL on cardiac function following cardiac damage in vivo. Rats were subjected to a 2 h coronary occlusion followed by reperfusion. One week later, TNNT2-4Fpolycistronic-NIL or control TNNT2-GFP-NIL was injected into the peri-infarct region of the heart. Rats were followed for four weeks and then sacrificed, and the cardiac tissue was processed and analyzed (Fig. 5a). Gross assessment of the heart weight showed that HW/BW was significantly decreased in the hearts treated with TNNT2-4Fpolycistronic-NIL compared to the control virus-treated hearts (Fig. 5b). The TNNT2-4Fpolycistronic-NIL-treated group exhibited a significant increase in left ventricular ejection fraction four weeks post-viral injection compared to the control group, as assessed by blinded echocardiography (Fig. 5c). Consistent with the improvement in cardiac function, histological analyses of the hearts revealed an approximately 30% reduction in the scar size in hearts treated with TNNT2-4Fpolycistronic-NIL compared to control hearts (Fig. 5d). Interestingly, assessment of cell size at the border and remote zones showed a significant reduction in the cardiomyocyte cross-sectional area (Fig. 5e-f).
Because the virus was injected in the border zone, the reduction in cardiomyocyte cross-sectional area there could be due to the induction of proliferation; however, the reduction in cardiomyocyte cross-sectional area at the remote zone could be due to improvement in the overall functionality of the heart and protection from progression towards dilatation after treatment with TNNT2-4Fpolycistronic-NIL.

Double reporter to permanently label cardiomyocyte activation of aurora kinase B in vivo in large animals

To track mitotic events in vivo in large animals, we developed a new reporter system based on activation of the Aurora kinase B (AurKB) promoter region. AurKB is one of the central protein kinases that ensure the proper execution and fidelity of mitosis; it is expressed only for a short time during the cytokinesis process, localizing to the central spindle during anaphase and to the midbody during cytokinesis 18. It has been considered a putative marker for mitosis in several cell types, including cardiomyocytes [19][20][21]. A recent study demonstrated that correct positioning of AurKB to the midbody in cardiomyocytes during mitosis is positively correlated with cytokinesis and that 70% of the neonatal cardiomyocytes that express AurKB undergo complete cytokinesis with correct midbody positioning 20. To develop this reporter, we used the previously well-characterized 1.8 kb promoter region of the human AurKB gene, which is highly conserved between species 22, and cloned it into a lentivirus to drive the expression of GFP protein (Extended Data Fig. 9a). We will refer to this reporter as the AurKB-GFP reporter throughout the manuscript. We generated and validated this reporter to detect mitosis/cytokinesis events in proliferating cells, e.g., HEK293 cells (Extended Data Fig. 9a).
Live-cell imaging of hiPS-CMs overexpressing 4F over 96 h showed GFP expression during the M phase, reaching maximum intensity during cytokinesis; in contrast, no GFP expression was observed in LacZ-treated cells (supplementary movies 1 & 2). In line with a previous report 20, live imaging of hiPS-CMs treated with 4F indicated that 70% of the GFP-expressing cells completed cytokinesis, while the remaining 30% were stuck in mitosis without completing cytokinesis (Extended Data Fig. 9b and supplementary movies 1 & 2). Fixation and immunostaining demonstrated that 36 h post-infection with 4F, the GFP signal co-localized with the AurKB protein expression at the midbody during mitosis (Extended Data Fig. 9c-d). AurKB protein expression fades two days post-infection; however, the GFP protein remained and marked 20-30% of hiPS-CMs infected with 4F adenovirus, with a decline in the GFP signal after day 4 (Extended Data Fig. 9e-f). EdU nuclei labeling was observed in GFP-positive cells (Extended Data Fig. 9e), indicating completion of the S phase before entering the M phase. After demonstrating the AurKB promoter region's ability to reliably indicate mitosis (100%) and cytokinesis (70%) events in cardiomyocytes through transient expression of GFP, we developed a permanent marker for mitosis to be used in vivo and in situ. To this end, we developed a double reporter system to track mitotic events in situ and in vivo based on the AurKB promoter described above. In this reporter system, we cloned a Lox-DsRed-Stop-Lox-GFP construct under the CAG promoter in a lentivirus; in another lentiviral construct, we cloned the Cre-encoding protein sequence under the influence of the AurKB promoter (AurKB-Cre) (Extended Data Fig. 10a).
Using this double reporter system, all infected cells become DsRed-positive; when mitosis occurs, Cre recombinase is expressed and cuts out the DsRed-Stop sequence, turning these cardiomyocytes permanently into GFP-positive cardiomyocytes. Therefore, the presence of green cells indicates mitotic events. We first validated the color switch in normally proliferating cells (HEK293) (Extended Data Fig. 10b). We then further validated the efficiency of this reporter system in detecting mitotic events induced by 4F in hiPS-CMs (Extended Data Fig. 10c-d) and in pig heart tissue in situ using our recently developed heart slice culture system 23,24 (Extended Data Fig. 10e-f). This reporter system indicates the number of infected cells (total red- and green-labeled cells) and the number of mitotic events (green-labeled cells). Therefore, the fraction of green cells among all labeled cardiomyocytes provides a quantification of mitotic cardiomyocytes.

TNNT2-4Fpolycistronic-NIL induces cardiomyocyte proliferation, improves cardiac function, and reduces scar size in a porcine model of heart failure

As a proof of concept for our approach's efficacy in inducing transient cardiomyocyte proliferation in large animals, we injected TNNT2-4Fpolycistronic-NIL or control LacZ-NIL into the pig myocardium one week after induction of myocardial infarction with a 90-min coronary occlusion followed by reperfusion (Fig. 6a). The double reporter system was co-injected into the border zone with the therapeutic or control virus to assess the extent of cardiomyocyte proliferation induced by the TNNT2-4Fpolycistronic-NIL. Four weeks post-treatment, every pig treated with TNNT2-4Fpolycistronic-NIL showed significant improvement in gross heart failure measures such as HW/BW and LW/BW (Fig. 6b). Furthermore, the cardiac functional parameter, ejection fraction, assessed by blinded echocardiography (Fig. 6c) and MRI (Fig. 6d, Extended Data Fig.
11 and supplementary movies 3-6), demonstrated significant improvement in animals treated with TNNT2-4Fpolycistronic-NIL compared to control virus-treated pigs. TNNT2-4Fpolycistronic-NIL-treated pigs also exhibited a 25% reduction in scar size compared to control pigs (Fig. 6e). In TNNT2-4Fpolycistronic-NIL-treated animals, 30% of the total cardiomyocytes labeled with the double reporter system at the injection site were GFP-positive, indicating cardiomyocyte mitotic activation. In contrast, almost no background proliferation was detected in control virus-treated hearts (Fig. 6f-h), supporting the concept that the improvement in function is due to induction of cardiomyocyte proliferation.

Discussion

Direct induction of the cell cycle using 4F is one of the most robust methods of inducing cardiomyocyte proliferation 25,26; however, it is essential to understand the mechanism of action of this potential heart failure gene therapy and to tightly control its timing, dosage, and specificity of expression in cardiomyocytes. Here, we describe the essential processes associated with forced cardiomyocyte proliferation following cell cycle induction, including sarcomeric disassembly and metabolic reprogramming, in a temporal sequence and at a single-cell transcriptomic level. Furthermore, we provide the first proof of concept for this approach's efficacy in improving cardiac function after infarction in a large animal model using a transient and cardiac-specific gene therapy approach. Understanding the process of human cardiomyocyte proliferation and the reprogramming steps needed for the cardiomyocytes to complete the process is essential to advance the field of cardiac regeneration. Several efforts comparing proliferating fetal cardiomyocytes and adult cardiomyocytes have yielded a certain degree of understanding of the process 25.
However, the highly variable nature of fetal and adult cardiomyocytes limits the ability to elucidate the reprogramming events during proliferation. In the present study, we attempted to identify the essential reprogramming events associated with cell cycle induction by monitoring the same human cardiomyocytes during proliferation at a single-cell transcriptomic level. First, we found that sarcomeric disassembly is an essential step for cardiomyocyte cytokinesis. This finding is consistent with the recent suggestion that proteins responsible for sarcomere assembly, e.g., ephrin-B1, are essential elements of the cell cycle blockade in adult cardiomyocytes, as described in a recent preprint 27. Furthermore, we demonstrated that proliferating cardiomyocytes shift their metabolism from energy production to biosynthesis. This finding is consistent with the need for new building blocks for cell growth and division. Recent studies suggest that glycolysis 28,29, glucose oxidation 30, and the mevalonate pathway 31 influence myocyte proliferation; our results build upon these previous reports and support the idea that changes in metabolic activity may be required for successful myocyte division. Interestingly, a recent paper demonstrated that switching the metabolic substrate from fatty acids to glucose induces cardiomyocyte cell-cycle progression 30. Considering that glucose is a primary biosynthetic substrate in cardiomyocytes 32, this supports our finding that the switch from catabolic to biosynthetic activities is essential for cell cycle progression in cardiomyocytes. The advantage of these single-cell RNAseq data is that we reached a time resolution that enabled us to compare two subpopulations of cardiomyocytes, both of which received 4F for 48 hours; one subpopulation was delayed entering mitosis while the other was in mitosis. This long-awaited comparison will impact the field of cardiac regeneration.
This comparison was not possible before because no approach to induce cardiomyocyte regeneration had reached the efficiency we achieved with the 4F. Furthermore, direct induction of the cell cycle is a clean approach to understanding the mechanism of cardiomyocyte proliferation, unlike other approaches that have many off-target effects, e.g., manipulations of YAP 33, a developmental gene that induces dedifferentiation, and use of microRNAs 6, which have multiple off-target effects. The insights we gained into the process of cardiomyocyte proliferation, and the ability to define the 15% cardiomyocyte subpopulation that temporarily undergoes mitosis at 48 h (mitotic subpopulation) and track the reprogramming events in this subpopulation, motivated us to perform the next translational step: to provide proof of concept for the efficacy of this approach in improving cardiac function in animal models of heart failure. Interestingly, a recent study showed that AAV-mediated expression of microRNA-199a in infarcted pig hearts initially stimulates cardiac repair through induction of cardiomyocyte proliferation; however, subsequent persistent and uncontrolled expression of the microRNA resulted in sudden arrhythmic death of most of the treated pigs 6. Our previous in vitro and in vivo results show that myocytes undergo only one round of division after transduction with cell cycle factors because the overexpression of the 4F in cardiomyocytes is self-limiting through proteasome-mediated degradation of the protein products 4. Therefore, to perform these translational steps, we needed to transiently and specifically express 4F in cardiomyocytes to induce one cycle of proliferation and avoid any adverse effects in other tissues. Over the past decade, there have been significant advances in gene-therapy delivery approaches for transient gene expression using either modRNA 34 or NIL 13.
Although the modified RNA approach is a promising virus-free delivery system for transient expression, its poor delivery efficiency and specificity for cardiomyocytes limit its applicability. Therefore, NIL was the tool of choice for our animal studies, providing transient expression with high infection efficiency, as reviewed in 35. Our data show that a polycistronic TNNT2-4Fpolycistronic-NIL induces 4F expression only in cardiomyocytes, both in vitro and in vivo. Furthermore, TNNT2-4Fpolycistronic-NIL induces proliferation markers in hiPS-CMs and cardiomyocyte cytokinesis in vivo in MADM mice, and significantly improves cardiac function in both rat and pig models of heart failure caused by myocardial infarction. In conclusion, we have provided a mechanistic understanding of the process of forced cardiomyocyte proliferation and advanced the clinical feasibility of the 4F gene therapy approach for heart failure treatment by minimizing the oncogenic potential of the cell cycle factors using a novel transient and cardiomyocyte-specific viral construct. Further studies are needed to prove effectiveness and safety in more chronic heart failure models in large animals. These studies will pave the way for the first test of this promising approach in patients with ischemic cardiomyopathy. Limitations of the study The use of 60-day mature hiPS-CMs instead of adult primary human cardiomyocytes in the mechanistic studies is a limitation. However, there is no reliable long-term culture model of adult human cardiomyocytes, and the large size of adult cardiomyocytes makes single-cell RNAseq impossible, leaving single-nucleus RNAseq as the only alternative. More importantly, nuclear integrity differs across cell cycle stages, which would make it difficult to isolate mitotic nuclei and would obfuscate the interpretation of any nuclear RNAseq. Therefore, we used hiPS-CMs, which are highly pure cardiomyocyte cultures obtained from Cellular Dynamics, Inc.
These cells are selected after differentiation using an α-MHC-Blasticidin selection cassette. This strategy yields nearly 100% pure iPSC-CMs, as indicated by our TNNT2 immunostaining images and single-cell RNAseq data. For consistency, only cells matured for at least 60 days were used for experiments. After this time, the cells have matured to a point at which they have minimal proliferation capacity and minimal basal expression of cyclins and Cdks 4. Thus, the use of hiPS-CMs provided a homogeneous cell population as a starting material to track the reprogramming events during cardiomyocyte proliferation. The data from the AurKB reporter must be interpreted cautiously as mitotic rather than cytokinesis events. As we described here, there is a 30% overestimation of cytokinesis events reported by this reporter. Nevertheless, as we show here, the reporter reliably estimates cell cycle induction and mitotic entrance in cardiomyocytes in vitro, in situ, and in vivo in large animal models, with almost no background labeling in the control groups. We preferred to use the CAG promoter for this reporter rather than TNNT2 due to the delayed kinetics of the TNNT2 promoter (Extended Data Fig. 5b-c), which could complicate the experimental design and interpretation of results. All functional efficacy and specificity of expression were assessed four weeks after injection in an acute heart failure model in which the virus was injected one week post-I/R. Therefore, an assessment of the safety and functional efficacy of the TNNT2-4Fpolycistronic-NIL over a longer time frame (4-6 months) will be needed. In addition, further studies are needed to assess the efficacy of the TNNT2-4Fpolycistronic-NIL in a more chronic heart failure model in which the virus is injected one or two months after I/R. Methods Cloning and preparation of integrating and non-integrating lentivirus. Here, we first filtered out any genes without at least two samples with a C.P.M. (counts per million) between 0.5 and 5000.
C.P.M.s below 0.5 indicate nondetectable gene expression, and C.P.M.s above 5000 are typically only seen in mitochondrial genes. If these high-expression genes were not excluded, their counts would disproportionately affect the normalization. After excluding these genes, we renormalized the remaining ones using "calcNormFactors" in edgeR, then calculated P-values for each gene with differential expression between samples using edgeR's assumed negative-binomial distribution of gene expression. We calculated the false discovery rates (FDRs) for each P-value with the Benjamini-Hochberg method based on the built-in R function "p.adjust". Genomics) with default parameters. The "counts" matrices across the samples were aggregated using cellranger aggr. The resulting files were processed in R using the package Seurat (version 3.1.3) 36. All cells with at least three detected genes and less than 30% of reads from mitochondrial genes, and all genes detected in at least 200 cells, were used in the further analyses. The remaining data were normalized using the "LogNormalize" method. Principal component analysis of the subset of the 2000 most variable genes (Seurat function FindVariableFeatures) was then performed on the scaled data. The cells were clustered using the Louvain algorithm with a resolution parameter value of 0.5 (Seurat function FindClusters) after determining the shared nearest neighbor graph using the first ten principal components (Seurat function FindNeighbors). The data were visualized using the UMAP algorithm with the first ten principal components as input (Seurat function RunUMAP). The cells were grouped into ten clusters based on the distribution of expression of the cell cycle genes of interest. Differential analysis between all pairs of clusters was performed using the Wilcoxon rank-sum test to identify the differentially expressed genes (Seurat function FindMarkers).
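The filtering and multiple-testing logic described above can be sketched outside R; the following Python snippet (with hypothetical count values) reproduces only the CPM thresholding and the Benjamini-Hochberg adjustment, not edgeR's negative-binomial dispersion modeling:

```python
import numpy as np

def cpm(counts):
    """Counts-per-million normalization of a genes x samples matrix."""
    return counts / counts.sum(axis=0) * 1e6

def keep_genes(counts, low=0.5, high=5000, min_samples=2):
    """Keep genes whose CPM lies in (low, high) in at least `min_samples` samples."""
    c = cpm(counts)
    ok = (c > low) & (c < high)
    return ok.sum(axis=1) >= min_samples

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p-values (as in R's p.adjust(method='BH'))."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    # enforce monotonicity from the largest p-value downwards
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.minimum(ranked, 1.0)
    return out

counts = np.array([[10, 12, 9],                    # moderately expressed -> kept
                   [0, 0, 1],                      # near-undetectable -> filtered
                   [900000, 950000, 920000]])      # mitochondrial-like -> filtered
print(keep_genes(counts))
print(bh_fdr([0.01, 0.04, 0.03, 0.20]))
```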
The dimensionality reduction results were reformatted for compatibility with the learn_graph function in the R package monocle3, used for trajectory analysis 37. This analysis was done for five groups of cells: the 24 h unique cluster, 48 h pre-mitotic cells, 48 h mitotic cells, the 72 h unique cluster, and 48 h quiescent cells. Metabolic flux assessment The bioenergetics of hiPS-CMs were measured using a Seahorse Bioscience XF96e Flux Analyzer. For these experiments, the assay medium consisted of unbuffered phenol red-free DMEM, pH 7.4, supplemented with 5 mM glucose, 1 mM glutamine, 100 µM L-carnitine, 100 nM insulin, and 100 µM BSA-palmitate. Following microplate insertion, the XF96e automated protocol consisted of a 12 min delay followed by baseline oxygen consumption rate (OCR) and extracellular acidification rate (ECAR) measurements. All experiments were conducted at 37°C. Data were normalized to protein content. Stable isotope-resolved metabolomics (SIRM) hiPS-CMs were incubated in growth medium containing 5.5 mM 13C6-glucose in 6-well plates for 8 h, after which cell reactions were quenched in cold acetonitrile and extracted in acetonitrile:water:chloroform (v/v/v, 2:1.5:1), and processed as described previously for metabolite assessments using mass spectrometry [38][39][40][41][42]. Stable isotope data analyses were performed by obtaining the mass spectrometer .raw files, which were first converted to .mzML format with the msConvert tool, part of the open-source ProteoWizard suite, described in detail by Adusumilli and Mallick 43. Isotopologue peak deconvolution and assignments were performed using El-MAVEN. Peaks were assigned using a metabolite library first generated and verified using full-scan MS and MS/MS spectra of unlabeled samples, as described previously 41,42,44. The library contained metabolite names and corresponding molecular formulae, used to generate theoretical m/z values for all possible isotopologues, and retention times.
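The library's theoretical m/z values for 13C isotopologues follow from the mass difference between 13C and 12C (about 1.003355 u). A minimal sketch, assuming singly charged ions and an illustrative metabolite (lactate); El-MAVEN's actual library format is not reproduced:

```python
# Monoisotopic masses (u); the 13C-12C difference sets the isotopologue spacing.
MASS = {"C": 12.0, "H": 1.007825, "O": 15.994915, "N": 14.003074, "P": 30.973762}
C13_SHIFT = 13.003355 - 12.0   # ~1.003355 u per 13C substitution
PROTON = 1.007276

def isotopologue_mz(formula, mode="neg"):
    """Theoretical m/z for M+0 .. M+n of a singly charged ion, where n is
    the carbon count in `formula` (a dict of element counts)."""
    mono = sum(MASS[el] * k for el, k in formula.items())
    mz0 = mono - PROTON if mode == "neg" else mono + PROTON
    return [mz0 + i * C13_SHIFT for i in range(formula.get("C", 0) + 1)]

# Lactate (C3H6O3) in negative mode: M+0 .. M+3
mzs = isotopologue_mz({"C": 3, "H": 6, "O": 3})
print([round(m, 4) for m in mzs])
```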
The El-MAVEN parameters for compound library matching were as follows: EIC Extraction Window ± 7 ppm; Match Retention Time ± 0.60 min. For 13C isotopologue peak detection, the software criteria were set as follows: Minimum Isotope-parent correlation 0.5; Isotope is within seven scans of the parent; Abundance threshold 1.0; Maximum Error To Natural Abundance 100%. All assignments were visually inspected and compared with unlabeled samples for reference. Any peak groups assigned in error, e.g., not present or having different retention times than in the unlabeled samples, were deleted, and correct peak assignments were added manually, as described in 42. Finally, the peak list with corresponding abundances was exported to a comma-separated values (CSV) file and uploaded to the Polly workflow to perform natural abundance correction using Polly Isocorrect. The data were then analyzed and plotted with GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA). Immunocytochemistry and immunohistochemistry The hiPS-CMs were fixed in 4% formaldehyde for 20 min (Thermo Scientific Cat#28908). Table 1 shows a list of the primary and secondary antibodies used in this study. Cells/tissue sections were then washed three times with PBS and stained with DAPI 1 µg/ml (Biotium Cat# 40043) to stain the nuclei blue. For EdU detection, the cells were also treated with 5 µM 5-ethynyl-2'-deoxyuridine (EdU) for the course of the experiment, which incorporates into newly synthesized DNA. After fixation, permeabilization, and blocking of the cells/tissue sections, EdU incorporation was visualized using the Click-iT EdU Alexa Fluor 647 imaging kit (Thermo Fisher Cat# C10340). For live-cell imaging, the cells were treated with NucBlue live cell stain (Thermo Fisher) for 20 minutes. Imaging was conducted for the whole well using the high-content imaging instrument Cytation 1.
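The EIC and retention-time matching criteria above amount to a simple tolerance test; a sketch with hypothetical peak values:

```python
def matches(obs_mz, lib_mz, obs_rt, lib_rt, ppm=7.0, rt_tol=0.60):
    """True if an observed peak falls within the m/z ppm window and RT tolerance."""
    ppm_err = abs(obs_mz - lib_mz) / lib_mz * 1e6
    return ppm_err <= ppm and abs(obs_rt - lib_rt) <= rt_tol

# Library m/z 89.0244: a ~5 ppm error passes the +/-7 ppm window, ~10 ppm fails
print(matches(89.02485, 89.0244, obs_rt=5.1, lib_rt=5.0))  # within window
print(matches(89.0253, 89.0244, obs_rt=5.1, lib_rt=5.0))   # ~10 ppm -> rejected
```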
The percentage of co-localization of PHH3, EdU, GFP, or gene expression with Troponin-T was quantified using Gen 5.05 software. Animal experiments Animal studies were performed following the University of Louisville animal use guidelines; the protocols were approved by the Institutional Animal Care and Use Committee (IACUC) and accredited by the Association for Assessment and Accreditation of Laboratory Animal Care. MADM mice experiment For lineage tracing, we used mosaic analysis with double markers (MADM) transgenic mice, developed as described in 4. All surgeries were performed as described in 45,46. Adult (about 12 weeks old) female MADM mice were anesthetized with sodium pentobarbital (60 mg/kg i.p.). After opening the chest through a left thoracotomy, a nontraumatic balloon occluder was implanted around the mid-left anterior descending coronary artery (L.A.D.) using an 8-0 nylon suture. Myocardial infarction was produced by 60 min of coronary ischemia, followed by reperfusion (I/R). Rectal temperature was carefully monitored and maintained around 37°C throughout the experiment. Successful performance of coronary occlusion and reperfusion was verified by visual inspection and by observing S-T segment elevation and widening of the QRS on the electrocardiogram during ischemia and their resolution after reperfusion. Seven days after I/R, mice were re-anesthetized with sodium pentobarbital (60 mg/kg i.p.) and the chest was reopened through a central thoracotomy. The mice were randomly selected to be injected with 20 µL of TNNT2-4Fpolycistronic-NIL or TNNT2-LacZ-NIL virus intramyocardially using a 30-gauge needle. The injections were made at the border between infarcted and non-infarcted myocardium as two injections of 10 µL each (2×10⁷ transducing units (T.U.) per mouse heart). Forty-eight hours after injection, mice received Rat experiments All surgeries were performed as described in 47,48.
Briefly, female Fischer 344 (F344) rats aged 8-12 weeks were anesthetized with ketamine (37 mg/kg) and xylazine (5 mg/kg), intubated, and ventilated with a rodent respirator. Anesthesia was maintained with 1% isoflurane inhalation, and body temperature was kept at 37°C with a heating pad. All rats underwent a 2 h occlusion of the left anterior descending coronary artery, followed by reperfusion. Seven days after MI, echocardiography was performed to confirm the development of MI. All rats in this study had an EF drop > 20 points from baseline. Rats were randomized into two groups (TNNT2-GFP-NIL, TNNT2-4Fpolycistronic-NIL). Rats were re-anesthetized with ketamine/xylazine, intubated, and ventilated. The chest was reopened to expose the heart. Viral vectors (1×10⁸ T.U. per rat heart in 100 µl PBS) were injected into the left ventricle along the infarct border at five sites (20 µl/site) using a 30G needle. The rat surgeon was blinded to whether 4F or control non-integrating lentivirus was administered in each animal. Cardiac function was assessed by serial echocardiography at baseline (before MI), one week after MI (before virus injection), and then four weeks after virus injection. Animals were anesthetized lightly with isoflurane, placed on the imaging table in the supine position, and prepared for imaging using the Vevo 2100 Imaging System (VisualSonics) equipped with a 25-MHz transducer. Parasternal long-axis images were acquired and analyzed by LV trace using Vevo LAB 3.2.6 to obtain the LV functional parameters, including the end-diastolic and end-systolic area, volume, stroke volume, fractional shortening, and ejection fraction. Imaging and calculations were done by an individual who was blinded to the treatment, and the code was broken only after all data were acquired. At the end of the experiments (5 weeks after MI), animals were sacrificed, and their hearts were harvested for histological studies.
The frozen hearts were sectioned longitudinally into 400-500 sections of 8 µm thickness (one section discarded between every two sections mounted per slide), and one slide of every ten (20-25 slides per animal) was stained with standard Masson's trichrome staining to determine scar size. The stained sections were imaged using the Keyence BZ9000 imaging system (4X magnification). ImageJ software was used to measure the scar area (blue) and healthy area (red) on longitudinal sections. Individuals assessing scar area were blinded to the treatment applied in each animal. Pig experiments All surgeries were performed as described in 49. Yorkshire pigs weighing 25-35 kg received 200 mg amiodarone orally daily for seven days pre-operatively. Pigs were premedicated with an intramuscular injection of a solution containing ketamine hydrochloride and xylazine. Pigs were injected with a dose of buprenorphine S.R. before the procedure. To create myocardial infarction, the skin of the right neck was cut to make a small opening, allowing access to the right carotid artery. A 7-8F fast-cath sheath was introduced into the carotid artery. The pig was injected with heparin (300 units/kg I.V.) to prevent clotting of the sheaths and catheters during the procedure. After intubation, the pig was mechanically ventilated. Anesthesia was maintained with isoflurane. Body temperature was monitored continuously with a rectal probe attached to a thermocouple and maintained within the physiological range using a veterinary blanket. The pigs were allowed 5 minutes of stabilization, followed by baseline hemodynamics and echocardiography. A 6-7F hockey-stick catheter was guided to the left main coronary artery under fluoroscopy as follows: the catheter engaged the left main coronary ostium, and an angioplasty-type balloon catheter and guidewire assembly were fluoroscopically guided into the L.A.D.
Then, the wire was advanced into the distal L.A.D., and an appropriate balloon catheter was telescoped over the wire and positioned above the first diagonal branch (the entire L.A.D. territory was included for occlusion). The balloon's placement was verified by intracoronary contrast dye injection and documented by cine angiogram before inflation. The balloon was inflated to occlude the L.A.D., and the L.A.D. occlusion was maintained for 90 minutes to produce myocardial infarction, targeting an infarct size of >50% of the area at risk. Inflation and position of the balloon were verified by contrast angiogram again at the end of ischemia. If necessary, the balloon was repositioned, and such "positional re-inflation" was limited to less than 20 seconds to avoid any preconditioning. Once the balloon was inflated, external defibrillator pads were placed on the pig's chest for "hands-free" cardioversion, using a bipolar defibrillator (HP Codemaster XL+) at 300 Joules, in case ventricular fibrillation occurred. After the 90 min ischemic period, the intracoronary balloon was deflated to initiate reperfusion. After the myocardial infarction procedure, the balloon catheter was withdrawn, and a cine angiogram was taken to document the patency of the L.A.D. artery. After withdrawing the hockey-stick guide catheter, the arterial sheath was removed, and the artery was repaired by anastomosis. The skin incision was closed in 3 layers using 3-0 Vicryl for internal sutures and 3-0 P.D.S. for the final subcutaneous layer. The pig was weaned from anesthesia, extubated when appropriate, and allowed to recover. Animals received antimicrobial therapy (ceftiofur) pre-operatively and every 24 hours for the first 48 hours post-operatively. Animals were prepped and draped in a routine sterile fashion. 5% dextrose and normal saline were continuously infused during the procedures. Seven days after the MI procedure, pigs were re-anesthetized as described above.
Pigs were subjected to MRI scans and echocardiography. Anesthesia was maintained with isoflurane. Body temperature was monitored continuously with a rectal probe attached to a thermocouple and maintained within the physiological range using a veterinary blanket. Animals were prepped and draped in a routine sterile fashion. A dose of buprenorphine S.R. was given before the procedure. The chest was opened through the initial skin incision, continued down to the sternum. To avoid post-surgical pulmonary complications, a midline sternotomy was performed to expose the heart in the intrapleural space without breaking the pleural membrane, with extra care taken to keep the pleural membrane intact during the opening of the mediastinum. The heart was then suspended in a pericardial cradle, as described in 50,51. If the pleural membrane was broken, efforts were made to close the tear by re-approximation of the pleural membrane with 6-0 Prolene and to re-establish negative pressure in the pleural cavity using a withdrawal tube with a purse-string closure. After the chest was opened, the pericardium was cut vertically, and the heart was exposed and suspended in a pericardial cradle. Five intramyocardial injections (200 µL each; 3×10⁹ T.U. total per pig heart) of viral vectors were performed along the infarct border, and each injection site was demarcated with a 6-0 Prolene suture. After completing these procedures, warm normal saline (approx. 500 mL) was used to flush the thoracic cavity. This flush was suctioned out before closing the chest. The pericardium was approximated as soon as possible. The sternum was closed with a 20G stainless steel suture and 5 Green braided P.T.F.E. nonabsorbable surgical sutures.
The chest was closed in layers (0 PDS II suture for the muscle and 2-0 PDS II suture subcutaneously), and a single mediastinal tube (18F catheter), 3-way valve, and 60 cc syringe were used to re-establish negative intrapleural pressure and evacuate any remaining blood or irrigation solution. The chest tube (18F catheter) was removed before skin closing, after no visible air leak or blood accumulation, and a purse-string suture (2-0 PDS II) was used to ensure an airtight seal. The skin incision was then glued with Vetbond adhesive. The chest was then closed, the inhaled anesthetic was turned off, and the animal was extubated when appropriate and allowed to recover. Animals received antimicrobial therapy (ceftiofur) pre-operatively and every 24 h for the first 48 h post-operatively. Four weeks after virus injection, the animal was anesthetized with isoflurane as described before to perform the final cardiac MRI and echocardiography. Body temperature was monitored and maintained within the physiological range. Arterial blood pressure and surface E.C.G. were monitored continuously. The animal was then deeply anesthetized with 5% isoflurane. A bolus of 3-6 ml/kg of 3M KCl solution was injected into the left atrium until the heart arrested. After the cessation of vital signs, the heart was harvested for postmortem processing. T.T.C. stain of the pig heart Directly after the heart was harvested, the aorta was perfused with normal saline (500-1000 ml) to flush out vascular blood. The heart was weighed and transversely sliced into 5-6 sections. Heart sections were incubated in 1% T.T.C. at 37°C for 5 min. The right ventricle and atria were then removed, and LV sections were weighed. Pictures of the LV slices were taken using a professional camera. Images were analyzed using ImageJ software. Scar size percentages were calculated.
Statistical Analyses For all assays, power analyses were performed to choose group sizes that would provide >80% power to detect a 10% absolute change in the parameter with a 5% Type I error rate. These power analyses indicated a minimum of 4 experimental replicates per group; therefore, we used a range of 4-15 experimental replicates per group for each assay. Special statistical considerations for RNAseq are detailed in the methods section. The Kolmogorov-Smirnov (K-S) test for normality was conducted; all data sets showed a normal distribution. Differences between two groups were then examined for statistical significance with unpaired Student t-tests. To compare data consisting of more than two groups, we performed one- or two-way ANOVA tests followed by Bonferroni post hoc multiple comparisons to obtain the corrected p-value. A value of P<0.05 was regarded as significant. Error bars indicate S.D. The person who performed the analysis was blinded to the experimental groups. Two blinded clinical cardiology fellows assessed all cardiac function echocardiography and MRI analyses, and the data represented are the average of their analyses. There were usually no significant discrepancies between the two readings.
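The Bonferroni post hoc correction used above is simply a multiplication of each raw p-value by the number of comparisons, capped at 1; a minimal sketch with hypothetical p-values:

```python
def bonferroni(pvals):
    """Bonferroni-corrected p-values: each raw p is multiplied by the
    number of comparisons and capped at 1."""
    k = len(pvals)
    return [min(p * k, 1.0) for p in pvals]

# Three pairwise comparisons after ANOVA (hypothetical raw p-values);
# each p is tripled, and the last one is capped at 1.
corrected = bonferroni([0.01, 0.02, 0.5])
print(corrected)
```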
Sediment Assessment of the Pchelina Reservoir, Bulgaria The temporal dynamics of anthropogenic impacts on the Pchelina Reservoir are assessed based on chemical element analysis of three sediment cores at a depth of about 100–130 cm below the surface water. The 137Cs activity is measured to identify the layers corresponding to the 1986 Chernobyl accident. The obtained dating of the sediment cores gives an average sedimentation rate of 0.44 cm/year in the Pchelina Reservoir. The elements' depth profiles (Ti, Mn, Fe, Zn, Cr, Ni, Cu, Mo, Sn, Sb, Pb, Co, Cd, Ce, Tl, Bi, Gd, La, Th and Unat) outline the Struma River as the main anthropogenic source for Pchelina Reservoir sediments. The principal component analysis reveals two groups of chemical elements connected with the anthropogenic impacts. The first group of chemical elements (Mn, Fe, Cr, Ni, Cu, Mo, Sn, Sb and Co) has increasing time trends in the Struma sediment core and no trend or decreasing ones in the Pchelina sampling core. The behavior of these elements is determined by the change in the industrial profile of the town of Pernik during the 1990s. The second group of elements (Zn, Pb, Cd, Bi and Unat) has increasing time trends in both the Struma and Pchelina sediment cores. The increased concentrations of these elements over the whole investigated period have led to moderate enrichment for Pb and Unat, and significant enrichment for Zn and Cd at the Pchelina sampling site. The surface sediment samples of the Pchelina Reservoir, moderately contaminated according to the geoaccumulation indexes, have low ecotoxicity. Introduction Chemical elements are among the most widespread of the various pollutants originating from anthropogenic activities, particularly from mining, metallurgy and smelting waste sites. They are among the most persistent pollutants in the environment, since they neither decompose nor biodegrade into simpler and less harmful substances.
Their concentrations in water are quite variable, with the highest concentrations found in the suspended matter (insoluble substances) and the lowest in the liquid phase [1][2][3]. The distribution of the elements is primarily on the surfaces of sediments, suspended particles, and other solids, decreasing in the order: suspended matter > sediment > water [4,5]. Chemical processes occurring in natural waters, such as oxidation, can cause them to precipitate from the solution [6]. As an integral part of natural water sources, sediments play an important role in the biogeochemical cycle of the elements, as they are the site of deposition and chemical transformation of many compounds entering the water bodies [7]. The elements enter surface waters from many sources, in the form of atmospheric deposits, or are leached. Results To reveal the relationships between the analyzed chemical elements and/or layers in the sediment cores, a principal component analysis (PCA) was applied. The input data set used for PCA consists of 58 objects (layers in the sedimentary cores) and 20 variables (analyzed chemical elements). PCA of the data from the three sedimentary cores revealed that the first three main components describe almost 80% of the variation of the data. The number of latent variables was determined based on their eigenvalues and the internal model validation error. In the formation of the first principal component, explaining 41.53% of data variance, the following elements have a significant impact: Mn, Fe, Cr, Ni, Cu, Mo, Sn, Sb and Co (Figure 1).
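The per-component variance fractions reported above (41.53%, 22.16%, 15.44%) are the kind of output a PCA on the autoscaled 58 x 20 data matrix produces; a generic numpy sketch on synthetic data of the same shape (not the study's measurements):

```python
import numpy as np

def pca_variance_fractions(X):
    """Fraction of total variance explained by each principal component
    after autoscaling (mean 0, unit variance per variable)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    s = np.linalg.svd(Z, compute_uv=False)  # singular values, descending
    var = s ** 2
    return var / var.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(58, 20))                   # 58 layers x 20 elements, synthetic
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=58)   # two strongly correlated "elements"
frac = pca_variance_fractions(X)
print(frac[:3].round(3))
```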
This component separates the elements into two groups according to their time trends in the Struma sediment core: (i) Mn, Fe, Zn, Cr, Ni, Cu, Mo, Sn, Sb, Pb, Co, Cd and U have increased concentrations with time; and (ii) Ti, Ce, Tl, Bi, Gd, La and Th have decreasing or non-significant time trends. The second principal component (22.16%) is formed by Ce, Gd, La, Th, and Ti, which significantly decrease over time in the sedimentary cores at the Struma and Svetlia rivers (Figure 2). These decreasing trends do not lead to a significant change in the contents of the elements in the Pchelina Reservoir, which are comparable to the contents in the sedimentary cores near the Struma and Svetlia at the beginning of the period. The third principal component (15.44%) is formed by the elements Zn, Pb, Cd, Bi and Unat. The factor scores of the layers of the Pchelina core show a particularly pronounced positive trend, which leads to the formation of two groups of layers. The first group, covering the beginning of the studied period (layers 1-11), has contents comparable to the sedimentary core of the Svetlia (an anthropogenically undisturbed river), while the contents of the elements in the second group (layers 12-23) are the highest of all three studied sediment cores (Figure 3). The layers in the Struma sediment core have intermediate factor scores between the two groups of the Pchelina sediment core.
137Cs has been widely applied as an environmental tracer in the study of recent sediment deposition history (usually within the last 50 years) [32,33]. Each of the 2 cm sediment core fractions was analyzed for its 137Cs content. Only in one of these fractions for each sediment core was radioactivity (γ-activity) found, with measured 137Cs specific activities between 32.8 Bq/kg and 55.3 Bq/kg. Based on these findings, it was concluded that, on average, 15 cm of sediment was deposited in the 34 years since the Chernobyl incident (1986). This means that the average sedimentation rate at the sampling points in the Pchelina Reservoir is 0.44 cm/year. These results differ from the literature values for reservoirs (usually > 2 cm/year [27,34]) but are closer to the reported rates for river sediment (usually < 0.3 cm/year) [30]. The element depth profiles of sediments from the Pchelina Reservoir at the different sampling points (Pchelina, Struma and Svetlia) are shown in Figure 4. Red points indicate the sample in which the highest radioactivity (γ-activity) was registered, which corresponds to the Chernobyl pollution of 1986. A similar approach was used by Audry et al. [13] for the sediment core dating of the Lot River reservoirs.
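The dating arithmetic above reduces to dividing the depth of the 137Cs marker layer by the years elapsed since the Chernobyl accident:

```python
def sedimentation_rate(peak_depth_cm, event_year, sampling_year):
    """Mean sedimentation rate (cm/year) from a dated marker layer."""
    return peak_depth_cm / (sampling_year - event_year)

# 137Cs peak at ~15 cm depth, Chernobyl 1986, cores taken in 2020
rate = sedimentation_rate(15.0, 1986, 2020)
print(round(rate, 2))  # ~0.44 cm/year, matching the reported value
```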
To determine whether significant time trends of the elements were observed, the Mann-Kendall test was performed. The results are presented in Table 1, with significant trends (p < 0.05) marked with "+" for increasing and "-" for decreasing trends. The significant time tendencies in the sediment cores influenced by the inflowing rivers reveal that the Struma River (sampling point 1) is the main source of Pchelina Reservoir pollution. According to the sedimentation rate calculations, it is assumed that the concentration of each element in any layer more than 6 cm below the sample corresponding to the Chernobyl pollution (marked in red in Figure 4) is a background concentration. The average result for Fe (used as a conservative element) over all such samples was calculated for each sampling point and used to obtain the enrichment factor for the sample corresponding to 2020 (top layers, Table 2).
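The Mann-Kendall screen for monotonic trends can be sketched as follows (a minimal version without the tie correction that a production implementation would include):

```python
import math
from itertools import combinations

def mann_kendall(series):
    """Mann-Kendall trend test: S statistic and two-sided p-value from the
    normal approximation (no tie correction; a quick screen for monotonic trends)."""
    n = len(series)
    # S sums sign(x_j - x_i) over all pairs with j > i
    s = sum((b > a) - (b < a) for a, b in combinations(series, 2))
    var_s = n * (n - 1) * (2 * n + 5) / 18
    z = 0.0 if s == 0 else (s - math.copysign(1, s)) / math.sqrt(var_s)
    p = math.erfc(abs(z) / math.sqrt(2))
    return s, p

# A broadly increasing depth profile (e.g., concentrations rising toward the surface)
s, p = mann_kendall([1, 2, 3, 5, 4, 6, 7, 9, 8, 10])
print(s, p < 0.05)
```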
Significant enrichment is observed only at the reference point (Pchelina), in terms of Zn and Cd (7.6 and 7.5, respectively). For Cd, the enrichment in Svetlia is moderate (3.2), and the enrichment factor in the Struma is 1.8, which indicates pollution with this metal in the entire Reservoir. Besides these elements, moderate enrichment is observed for Cu, Pb, Ce and Th only at Pchelina, as well as for Tl and U nat in Svetlia and Pchelina. The same approach for the determination of the reference concentration (the average of all results for the layers more than 6 cm below the sample corresponding to the Chernobyl pollution, marked in red in Figure 4) was used for each of the studied elements to determine the geoaccumulation index. The results are presented in Table 3. Table 3. Geoaccumulation index (I geo ) in the three sampling points (yellow: uncontaminated to moderately contaminated sample; red: moderately contaminated sample). It is evident from Table 3 that for most elements, the geoaccumulation index corresponds to an uncontaminated sample. This result is especially important for the sediments at the inflow of the Svetlia River (sampling point 2), where the index corresponds to an unpolluted to moderately polluted sample only for Cd, Tl and U nat. The geoaccumulation index in the sediments at the inflow of the Struma River (sampling point 1) corresponds to an unpolluted to moderately contaminated sample for Fe, Cr, Mo and Sn and to a moderately polluted sample for Sb and Cd.
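The classification behind Table 3 can be reproduced with a short sketch. The Methods section gives the 1.5 coverage factor and the class boundaries; the logarithmic form I geo = log 2 [C n /(1.5 × C ref )] assumed here is the standard Müller definition, consistent with those boundaries:

```python
import math

def i_geo(c_n: float, c_ref: float) -> float:
    """Geoaccumulation index, I_geo = log2(C_n / (1.5 * C_ref))."""
    return math.log2(c_n / (1.5 * c_ref))

def i_geo_class(igeo: float) -> str:
    """Seven contamination classes (boundaries as listed in the Methods)."""
    if igeo <= 0:
        return "uncontaminated"
    if igeo < 1:
        return "uncontaminated to moderately contaminated"
    if igeo < 2:
        return "moderately contaminated"
    if igeo < 3:
        return "moderately to highly contaminated"
    if igeo < 4:
        return "heavily contaminated"
    if igeo < 5:
        return "heavily to extremely contaminated"
    return "extremely contaminated"

# Illustrative: a 4.5-fold excess over background -> I_geo = log2(3) ~ 1.58
print(i_geo_class(i_geo(4.5, 1.0)))  # -> moderately contaminated
```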
The geoaccumulation index for the sediments of Pchelina Reservoir at the village of Radibosh (sampling point 3) corresponds to an unpolluted to moderately polluted sample in terms of Pb, Tl and U nat (similar to the sediments in Svetlia) and to a moderately polluted sample in terms of Zn and Cd (analogous to the sediments in the Struma). The moderate contamination of the sediments of Pchelina Reservoir with Cd at all sampling points is notable. Similar results were obtained for sediments of Wadi Al-Arab Dam, Jordan [35], where sediments were found to be uncontaminated with Mn, Fe, and Cu, moderately contaminated with Zn, and strongly to extremely contaminated with Cd. Ye et al. [36] compared deposits of "heavy metals" accumulated in the water-level-fluctuation zone before and after the submergence period and found that Cd is the main pollutant of the sediment. The results of the conducted biotest Phytotoxkit F ™ are presented in Table 4 and reveal the low ecotoxicity of the surface sediments from Pchelina Reservoir. The indicator related to seed germination (SG) of Sinapis alba shows the lack of an ecotoxicological effect. This means that the number of germinated seeds in the analyzed sediments is equal to their number in the control sample; in this case, all 10 seeds germinated. The ecotoxicological effect reflecting the growth of the roots of Sinapis alba is relatively weak: it is highest in the sample from the Struma River and even shows a negative value in Pchelina (hormesis). This means that the roots of the test species are longer than those of the control sample. Discussion The background concentrations strongly depend on geological characteristics such as mineral composition, grain size distribution and organic matter content. Thus, establishing geochemical background concentrations of chemical elements is a very important step in environmental pollution assessment [37].
137 Cs is a consistent indicator of sedimentation processes, as it binds almost irreversibly to clay and silt particles and has a relatively long half-life (t 1/2 = 30.2 years). Moreover, the Chernobyl nuclear accident of 1986 is recorded in European sediments. Thus, 137 Cs activity depth profiles are often used for sediment core dating [38]. Based on the measurement of γ-activity and the calculated average sedimentation rate of 0.44 cm/year in the Pchelina Reservoir, it can be assumed that the three fractions below the 1986 sample (marked in red) are reservoir sediment (approximately 6 cm), while the deeper ones are from the natural soil cover, flooded during the construction of the Reservoir (1975). These deeper samples should contain element levels that correspond to the background concentrations of these elements before they were anthropogenically affected. This was the basis for the analysis of the elements' core depth profiles in the present study. It is noteworthy that the concentrations of most of the studied elements increase over time, except for Bi, La, Ti and Tl, whose concentrations decrease, while Ce, Gd and Th practically do not change. The PCA results divided the analyzed chemical elements into three groups. The elements with a significant impact on the formation of the first principal component (Mn, Fe, Cr, Ni, Cu, Mo, Sn, Sb and Co) have an increasing time trend in the Struma sediment core (sampling site 1) and no trend (Co, Cr, Mn, Mo) or a decreasing one (Cu, Fe, Ni, Sb, Sn) at the Pchelina sampling site. The factor scores of the layers in the Pchelina sediment core (sampling site 3) increase until the late 1990s and, at the end of the investigated period, decrease to the levels between 1988 and 1994.
The absence of an increasing trend in the concentrations of the abovementioned elements at the reference point for Pchelina Reservoir (between the inflows of the two rivers: the anthropogenically affected Struma River and the anthropogenically unaffected Svetlia River) could be explained by the changed profile of the industry in the town of Pernik during the 1990s, since when mining and metallurgy have no longer been dominant. These observations are supported by the calculated EFs and geoaccumulation indices for the top layer at the Pchelina sampling site, where, among these elements, moderate enrichment is observed only for Cu. The chemical elements associated with the second principal component (Ce, Gd, La, Th, and Ti) have decreasing time trends in both sediment cores at the two river inflows and, with the exception of Th, in the Pchelina sediment core as well. These elements have no anthropogenic origin, and the moderate enrichment of Ce and Th at the Pchelina sampling site could be attributed to their geochemical immobility [39]. The third group of chemical elements (Zn, Pb, Cd, Bi and U nat ), forming the third principal component, have significant increasing trends in the Struma and Pchelina sediment cores. This leads to the formation of two distinct groups of layers in the Pchelina sediment core before and after the 1990s (layers 11 and 12). The increased concentrations of these elements at the end of the investigated period have led to moderate enrichments for Pb and U nat , and significant enrichments for Zn and Cd, at the Pchelina sampling site. The respective geoaccumulation indices confirm these observations, with uncontaminated to moderately contaminated values for Pb and U nat and moderately contaminated ones for Zn and Cd. The results of the present study largely confirm the conclusions made by Meuser and co-authors in 2006 [40], who found that the industry located in the region of Pernik town results in increased concentrations of Pb, Cd, Cr, Cu and Zn.
The results of the conducted biotest Phytotoxkit F ™ reveal the low ecotoxicity of the surface sediments, which is an indication that the concentrations of the elements classifying the Pchelina Reservoir samples as moderately contaminated have no significant ecotoxicological effect. Sampling The sampling of the bottom sediment materials was carried out in the period July-September 2020. Three locations in the Pchelina Reservoir were selected (Figure 5). Sampling point 1 is located at the inflow of the Struma River into the Pchelina Reservoir. Sampling point 2 is located at the inflow of the Svetlia River into the Pchelina Reservoir. The sampling point in Pchelina Reservoir at the village of Radibosh (sampling point 3) was chosen as a reference point between the inflows of the two rivers: the Struma River (anthropogenically affected by the industry located in the region of Pernik town) and the Svetlia River (anthropogenically unaffected). Thin-walled tubes with small diameters, which ensure mechanical immersion in the sediment mass, were used for obtaining semi-intact specimens of fine-grained sediment samples. The basic components of tube-type samplers include a hardened cutting tip of steel grade S355, a body tube or barrel, and a threaded end. Pipes with dimensions of 48 × 1 mm (diameter × wall thickness) were used, coupled to a nozzle to reach a depth of approximately 1.20 m (4 ft) below the water surface. The entire Shelby tube system follows the design requirements of ASTM D1587/D1587M-15 [41]. Sampling was carried out by hand with a percussion-swirling motion. Separate sampling tubes were packed and marked on-site and transported to the laboratory at 4 °C. Sediment Digestion Sediment samples were first pre-ground, sieved through 2-mm sieves and homogenized. Sub-samples of 0.25 g were accurately weighed using an analytical balance, 10 mL of conc. hydrofluoric acid (HF, 47-51%, Fisher Chemicals, Waltham, MA, USA, Trace Metal Grade) was added, and the mixtures were left for 24 h.
Subsequent dissolution was performed using a sand bath after adding an additional 10 mL of conc. HF (47-51%, Fisher Chemicals, Waltham, MA, USA, Trace Metal Grade) and 5 mL of conc. perchloric acid (HClO4, 70%, Fisher Chemicals, Waltham, MA, USA, Trace Metal Grade). The samples were heated until the acid mixture was reduced to 1/3 of the initial volume. Then portions of 10 mL of conc. HF were added, and heating in the sand bath was continued until complete dissolution of the sediments. Then two portions of 10 mL of conc. nitric acid (HNO3, 67-69%, Fisher Chemicals, Waltham, MA, USA, Trace Metal Grade) were added and the samples were heated in the sand bath until the volume was reduced to 0.5-1 mL. After cooling, the samples were quantitatively transferred to 50 mL polyethylene tubes by repeated washing with double-deionized water. All samples were initially diluted to 50 mL, and immediately before instrumental measurement, an additional dilution of 1 mL to 14 mL was performed. Inductively Coupled Plasma Mass Spectrometry (ICP-MS) The sediment samples were analyzed using a Perkin-Elmer SCIEX Elan DRC-e ICP-MS (MDS Inc., Concord, ON, Canada) with a cross-flow nebulizer. The spectrometer was optimized (RF power, gas flow, lens voltage) to provide minimal values of the ratios CeO + /Ce + and Ba 2+ /Ba + as well as maximum intensity of the analytes. The concentrations of 20 elements (Bi, Cd, Ce, Co, Cr, Cu, Fe, Gd, La, Mn, Mo, Ni, Pb, Sb, Sn, Th, Ti, Tl, U nat and Zn) were determined. Coefficients for the reduction of the analytical signal (RPa, Dynamic Bandpass Tuning parameter), pre-optimized for the sediment matrix, were used for the determination of Fe, Mn, and Ti. They are presented in Table 5. The other chemical elements were determined under standard conditions.
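Back-calculating sediment contents from the measured solution concentrations follows from the dilution scheme above (0.25 g digested and diluted to 50 mL, then 1 mL further diluted to 14 mL, taken here as a 14-fold dilution); a sketch with illustrative values:

```python
SAMPLE_MASS_G = 0.25    # digested sub-sample
STOCK_VOLUME_L = 0.050  # first dilution, to 50 mL
DILUTION = 14.0         # 1 mL brought to 14 mL before measurement (assumed 14-fold)

def solution_to_sediment(c_measured_ug_per_l: float) -> float:
    """Convert a measured solution concentration (ug/L) to sediment content (mg/kg)."""
    total_ug = c_measured_ug_per_l * DILUTION * STOCK_VOLUME_L  # analyte mass in the 50 mL stock
    return total_ug / SAMPLE_MASS_G  # ug/g, numerically equal to mg/kg

print(solution_to_sediment(10.0))  # -> 28.0 (mg/kg for a 10 ug/L reading)
```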
In the course of the analysis, isotopes of the elements with suitable natural abundance and low spectral interference were selected. External calibration with matrix-matched standard solutions was performed. Single-element standard solutions of Bi, Cd, Ce, Co, Cr, Cu, Fe, Gd, La, Mn, Mo, Ni, Pb, Sb, Sn, Th, Ti, Tl, U nat and Zn (Fluka, Steinheim, Switzerland) with an initial concentration of 1000 mg/L were used, after appropriate dilution, to construct the calibration curves. The multielement calibration standard solutions were prepared in the concentration range from 0.2 to 20 mg/L for Fe, Mn and Ti, and in the range 0.01 to 100 µg/L for the other elements. All standard solutions were prepared with double-deionized water (Millipore purification system Synergy, Molsheim, France). The correlation coefficients for all calibration curves were at least 0.99. The linearity was proven over four orders of magnitude for Fe, Mn, and Ti, and over five orders of magnitude for the other elements. The operating conditions of the ICP-MS are listed in Table 6. To verify the accuracy of the analysis, the stream sediment certified reference materials STSD-1 and STSD-3 (Canada Centre for Mineral and Energy Technology, Geological Survey of Canada) were subjected to analysis using the same sample preparation. Tables 7 and 8 present the experimentally determined and the certified values of the analyzed elements in the certified reference materials STSD-1 and STSD-3, respectively. The analytical yields were in the range of 97-108%, so the method is considered fit-for-purpose. All measurements were performed in triplicate and the mean value was reported. Gamma-Spectrometry The individual sediment samples were air-dried and cleaned from plant impurities (roots, leaves, shells, etc.), followed by sieving through 2-mm sieves and homogenization.
Samples of about 10 g were packed in standard-geometry vessels and measured by gamma-spectrometry for the determination of the total activity of 137 Cs. The activity of radionuclides was measured using an HPGe detector Canberra 7229 (energy resolution 1.8 keV and efficiency 16% at 1332.5 keV) coupled to a 4196-channel analyzer Canberra 35Plus. The calibration was achieved using national standard radioactive solutions and standard samples, produced and standardized at the Czech Metrology Institute (Serial No.: 130520-1785043). The accuracy and precision of the analysis were verified by participation in International Atomic Energy Agency (IAEA) round-robin tests. All measurements were performed in triplicate and the mean value was reported. Ecotoxicological Studies The ecotoxicological test Phytotoxkit F ™ [42] measures the reduction of seed germination (SG) and of the growth of young roots (RG) of selected higher plants (Sorghum saccharatum, Lepidium sativum and Sinapis alba) seeded in contaminated samples for several days, compared to a control sample. Sample preparation included drying, grinding, and sieving through 2-mm sieves as a preliminary step. The Phytotoxkit F ™ tests were then performed on water-saturated samples. It was experimentally found that 35 mL of distilled water was required to achieve 100% saturation of the control soil with a volume of 90 cm 3 . The determination of the water retention capacity of the analyzed sediments was performed on a representative sample (a mean of the three analyzed sediments). To 60 mL of distilled water, 90 cm 3 of the sample was added; after equilibrium was reached, the volume of excess water (15 mL) was measured, and the volume of water required to achieve saturation of the sample was calculated (45 mL). The first step of the Phytotoxkit F ™ test was to place 90 cm 3 of each of the studied sediments, as well as of the control sample, in special plastic test plates.
This was followed by hydration with the necessary volume of distilled water to achieve saturation, placement of black filter paper, seeding of 10 seeds of the test plant (Sinapis alba, white mustard) and closing of the test plate. Two repetitions were made for each sediment sample, and three for the control sample (Reference OECD soil for the Phytotoxkit test; Microbiotests, Gent, Belgium). The test plates were placed vertically in an incubator for 72 h at a temperature of 25 ± 1 °C in the dark. As the last step, the number of germinated seeds was counted and the length of the roots was measured using the free software ImageJ [43]. The ecotoxicological effect (%) reflecting the germination of the seeds is calculated as 100 × (A − B)/A, where A is the average number of germinated seeds in the control sample and B is the average number of germinated seeds in the analyzed sample. The ecotoxicological effect (%) reflecting the growth of the roots is calculated as 100 × (A − B)/A, where A is the mean length (mm) of the plant roots in the control sample and B is the mean length (mm) of the plant roots in the analyzed sample. Principal Component Analysis (PCA) Principal Component Analysis (PCA) is a multivariate approach to data reduction. The aim is to find and interpret the latent interdependencies between the variables (chemical elements) in the data set. Such interdependent variables form new ones, called latent factors or principal components. In addition to revealing the data structure, PCA allows the data set to be modelled, compressed, classified and visualized on a plane. The main task in PCA is the decomposition of the data matrix into two parts: a matrix of factor scores and a matrix of factor weights. The factor weights show the participation of each of the original variables in the formation of the principal components, while the factor scores are the coordinates of the objects (layers in the sediment cores) in the newly formed variables.
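The score/loading decomposition described above can be sketched with a singular value decomposition of the autoscaled data matrix; the (layers × elements) matrix here is synthetic random data, purely for illustration:

```python
import numpy as np

# Decomposition of a (layers x elements) data matrix into factor scores and
# factor weights (loadings). Synthetic data: 23 layers, 5 elements (illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(23, 5))
Xc = (X - X.mean(axis=0)) / X.std(axis=0)  # autoscale each element

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s    # coordinates of the layers in the new variables
loadings = Vt.T   # participation of each element in each component

explained = s**2 / np.sum(s**2) * 100  # percent of explained variation per component
print(np.round(explained, 1))
```

The number of significant components is then chosen from `explained` (eigenvalues and percent of explained variation), exactly as the text describes next.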
The determination of the number of significant principal components is based on their eigenvalues and the percentage of explained variation in the data. Mann-Kendall Test The Mann-Kendall test is a non-parametric approach for estimating time trends. The assessment uses all possible differences between the value for a given layer of sediment and those of previous years. Positive differences (an increase in the concentration of the element) are marked with +1, negative ones (a decrease in the concentration of the element) with −1, and the lack of a difference with 0. The test takes into account only the sign of the differences. The null hypothesis is that there is no time trend; the alternative hypothesis is that there is a positive or negative time trend. The direction of the trend is determined by the sign of the S statistic, which is the difference between the number of positive and negative differences. At the assumed significance level (α = 0.05), the null hypothesis is rejected at values of p < 0.05. Enrichment Factor (EF) To distinguish anthropogenic pollution from the natural content of elements in the sediment, enrichment factors (EF) were calculated by comparing the measured concentrations of chemical elements with the geochemical background values of the study area. To avoid erroneous enrichment results, geochemical normalization based on the concentration of a conservative element is usually used. The purpose of the normalization is to correct for changes in the nature of the sediment that may affect the distribution of contaminants. Al, Fe, Th, Ti and Zr are usually used as conservative elements [15,44].
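The Mann-Kendall S statistic described above (the sum of the signs of all pairwise differences, with later values compared to earlier ones) can be sketched as follows; the significance test against the null distribution is omitted:

```python
def mann_kendall_s(series):
    """S = sum of sign(x_j - x_i) over all pairs i < j.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # +1, -1, or 0
    return s

print(mann_kendall_s([1, 2, 3, 4]))  # -> 6 (monotonic increase)
print(mann_kendall_s([4, 3, 2, 1]))  # -> -6
```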
The normalized enrichment factor (EF) is determined as the metal/X concentration ratio in the sample (X = Fe, Al, Th, Ti, Zr) divided by the corresponding metal/X ratio in the background: EF = (C metal /C X ) sample /(C metal /C X ) background . Five levels of pollution are often identified: EF < 2, low enrichment; EF 2-5, moderate enrichment; EF 5-20, significant enrichment; EF 20-40, strong enrichment; EF > 40, extremely strong enrichment. In addition, values of 0.5 ≤ EF ≤ 1.5 suggest that the concentration of the elements may come entirely from natural weathering processes, whereas EF > 1.5 shows that a significant part of the microelements did not come from the earth's crust, i.e., they originate from other sources, such as point and non-point pollution and biota [17]. Geoaccumulation Index (I geo ) A similar approach was used for the determination of the Geoaccumulation Index (I geo ): I geo = log 2 [C n /(1.5 × C ref )], where C n is the concentration of the element in the sample and C ref is the background concentration [45]. The coverage factor of 1.5 allows the normalization of possible variations in the data for background concentrations, which may also be due to anthropogenic pollution. Seven classes of contamination are distinguished depending on the Geoaccumulation Index (I geo ≤ 0: uncontaminated sample; 0 < I geo < 1: uncontaminated to moderately contaminated sample; 1 < I geo < 2: moderately contaminated sample; 2 < I geo < 3: moderately to highly contaminated sample; 3 < I geo < 4: heavily contaminated sample; 4 < I geo < 5: heavily to extremely contaminated sample; I geo > 5: extremely contaminated sample) [14,[46][47][48]. The highest class corresponds to at least a 100-fold difference from the background concentration. Conclusions Based on the measurement of the γ-activity of technogenic 137 Cs, an average accumulation of 15 cm of sediment over 34 years was established, corresponding to an average sedimentation rate of 0.44 cm/year.
The distribution of the concentrations of chemical elements in the sediment from the three sampling points (1, Pchelina Reservoir at the inflow of the Struma River; 2, Pchelina Reservoir at the inflow of the Svetlia River; and 3, Pchelina Reservoir near the village of Radibosh) is presented. PCA of the data shows three principal components, which describe nearly 80% of the variation in the data. The first principal component (41.53% of the data variation) contains Mn, Fe, Cr, Ni, Cu, Mo, Sn, Sb and Co. The factor scores show that the concentrations of these elements decrease in the order Pchelina > Struma > Svetlia. All these elements have a positive time trend in the Struma sediment core, which is an indication that most of the elements in the Reservoir come through the anthropogenically affected Struma River. The second principal component (22.16%) is formed by Ce, Gd, La, Th, and Ti, which decrease significantly with time in Svetlia and Struma. The third principal component (15.44%) is formed by the elements Zn, Pb, Cd, Bi and U nat . The factor scores of the layers in the sediment cores show the anthropogenic origin of most of these elements. There is an increase over time in the sediment cores in Struma and Pchelina, especially pronounced in the sediment layers of Pchelina. In the first group (the beginning of the studied period), the contents are comparable to those in the sediment core of Svetlia, while the contents of the elements in the second group are the highest of the three studied sediment cores. To distinguish anthropogenic pollution from the natural content of elements in the sediment, enrichment factors (EF) were calculated, for which Fe concentrations were used as a conservative element. Significant enrichment was observed only at the reference point (Pchelina), for Zn and Cd. In terms of Cd, the enrichment in Svetlia is moderate, and the enrichment factor in the Struma is 1.8, which indicates contamination with this metal in the entire Reservoir.
Besides these elements, moderate enrichment is observed for Cu, Pb, Ce and Th only in Pchelina, as well as for Tl and U nat in Svetlia and Pchelina. For most elements, the geoaccumulation index corresponds to an uncontaminated sample. In Svetlia, only for Cd, Tl and U nat does the index correspond to an uncontaminated to moderately contaminated sample. The geoaccumulation index in the sediments of Struma corresponds to an uncontaminated to moderately contaminated sample for Fe, Cr, Mo and Sn and to a moderately contaminated sample for Sb and Cd. The geoaccumulation index for the sediments of Pchelina corresponds to an unpolluted to moderately contaminated sample for Pb, Tl and U nat (similar to the Svetlia sediments) and to a moderately polluted sample for Zn and Cd (similar to the Struma sediments). Moderate contamination of the sediments of Pchelina Reservoir with Cd is observed at all sampling points. The results of the Phytotoxkit F ™ bioassay revealed the low ecotoxicity of the Pchelina Reservoir surface sediments in terms of both seed germination (SG) and root growth (RG) of the plant species Sinapis alba. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Suppressing spin relaxation in silicon Uniaxial compressive strain along the [001] direction strongly suppresses the spin relaxation in silicon. When the strain level is large enough so that electrons are redistributed only in the two valleys along the strain axis, the dominant scattering mechanisms are quenched and electrons mainly experience intra-axis scattering processes (intravalley or intervalley scattering within valleys on the same crystal axis). We first derive the spin-flip matrix elements due to intra-axis electron scattering off impurities, and then provide a comprehensive model of the spin relaxation time due to all possible interactions of conduction-band electrons with impurities and phonons. We predict nearly three orders of magnitude improvement in the spin relaxation time of $\sim10^{19}\text{cm}^{-3}$ antimony-doped silicon (Si:Sb) at low temperatures. I. INTRODUCTION Silicon is a promising material choice for spintronic devices that require long spin lifetimes. [1][2][3][4][5][6][7][8][9][10] When electrons reach their saturation drift velocity, spin information in silicon can be transported over hundreds of microns, [11][12][13][14] being compatible with on-chip interconnect length scales. 15,16 Transport over such distances is possible not only due to the relatively weak spin-orbit coupling of silicon atoms, but also owing to two major manifestations of the crystal space inversion symmetry. The first one is the spin degeneracy of the energy bands in centrosymmetric materials resulting in cancelation of the Dyakonov-Perel spin relaxation mechanism. 17 The second manifestation, first studied by Yafet, is that space inversion and time reversal symmetries weaken the electron's spin-flip scattering amplitude due to interaction with phonons when |k i ± k f |a ≪ 1. [18][19][20][21] Here, k i(f ) is the initial (final) electron's wavevector and a is the lattice constant.
In one case the electron remains in the same valley, |k i − k f |a ≪ 1, and in the other it is scattered to the opposite valley, |k i + k f |a ≪ 1. The former denotes intravalley scattering and the latter intervalley scattering between opposite valleys, termed g-process in silicon. [22][23][24][25] These weak spin-flip scattering processes can be generally classified as 'intra-axis' scattering if the conduction-band edge is degenerate having more than one lowest-energy valley in the Brillouin zone. Contrary to the weak intra-axis spin flip process in centrosymmetric materials, 'inter-axis' valley scattering can have a much larger spin-flip amplitude. 18,26 In this type of intervalley scattering, termed f -process in silicon, 22,24,25 the initial and final valleys are not connected by time reversal or space inversion symmetries. The electron transition between the valleys is mediated by interaction with shortwave phonons or short-range scattering off impurities. 18,26 While the contribution from shortwave phonons becomes negligible at low temperatures due to their large energy compared with the thermal energy (k B T ), spin relaxation due to impurities is unavoidable at all temperatures. In either case, inter-axis valley scattering is the dominant means to relax the spins of itinerant electrons in unstrained silicon or germanium. 13,18,19,[26][27][28][29][30][31][32][33][34] Suppression of the detrimental inter-axis valley scattering can be achieved by applying uniaxial compressive strain along the [001]-crystallographic direction in silicon. 15,27,[35][36][37] This strain configuration lowers the energy edge for the pair of valleys along the strain axis, while the energy edge of all other valleys is raised. For the valley splitting energy to be sufficiently large compared with k B T , the strain levels should be of the order of 0.1% at 30 K and 1% at room temperature.
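The requirement that the valley splitting exceed k B T can be illustrated with a simple Boltzmann estimate of the fraction of electrons remaining in the four raised valleys. This is a nondegenerate-statistics sketch (not the degenerate regime of the heavily doped samples discussed later), and the splitting ΔE is left as a free parameter rather than computed from deformation potentials:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def upper_valley_fraction(delta_e_ev: float, temperature_k: float) -> float:
    """Fraction of conduction electrons in the four raised valleys,
    assuming Boltzmann statistics: 4*exp(-dE/kT) / (2 + 4*exp(-dE/kT))."""
    w = math.exp(-delta_e_ev / (K_B * temperature_k))
    return 4 * w / (2 + 4 * w)

# No strain: all six valleys equally populated -> 4/6
print(round(upper_valley_fraction(0.0, 300.0), 3))  # -> 0.667
# A splitting of many k_B T freezes out the raised valleys
print(upper_valley_fraction(0.1, 30.0))  # vanishingly small
```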
Under these conditions, electrons populate the low-energy valley pair, and therefore can only experience intra-axis scattering processes (intravalley and g-process). To date, there are no theoretical models that quantify the spin relaxation due to intra-axis electron scattering off impurities. The aim of this paper is to fill this missing component, and to compare the relative contributions of electron-phonon and electron-impurity interactions to spin relaxation. Additional motivation to our work stems from the need to improve the electrical spin injection from ferromagnetic metals to heavily doped n-type silicon. Because of the so-called conductivity and spin lifetime mismatch problem, [69][70][71] electrical spin injection to semiconductors is largely limited to tunneling or ballistic injection of hot electrons. [72][73][74][75][76][77][78][79][80][81][82][83][84][85] In the former, electrons tunnel across the built-in potential barrier in the semiconductor side of the junction. The barrier is formed by the depletion region, existing both in direct metal-semiconductor Schottky contacts and metal-oxide-semiconductor structures with ultrathin oxide layers. Effective tunneling requires interface doping with donor concentrations of 10 19 cm −3 or higher so that the thickness of the depletion region is only a few nanometers. [72][73][74][75] Such degenerate doping levels come with a penalty of enhanced spin relaxation due to electron-impurity scattering. [86][87][88][89][90][91][92][93][94][95][96][97] As we will show, application of strain to suppress the dominant inter-axis valley scattering greatly increases the spin lifetime due to a much weaker effect of intra-axis scattering off impurities. This paper is organized as follows. Section II provides a theoretical background starting with a summary of our previous findings on inter-axis valley scattering in multivalley semiconductors. The second part of Sec.
II contains a discussion of a compact k·p Hamiltonian whose eigenstates are used in Sec. III to derive the intra-axis spin-flip matrix elements due to scattering off impurities. To keep the discussion succinct, many technical details on the derivation of the Hamiltonian and its utilization in the context of intra-axis spin flips are provided in Appendix A. Section IV includes integration of all spin-flip processes due to electron-phonon and electron-impurity interactions, from which one can quantify the dependence of the spin relaxation on strain, temperature, donor impurity concentration, and donor identity. Section V includes results and discussion of these dependencies, and conclusions are provided in Sec. VI. Finally, Appendix B includes technical computational details. II. THEORETICAL BACKGROUND Contrary to single-valley semiconductors, such as GaAs, a zero-velocity wave packet can scatter off an impurity in multivalley semiconductors. The reason is that in addition to velocity, the wave packet has an extra "quantum number": the valley index. We have recently shown that the spin relaxation due to scattering off impurities is governed by the inter-axis valley change, while the velocity of the wave packet plays a minor role. 26 This spin-flip process is enabled by short-range interaction of the scattered electron with the spin-orbit coupling potential of the impurity. To quantify the scattering amplitude, one can make use of the analytical relation between the phase shift of scattered states and the binding energies of the impurity bound states. [98][99][100] This relation bypasses the need to rely on ab initio calculations in order to quantify the spin relaxation. Consequently, we were able to quantify the spin-flip scattering amplitude by using empirical fine-structure parameters of the impurity bound states.
The spin-flip matrix element due to inter-axis valley scattering off an impurity can then be written as in (1), 26 where C is a constant of order unity that depends on the spin orientation. V and a B are the crystal volume and the electron Bohr radius, respectively. ∆ so is the spin splitting of the ground-state impurity level, whose amplitude is commensurate with the central-cell correction coming from the difference between the spin-orbit interactions of the impurity and silicon atoms, δV so . Because the Bohr radius is large compared with the impurity's core-region radius, wherein δV so is relatively strong (> 1 nm vs ∼ 0.1 nm), the central-cell correction has to be strong enough in order to produce measurable spin splitting. In other words, the strength of δV so has to compensate for its short range. The dependence of ∆ so on the difference between the spin-orbit interactions of the impurity and silicon atoms is indeed corroborated in experiments, showing that ∆ so is of the order of 0.3, 0.1, and 0.03 meV for Sb, As, and P impurities, respectively. 101,102 To better understand the details of the impurity's spin splitting, we consider a substitutional impurity atom surrounded by four silicon atoms in a tetrahedral molecular geometry. The vast majority of shallow donors in Si are represented by such substitutional impurities whose potential has T d point-group symmetry. Due to the valley-orbit coupling within the T d impurity potential, the ground-state level (1s) is split into spin-independent nondegenerate (A 1 ), doubly degenerate (E) and triply degenerate (T 2 ) states, where the overall sixfold multiplicity comes from the number of conduction edge states (valley centers). 103,104 A 1 , E and T 2 denote the symmetrized linear combinations of these valley edge states under T d group operations. When adding the spin degree of freedom, the notable measured effect from the spin-orbit coupling is attributed to the spin splitting of the triply degenerate state (T 2 ).
26,102 This splitting is denoted by ∆ so in (1). We note that ∆ so = 0 for direct band-gap semiconductors, in which substitutional donors do not change the point-group symmetry and their ground state is nondegenerate (the conduction band has one low-energy valley in the zone center). A. X-point k·p Hamiltonian When the inter-axis scattering is quenched by strain, various types of weak intra-axis mechanisms become relevant. Contrary to the inter-axis mechanisms, the intra-axis ones depend on the velocity of the wave packet. The spin-flip dependence on the momentum of the incoming and scattered wave packets can be captured by employing a spin-dependent k·p Hamiltonian to describe the low-energy conduction-band states. We construct the Hamiltonian by using its invariance to the symmetry operations of the space group G 2 32 , which describes the symmetry of the X point at the edge of the Brillouin zone in diamond crystal structures. 25,[105][106][107][108][109] In silicon, the X point is closer to the absolute conduction-band minimum than all other high-symmetry points, thereby allowing us to reliably express the low-energy conduction states using a minimal set of basis functions. 18,105 Due to the symmetry of the crystal, only one of the six conduction-band valleys in silicon is studied, and we arbitrarily identify it as the valley along the +z crystallographic axis, for which the X point corresponds to k = (0, 0, 2π/a). The results presented below can be readily extended to other valleys by cyclic coordinate permutations. The k·p state expansion in the vicinity of the X point is carried out by employing basis functions for the lowest pair of conduction bands and the upper pair of valence bands. Inclusion of the valence states is imperative since they bring in the mass anisotropy and spin mixing of the states. 18,19,105 The nomenclature for the irreducible representations (IRs) of the conduction and valence pairs is X 1 and X 4 , respectively.
Each of these IRs is two-dimensional due to the twofold band degeneracy at the X point of diamond crystal structures, originating from time-reversal and glide-reflection symmetries. 25 We denote the corresponding spinless basis states as X 1 = (X 2 1 , X 1 1 ) and X 4 = (X x 4 , X y 4 ). The superscript indexing of the basis states is reminiscent of their compatibility relations with IRs of the ∆-axis connecting the Γ and X points. Namely, ∆ 2(1) denotes the top (bottom) branch of the conduction band, while ∆ x,y denote the degenerate valence band along the ∆ axis (heavy and light holes). The compatibility relation also allows us to relate the basis functions along the ∆-axis. For example, the X-point basis functions are obtained from the ∆-axis ones by attaching the plane-wave factor e ±ik X z , where k X = 2π/a. Following the notation of Ref. [105], the basis components are chosen to be complex conjugates of each other. While our expansion is carried out around one of the equivalent X points, the eigenstates involved in an intervalley g-process can be readily connected by time-reversal and space-inversion symmetries, where I and T are the operators of space inversion and time reversal. These relations are derived by choosing a gauge according to which spatial inversion negates spinors, and by letting complex conjugation and spatial inversion act on the basis functions (2) equivalently. Adding the spin-orbit coupling at the X point, we note that the IRs of the corresponding double group cannot be factorized into a product of {X 2 1 , X 1 1 , X x 4 , X y 4 } and {↑, ↓}. 18 As a consequence, even in the absence of impurities and at k = (0, 0, k X ), the k·p Hamiltonian contains non-zero interband spin-mixing terms (see Appendix A or Ref. [19]), where ρ x , ρ y , ρ z are pseudospin Pauli matrices due to the twofold band degeneracy at the X point, and σ x , σ y , σ z are the spin Pauli matrices; ρ 0 is a 2 × 2 orbital unity matrix. ∆ X ≈ 4 meV is the finite spin-orbit coupling parameter between X 1 and X 4 .
18,19 The presence of the spin-mixing term (4) in the Hamiltonian means that fully spin-polarized waves are not eigenstates of the impurity-free Hamiltonian. Its eigenstates are slightly spin-mixed, where (Y 1 , Y 2 , Y 3 , Y 4 ) = (X 2 1 , X 1 1 , X x 4 , X y 4 ), and k̃ z ≡ k z − k X and ∆ c are negative for the +z valley. The notations of the constants in (6) and their values are the same as in Ref. [19]: E g ≈ 4.3 eV is the X-point band gap, P ≈ 10 eV·a/2π is the interband momentum matrix element where a = 5.43 Å in Si, and |∆ c | ≈ 0.5 eV is the energy splitting between the top and bottom conduction bands at the valley-edge position (15% away from the X point toward the Γ point; k 0 = 0.15k X ). Finally, α ≈ −3.1 meV·a/2π is a correction to the X-point spin-orbit coupling parameter (∆ X ) due to the finite distance of the valley bottom from the X point. Below, we make use of the state expansion in (6) to derive intra-axis spin-flip matrix elements. III. SPIN-FLIP PROCESSES IN STRAINED SILICON DUE TO SCATTERING OFF IMPURITIES A central goal of this paper is to derive the intra-axis matrix elements due to scattering off impurities. These matrix elements, M sf (k i , k f ) = ⟨⇓ k f |V | ⇑ k i ⟩ where V is the impurity potential, govern the spin relaxation when the strain-induced valley splitting is large enough to quench inter-axis scattering. For elastic scattering off impurities, k i = k f , the resulting spin relaxation rate is given by (7), 19 where N D is the donor impurity concentration. The average over k i represents weighted integration over ∂F/∂E ki , where F denotes the electron distribution function. This weighted integration is exact for any distribution in the limit of infinitesimal net-spin polarization. The prefactor of 4π/ℏ instead of 2π/ℏ reflects the fact that the net number of spin-polarized electrons changes by two with each spin flip.
It is noted that only first-order processes are relevant for the calculation of M sf (k i , k f ); we have found zero net contribution from second-order processes in which an electron undergoes intra-axis scattering via two virtual elementary inter-axis scattering events. We consider three types of spin-flip processes due to intra-axis scattering. Two of the three are Elliott processes, in which the spin flip is governed by the spin-orbit coupling of the host crystal whereas the scattering potential is spin independent. 110,111 One Elliott process involves long-range interaction with the ionized-impurity potential, and the second one is a central-cell correction coming from short-range interaction with the spin-independent part of the impurity potential. Using the k·p expansion in (6), an Elliott spin-flip matrix element has the form of (8a) and (8b) for the intravalley and intervalley g-process, respectively. H E is a 4 × 4 interaction matrix whose form will be deduced from the symmetry of the spin-independent potential. Following (2) and (3), Ã and B̃ in the g-process matrix element are found by exchanging the coefficients of X 2 1 and X 1 1 as well as of X x 4 and X y 4 in (6). The last intra-axis scattering process that we consider in this work is a Yafet process, in which the spin flip is governed by the spin-orbit coupling of the scattering potential. 20,111 Although much weaker than the dominant inter-axis Yafet mechanism, both originate from short-range interaction with the spin-orbit coupling of the impurity. The spin-flip matrix elements have the form of (9), where we have neglected the contribution from the cross products of B vectors due to the smallness of the spin-orbit coupling in silicon. It is noted that the Elliott and Yafet matrix elements can become comparable if the smallness of the nonzero elements in B (compared with those in A) is compensated by the smallness of the elements in H Y (compared with those in H E ).
In fact, this scenario applies in the case of the electron-phonon interaction, for which the spin-orbit coupling of the host atoms drives both the terms in B and H Y . 19 A. Long-range Coulomb potential (Elliott) We first consider electron scattering off the long-range Coulomb potential of ionized donor impurities, where κ is the Thomas-Fermi screening wavenumber, e is the electron charge, and ε is the dielectric constant. The long-range nature of this scattering stems from the fact that the screened Coulomb potential decays on a much longer length scale compared with the lattice constant, κa ≪ 1. We note that while scattering off this potential dominates momentum relaxation in highly doped silicon, 24 its role in the context of spin relaxation is marginal compared with the inter-axis short-range scattering in unstrained silicon. 26 The long-range and radial symmetry of the screened potential allows us to consider H E as a product of a unity matrix and the Fourier transform of the screened Coulomb potential, where q = k i − k f . Substituting (11) and (6) in (8a), the long-range intravalley spin-flip matrix element reads as in (12). Similarly, the spin-flip matrix element for the intervalley g-process is found by substituting (11) and (6) in (8b). The long-range nature of the Coulomb potential renders the intravalley process much stronger. B. Spin-orbit coupling of impurities (Yafet) The second spin-flip process we consider is due to the spin-orbit coupling of the donor impurities. Their presence lowers the diamond point-group symmetry from I ⊗ T d to T d . Inspecting the symmetry operations of the space group G 2 32 , we identify M 2 as the IR that can represent the lowered symmetry of the impurity potential. Compared with the identity IR (M 1 ), whose characters are all '1', the characters of M 2 are negated for all symmetry operations that involve exchanging the two atoms in the unit cell.
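The claim that the long-range potential strongly favors intravalley scattering over the g-process can be made concrete from the Fourier transform of the screened Coulomb potential, which scales as 1/(q² + κ²). The numerical values of κ and of the thermal wavevector transfer below are rough, assumed magnitudes for degenerately doped silicon, used only for illustration:

```python
import math

a = 5.43e-10              # Si lattice constant, m
kX = 2 * math.pi / a      # X-point wavenumber
kappa = 1e9               # assumed Thomas-Fermi screening wavenumber, 1/m
q_intra = 5e8             # assumed thermal wavevector transfer, 1/m
q_g = 0.3 * kX            # g-process transfer: valleys at +-0.85 kX, folded back

def V2(q):
    """|V(q)|^2 of the screened Coulomb potential, up to constant prefactors."""
    return 1.0 / (q**2 + kappa**2)**2

ratio = V2(q_intra) / V2(q_g)   # ~1e2: the intravalley term dominates
```

Even with these crude inputs, the small-q intravalley channel wins by roughly two orders of magnitude, in line with the statement in the text.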
As elaborated on in Appendix A 4, the form of H Y in (9) is extracted from the following considerations. First, we identify the selection rules of M 2 with IRs whose transformation properties match those of transverse vector components (such as k x and k y ) and transverse pseudovector components (such as σ x and σ y ). These IRs are M 5 and M 5 ′ , respectively, where the selection rules follow M 2 ⊗ M 5 = M 5 ′ and M 2 ⊗ M 5 ′ = M 5 . That is, coupling to the impurities transforms a vector-type interaction into a pseudovector one and vice versa. Next, we use these selection rules to construct H Y due to the presence of impurities. Specifically, we look for terms that stem from the coupling between valence and conduction states, since this coupling corresponds to transverse vector and pseudovector terms in the Hamiltonian (X 1 ⊗ X 4 = M 5 ⊕ M 5 ′ ). For the Yafet process, H Y is constructed by replacing the k x,y terms with σ x,y , alongside replacement of crystal parameters with impurity ones (e.g., P k i → δ B ∆ so ). Using this procedure, the leading Yafet term has the form given in (14) (Appendix A 4). Substituting (14) and (6) in (9), the intravalley and g-process spin-flip matrix elements follow. The final spin-flip process we consider is governed by the spin-orbit coupling of the host crystal (silicon atoms), and it takes place when electrons are scattered off the short-range and spin-independent part of the impurity potential. To derive the form of the resulting Elliott matrix, H E , we inspect the coupling within the conduction-band basis states (the conduction-valence coupling gives rise to the Yafet process, as discussed in the previous section). The selection rule for coupling between conduction states follows from the product X 1 ⊗ X 1 . Relevant to our study are M 2 and the identity IR M 1 . The identity IR represents the radial part of the central-cell correction, and as such it gives rise to diagonal coupling between X 1 1 states or between X 2 1 states.
On the other hand, M 2 represents the lowered-symmetry part of the impurity potential. Its transformation properties give rise to off-diagonal coupling between X 1 1 and X 2 1 states. 19 We therefore have two terms in the short-range Elliott matrix, where σ 0 is a 2 × 2 unity matrix acting in spinor space, and ∆ 0 and ∆ 1 are the diagonal and off-diagonal scattering constants. To estimate their values, we make use of the fact that the short-range and spin-independent part of the impurity potential splits the sixfold degenerate ground-state energy due to valley-orbit coupling. 103 Similar to the Yafet process, for which the scattering amplitude was estimated from ∆ so (the spin splitting of T 2 due to the spin-orbit coupling of the impurity), the values of ∆ 0 and ∆ 1 can be uniquely determined via the empirically known spin-independent binding energies of the ground state (A 1 , E and T 2 ). Below we use ∆ 0 ≈ 4 meV and ∆ 1 ≈ 1.5 meV, following the work of Friesen, who studied the Stark effect for donors in silicon. 112 Contrary to ∆ so , the values of ∆ 0 and ∆ 1 are largely insensitive to the identity of the substitutional donor (Sb, As, and P). 104,113 Substituting (17) and (6) in (8), the short-range intravalley and g-process spin-flip matrix elements follow. Total intra-axis spin-flip matrix elements The total matrix element is given by the sum of (12), (15), and (18). For spin flips due to the intravalley and g-process, one gets the expressions in (19). The C and D terms represent the long- and short-range Elliott processes, respectively. For Si:Sb, in which the spin-orbit coupling of the impurity is relatively strong (∆ so ≈ 0.3 meV), 101,102 both Elliott processes can be neglected and the spin relaxation is governed by the Yafet process (C, D ≪ 1). Only in the case of Si:P (∆ so ≈ 0.03 meV) 102 do the Elliott processes become comparable to the Yafet one.
This result is not surprising, given that the spin-orbit coupling of silicon is smaller than that of antimony and arsenic while being comparable to that of phosphorus. In addition, we can quantify the ratio between intra-axis and inter-axis spin-flip matrix elements in unstrained silicon by comparing (19) and (1). Using the facts that in semiconductors P/E g ∼ a/2π and k 2 ∼ 2mk B T /ℏ 2 , the ratio is of the order of 2mk B T a 2 /h 2 , revealing that the intra-axis spin-flip matrix element is about three orders of magnitude weaker than the inter-axis one at room temperature (and even weaker at lower temperatures). Finally, we comment on the qualitative difference between spin flips caused by electron-phonon and electron-impurity interactions. In the case of phonons, the spin-orbit coupling of the host crystal drives both the Elliott and Yafet processes, giving rise to cancellation of the leading Elliott and Yafet intravalley processes when space-inversion symmetry is respected. 19,20 Yafet found that in silicon this cancellation gives rise to a quadratic rather than linear dependence of the intravalley spin-flip matrix element on the transverse components of the acoustic-phonon wavevector (∝ q 2 ± ). 20 On the other hand, the intra-axis spin-flip matrix elements due to scattering off impurities have a linear dependence on the transverse crystal momentum of the initial and final states, as shown in (19). The reason for the linear dependence is that there is no Elliott-Yafet cancellation when dealing with impurities, whose presence breaks the space-inversion symmetry and whose spin-orbit coupling is not related to that of the host-crystal atoms. Quantitatively, this difference can be seen from the lack of interference terms between the short-range Yafet, short-range Elliott, and long-range Elliott processes after averaging the value of |M i/g (k i , k f )| 2 over solid angle, where θ is measured from the valley axis and φ is the azimuthal angle.
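The order-of-magnitude estimate 2mk B T a²/h² is easy to evaluate; the short script below uses the free-electron mass (using an effective mass instead would only reduce the ratio further):

```python
m0 = 9.109e-31   # free-electron mass, kg
kB = 1.381e-23   # Boltzmann constant, J/K
h = 6.626e-34    # Planck constant, J*s
a = 5.43e-10     # Si lattice constant, m

def intra_over_inter(T):
    """Estimated ratio of intra- to inter-axis spin-flip matrix elements."""
    return 2 * m0 * kB * T * a**2 / h**2

print(intra_over_inter(300))   # ~5e-3 at room temperature
print(intra_over_inter(30))    # ten times smaller at 30 K
```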
We have omitted interference terms, which vanish identically for the linear-in-k terms in (19). Similar to the case of phonon-induced spin relaxation, 19 summing the contributions from all six valleys in unstrained silicon yields isotropic spin relaxation. The anisotropy emerges when applying strain, yielding intra-axis spin relaxation that is twice as strong when the low-energy valley pair and the spin orientation are collinear compared with the perpendicular case. The total spin relaxation rate is the sum of the intervalley f-process (inter-axis scattering) and the intravalley and intervalley g-process (intra-axis scattering) contributions, as written in (21). The effect of compressive strain on the intra-axis and inter-axis processes is different, since only the latter can be completely quenched when the valley splitting energy is large: 1/τ inter s → 0 when e −∆v/k B T ≪ 1, where ∆ v is the strain-induced valley splitting energy. On the other hand, the intra-axis relaxation rate, 1/τ intra s , is only mildly affected via the emergent anisotropy. Below we focus on the two extreme cases of parallel and perpendicular orientations of the strain and spin axes, and express the total spin relaxation in silicon due to electron-impurity and electron-phonon interactions as functions of temperature, donor concentration, and valley splitting energy (for uniaxial compressive strain along one of the equivalent [001]-crystallographic directions). A. Total inter-axis spin relaxation (f-process) The f-process spin relaxation is expressed by (22), where the first term denotes the contribution due to scattering off impurities, 26 and the sum denotes the contribution from the three Σ-axis phonon modes that govern the spin-flip f-process intervalley scattering. 19 The τ terms on the right-hand side denote the corresponding strain-free spin relaxation rates when assuming an electron Boltzmann distribution in (7).
C I,f and C Σj denote correction factors caused by the strain suppression of the f-process (both approach zero when e −∆v/k B T ≪ 1), as well as the correction to the spin relaxation when deviating from the Boltzmann limit (high density and low temperature). The electron-impurity spin relaxation rate constant in (22) is given in (23), where T R is the effective Rydberg energy in silicon, m d = 0.32m 0 is the effective density-of-states mass, and a B ≈ 1.85 nm. The value of τ D is about 30 ps for Sb, 240 ps for As, and 3 ns for P. The electron-impurity correction factor in (22) involves δ s,v = 1 (0) if the spin orientation is collinear (perpendicular) to the low-energy valley axis. The other terms are defined by the two integrals I 1 and I f , where ε m = max{ε 1 , ε 2 }. In the following, we will show results when using the Fermi-Dirac distribution to represent F(ε). In the Boltzmann limit, where max{F(ε)} ≪ 1, both integrations can be performed analytically, yielding I 1 (ε 1 ) → e −ε 1 /k B T and I f (ε 1 , ε 2 ) → r d e −r a K 1 (r d ), where r d = |ε 2 − ε 1 |/2k B T , r a = (ε 2 + ε 1 )/2k B T , and K 1 is the first-order modified Bessel function of the second kind. Figure 1(a) shows the value of C I,f at 77 K (dashed lines) and 300 K (solid lines) as a function of valley splitting for three donor densities. The value of C I,f at zero strain approaches unity at low densities. The f-process spin relaxation in (22) due to the electron-phonon interaction is decomposed in a similar way. The rate constants denote the corresponding strain-free spin relaxation in the Boltzmann limit. 19 The temperature dependence is carried by the parameter r j = T j /2T , where T 1 = 540 K, T 2 = 660 K, and T 3 = 270 K are the energies of the three types of symmetry-allowed shortwave phonon modes, Σ 1−3 . The time constants τ j are governed by the corresponding electron-phonon spin-flip matrix elements, 19 yielding τ 1 ∼ 20 ns, τ 2 ∼ 70 ns, and τ 3 ∼ 200 ns.
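The Boltzmann-limit factor r d e −r a K 1 (r d ) is straightforward to evaluate without special-function libraries; the sketch below computes K 1 from its standard integral representation (a generic numerical recipe, not taken from this paper) and uses the ~47 meV Σ 1 phonon energy mentioned later in the text as an illustrative energy difference:

```python
import math

def K1(x, n=4000, tmax=25.0):
    """Modified Bessel K_1 via K1(x) = int_0^inf exp(-x*cosh(t))*cosh(t) dt."""
    dt = tmax / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * dt          # midpoint rule
        total += math.exp(-x * math.cosh(t)) * math.cosh(t) * dt
    return total

def If_boltzmann(e1, e2, kBT):
    """Boltzmann limit of I_f: r_d * exp(-r_a) * K1(r_d); energies in eV."""
    rd = abs(e2 - e1) / (2 * kBT)
    ra = (e2 + e1) / (2 * kBT)
    return rd * math.exp(-ra) * K1(rd)

# The inelastic factor drops steeply on cooling from 300 K to 77 K
# for an energy difference of ~47 meV (the Sigma_1 phonon energy).
print(If_boltzmann(0.010, 0.057, 8.617e-5 * 300))
print(If_boltzmann(0.010, 0.057, 8.617e-5 * 77))
```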
The expressions for the phonon-related strain suppression factors (C Σj in (22)) are cumbersome, and we present them in Appendix B. Their dependence on the strain-induced valley splitting energy is shown in Figs. 1(b)-(d). B. Total intra-axis spin relaxation The spin relaxation rate due to the intravalley and intervalley g-processes in (21) has four contributions, 1/τ intra s = C I,sr /τ I,sr + C I,lr /τ I,lr + C ac /τ ac + C ∆ /τ ∆ (28). The τ terms on the right-hand side are spin relaxation times when assuming an electron Boltzmann distribution in unstrained silicon. The C terms are corrections due to the strain and/or deviations from Boltzmann statistics. The first term on the right-hand side denotes the contribution from electron scattering off the short-range impurity potential. Qualitatively, this mechanism influences the intravalley and intervalley g-process similarly. The second term denotes the contribution from electron scattering off the ionized-impurity potential. As mentioned at the end of Sec. III A, the effect of this long-range scattering potential on the intervalley g-process is negligible compared with the intravalley one. The third and fourth terms denote contributions from electron interaction with long-wavelength acoustic phonons (intravalley) and ∆-axis shortwave phonons (intervalley g-process). We recall that a major difference from the inter-axis case is that the intra-axis relaxation does not change appreciably when the valley-splitting energy is large. Here, the intra-axis C terms only bring out the anisotropy in spin relaxation; they do not become negligible at large strain levels. Starting with the intra-axis electron-impurity processes, we calculate the resulting spin relaxation by substituting (19) into (7). For the short-range interaction, denoted by the first term in (28), the spin relaxation in the Boltzmann limit follows, where τ D and T R were defined in (23), and D 0,1 in (19c).
Compared with τ D , α D has a weaker dependence on donor identity, ranging from 1.6×10 −3 for Si:Sb to 5.7×10 −3 for Si:P. Compared with the impurity-induced inter-axis spin relaxation in (23), the smallness of α D demonstrates the negligible effect of intra-axis impurity scattering on spin relaxation in unstrained silicon. When deviating from the Boltzmann limit, the strain-dependent correction factor in (28) follows, where I n (ε) was defined in (26). Turning to the long-range interaction, denoted by the second term in (28), the spin relaxation in unstrained silicon in the Boltzmann limit follows, where r N = N D /N T . Here, E i (x) is the exponential integral special function. Unlike the short-range mechanism, there is no dependence on donor identity (all donors yield a similar long-range ionized Coulomb potential). Similar to the other intra-axis mechanisms, however, this mechanism has a negligible contribution to the spin relaxation in unstrained silicon compared with the contribution from the inter-axis processes. In fact, it will be shown to be smaller than the intra-axis electron-phonon interaction as well as the intra-axis short-range impurity scattering in Si:Sb and Si:As. Following the notation used in (28), the strain-dependent correction factor when deviating from the Boltzmann limit follows. The final spin relaxation mechanisms are those from intra-axis electron-phonon scattering, denoted by the last two terms in (28). They are driven by intravalley scattering with acoustic phonons (mainly transverse modes) and by g-process intervalley scattering with shortwave phonon modes of ∆ 1 symmetry. 19 The intravalley and intervalley g-process strain-free rates in the Boltzmann limit follow, 18,19 where τ ac,0 ∼ 50 ns and τ ∆,0 ∼ 2 µs are governed by the respective electron-phonon spin-flip matrix elements.
The temperature dependence of the g-process is carried by the parameter r g = T g /2T , where T g = 240 K is the energy of the relevant ∆ 1 shortwave phonon mode. K 2 is the second-order modified Bessel function of the second kind. The intravalley correction factor is similar to that of the short-range potential, C ac = C I,sr , provided in (30). The expression for the intervalley g-process correction (C ∆ ) is cumbersome, and we provide it in Appendix B. In the Boltzmann limit, all of the intra-axis correction factors, C I,sr , C I,lr , C ac , and C ∆ , approach a common limiting form with r v = ∆ v /2k B T . In this limit and for a large valley-splitting energy (e −2r v ≪ 1), the intra-axis spin relaxation is twice as strong when the strain and spin orientations are parallel (δ s,v = 1) compared with the case in which they are perpendicular (δ s,v = 0). V. RESULTS AND DISCUSSION The analysis provided in the previous section allows us to quantify the spin relaxation time of mobile electrons in n-type silicon for any temperature, donor concentration, donor identity, and valley splitting energy (in the uniaxial compressive strain configuration). It takes less than a minute to generate the results of each of the figures in this work on a standard personal computer. Figure 2 shows the spin relaxation times in unstrained silicon as a function of temperature in Si:P, Si:As and Si:Sb. The donor concentration in all cases is N D = 10 19 cm −3 . For the case of Si:Sb, shown in the right panel, the relaxation is governed by inter-axis scattering off impurities at all temperatures. 26 In Si:P, the spin relaxation is governed by this mechanism at low temperatures and by the other inter-axis mechanism at high temperatures (interaction with shortwave Σ-axis phonons). The intra-axis spin relaxation (τ intra s ), shown by the dashed blue line, has a marginal contribution in all cases.
The small effect of the intra-axis mechanism applies at lower donor concentrations as well, where the relaxation is largely governed by the interaction with the shortwave Σ-axis phonons. 18 The only means of bringing the intra-axis mechanism into play is strain-induced quenching of the inter-axis mechanisms. Figure 3 shows the spin relaxation time as a function of the strain-induced valley splitting energy in Si:P, Si:As and Si:Sb at three temperatures. The donor concentration in all cases is N D = 3×10 19 cm −3 . In all cases, the spin relaxation switches from being governed by inter-axis mechanisms to intra-axis ones at large valley splitting energies. The enhancement of the spin relaxation time is most evident at low temperatures because of the increased ratio between the valley splitting and thermal energies, as well as the weaker effect of phonon-related interactions. Furthermore, the improvement is larger than two orders of magnitude for Si:Sb at low temperatures due to quenching of its strong inter-axis impurity scattering. 26 We also notice that while the inter-axis spin relaxation mechanisms are quenched by strain, the intra-axis spin relaxation becomes slightly faster. The latter is explained by the increase of the chemical potential with respect to the conduction band edge. Specifically, the electron density is distributed among six valleys at zero strain but among only two valleys when the strain-induced valley splitting energy is very large. This electron redistribution leads to a change in the chemical potential, which for degenerate doping such as the one studied here, N D = 3×10 19 cm −3 , results in an increase from about 12 to 60 meV at room temperature (with respect to the conduction band edge) and from about 30 meV to 60 meV at 30 K. Given that the intra-axis spin-flip matrix elements are commensurate with the electron wavevector, the relaxation rate increases roughly linearly with the Fermi energy.
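The quoted shift of the chemical potential can be reproduced with a simple parabolic-band model with m d = 0.32m 0 and either six or two occupied valleys. The effective-density-of-states prefactor 2.51×10 19 cm −3 (for the free-electron mass at 300 K) and the quadrature settings below are standard textbook inputs, not values from this paper:

```python
import math

def fermi_half(eta, n=4000, xmax=60.0):
    """Fermi-Dirac integral of order 1/2, normalized so it -> exp(eta) for eta << 0."""
    dx = xmax / n
    s = sum(math.sqrt((i + 0.5) * dx) / (1.0 + math.exp((i + 0.5) * dx - eta))
            for i in range(n)) * dx
    return 2.0 / math.sqrt(math.pi) * s

def chemical_potential(n_cm3, T, g_v, m_rel=0.32):
    """Solve n = g_v * Nc * F_{1/2}(mu/kBT) for mu (in eV) by bisection."""
    kBT = 8.617e-5 * T
    Nc = 2.51e19 * m_rel**1.5 * (T / 300.0)**1.5   # per-valley effective DOS, cm^-3
    target = n_cm3 / (g_v * Nc)
    lo, hi = -1.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if fermi_half(mid / kBT) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu6 = chemical_potential(3e19, 300, 6)   # six valleys (unstrained): ~12 meV
mu2 = chemical_potential(3e19, 300, 2)   # two valleys (large strain): ~60 meV
```

The jump from roughly 12 to 60 meV at 300 K agrees with the values quoted above.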
Figure 4 shows the improvement ratio of the total spin relaxation time when the strain-induced valley splitting energy is 100 meV (i.e., τ s (∆ v = 100 meV)/τ s (∆ v = 0 meV)). There are several notable features. The first one is that the improvement is most significant at low temperatures due to the negligible population of the high-energy valleys. At room temperature, on the other hand, ∆ v /k B T is of the order of 4, which is not sufficient to quench the inter-axis processes. Accordingly, the improvement is nearly three orders of magnitude in Si:Sb at 30 K while being much smaller at room temperature. The second feature is that the ratio initially increases with donor density before sharply decaying at very large densities. The conjunction of two factors gives rise to the initial increase: (i) the electron-impurity scattering becomes more significant when increasing the donor density, and (ii) the strain quenches the inter-axis elastic scattering more effectively than the inelastic one. The latter involves Σ-axis phonons whose energy renders the effective valley splitting smaller. Specifically, electron transitions can take place already when the electron energy is ∆ v − ε Σ with respect to the conduction band edge (ε Σ ∼ 47 meV for the dominant Σ 1 mode). As a result, the strain more effectively quenches the elastic electron-impurity inter-axis mechanism, for which electron transitions can take place only when the electron energy reaches ∆ v . This behavior explains the initial increase of the ratio in Fig. 4 with donor concentration (bigger role played by the electron-impurity interaction). (Figure 5 caption: dependencies of these mechanisms on temperature at N D = 10 19 cm −3 and on donor concentration at T = 20 K, respectively; unlike the interaction with the short-range impurity potential, the Coulomb interaction with the long-range ionized impurity potential (dashed lines) and the interaction with phonons (black solid lines) are independent of the donor identity.)
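The competition described here can be illustrated with a toy two-channel rate model. The lifetimes τ inter = 1 ns and τ intra = 1 µs are placeholder values chosen only to mimic the orders of magnitude seen for Si:Sb in Fig. 3, not fitted parameters; the exponential factor is the Boltzmann suppression of the high-energy valleys:

```python
import math

kB = 8.617e-5   # Boltzmann constant, eV/K

def improvement(dv, T, tau_inter=1e-9, tau_intra=1e-6):
    """Toy ratio tau_s(dv)/tau_s(0), with the inter-axis rate quenched by exp(-dv/kBT)."""
    rate0 = 1.0 / tau_inter + 1.0 / tau_intra              # unstrained total rate
    rate = math.exp(-dv / (kB * T)) / tau_inter + 1.0 / tau_intra
    return rate0 / rate

r30 = improvement(0.1, 30)     # ~1000x: full crossover to the intra-axis channel
r300 = improvement(0.1, 300)   # ~50x: quenching incomplete since dv/kBT ~ 4
```

Even this crude model reproduces the qualitative trend of Fig. 4: a near three-orders-of-magnitude gain at 30 K that shrinks drastically at room temperature.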
This behavior supports the fact that the improvement is larger for Si:Sb, in which the inter-axis electron-impurity scattering is strongest (compared with Si:P, in which the electron-phonon interaction plays a bigger role). Finally, the improvement in spin relaxation sharply decays for all donor types at concentrations close to N_D = 5×10^19 cm^−3. At these densities, the chemical potential is higher than the valley splitting energy (µ > ∆_v = 100 meV in this case), so the high-energy valleys become populated and the inter-axis mechanisms are restored. In other words, the improvement in spin relaxation under degenerate doping conditions is viable only as long as the chemical potential stays below the valley splitting energy. The last effect we focus on is the relative role of the intra-axis mechanisms in strained silicon. Figures 5(a) and (b) show the dependencies of these mechanisms on temperature and donor concentration, respectively. The strain-induced valley splitting is ∆_v = 120 meV. We notice that the spin relaxation due to the long-range interaction of electrons with impurities is a relatively weak effect (dashed lines). This interaction shows a relatively strong dependence on temperature compared with that of the short-range interaction. The reason originates from the decreased role of screening at high temperatures; thermally agitated electrons screen less effectively and therefore the spin relaxation is enhanced. Finally, Fig. 5(b) shows an atypical trend wherein the intra-axis spin relaxation time due to phonons has a stronger dependence on donor density than the long-range interaction with impurities. The weak dependence of the latter is understood from the fact that the increased donor concentration is accompanied by stronger screening, and the two effects cancel each other (i.e., the relaxation saturates). The enhancement of the intra-axis spin relaxation due to phonons with increasing donor concentration is understood from the rise of the chemical potential.
The phonon wavevector involved in the spin-flip of the electron is proportional to √E_F rather than √(k_B T) in degenerately doped silicon, thereby enhancing the relaxation rate with increasing donor density.

VI. CONCLUSION

We have derived the intra-axis spin-flip matrix elements due to scattering off impurities (19), taking into account contributions from both the short-range and long-range parts of the impurity potential. This derivation complements our previous studies of the phonon-induced spin relaxation and the inter-axis impurity-induced spin relaxation. 18,19,26 Importantly, the complete analytical framework in Sec. IV allows fast calculation of the spin relaxation time in n-type silicon due to the interaction of electrons with phonons and impurities for any temperature, strain level, donor identity and concentration. We have also provided analytical expressions that quantify the anisotropy of the spin relaxation time as a function of the angle between the strain and spin orientation. This work provides a clear motivation for employing silicon spintronic devices in which the spin transport region is compressively strained along one of the crystallographic axes. We have quantified the improvement in the spin relaxation of n-type silicon when applying this type of strain. The spin relaxation time improves markedly when the strain is large enough to depopulate the electrons from the high-energy valleys. In non-degenerate silicon, this condition is met when ∆_v ≫ k_B T, where ∆_v is the strain-induced valley splitting energy. In degenerately doped silicon, the condition is met when ∆_v > µ, where µ is the chemical potential. The results show that the inter-axis elastic impurity scattering is quenched more effectively by the strain than the inter-axis inelastic phonon scattering.
As a result, a larger improvement is expected in degenerately doped Si:Sb than in Si:P due to the relatively large spin-orbit coupling of Sb, and hence the bigger effect of the electron-impurity interaction on the spin relaxation in Si:Sb. We have predicted that the spin relaxation time can be enhanced by nearly three orders of magnitude in low-temperature Si:Sb, and by slightly more than one order of magnitude in Si:P under the same strain level, donor concentration, and temperature conditions. Another finding of our work is that the long-range interaction of electrons with the ionized impurity potential (Coulomb scattering) bears no practical significance for spin relaxation in silicon. It is much weaker than the inter-axis spin relaxation mechanisms and is also weak compared with all other intra-axis mechanisms (especially acoustic phonons and the short-range interaction with impurities in Si:Sb and Si:As). On the other hand, this intra-axis, long-range interaction is known to play a crucial role in limiting the electron mobility of doped semiconductors. 26,114-120 Its insignificance for spin relaxation versus its importance for momentum relaxation implies that estimating the Elliott-Yafet spin relaxation time in n-type silicon by multiplying the momentum relaxation time with some coefficient that depends on the spin-orbit coupling is an arbitrary choice.

APPENDIX A

This Appendix includes details on the derivation of the k·p Hamiltonian, which we used throughout the main text. Unlike many semiconductors in which the band extrema reside at highly symmetrical points of the Brillouin zone (e.g., the Γ point in GaAs or the L point in germanium), the conduction band minimum in silicon resides on the ∆-symmetry axis, connecting the Γ and X points. The minimum is located about 0.15(2π/a) away from the X point (k_0 = 0.15 k_X).
In choosing between the symmetry groups of the ∆-axis and the X point to describe the electronic states at the bottom of the conduction bands in silicon, we choose the X point since it has four times more symmetry operations (G_32^2 versus C_4v). The set of X-point symmetry operations is large enough to determine a compact form of the k·p Hamiltonian, and we find that second-order perturbation theory in k_0 produces an accurate eigensystem along the ∆-axis, despite the fact that k_0 is not negligible.

Time reversal symmetry

We begin by invoking time-reversal arguments in order to determine whether the amplitudes in front of different terms in the Hamiltonian are real or imaginary. The time-reversal operator T̂ = σ_y K̂, where K̂ is the operator of complex conjugation, anti-commutes with momentum. From (2) we conclude that complex conjugation K̂ in our basis is represented by ρ_x for both X_1 and X_4. As shown in Tab. I, the same is true for the inversion Î, so for our basis functions these two operators are identical: K̂ = Î. Time reversal transforms the momentum matrix elements, where the last equality is obtained by applying Î on top of T̂, and analogously for the conjugate elements. In a similar fashion, one can transform matrix elements of operators that commute with T̂. One of these operators is the spin-orbit interaction of the host crystal (which also commutes with Î), where U_at is the (intrinsic) atomic potential of the pure silicon crystal. Finally, we consider operators that represent the impurity potential V and its spin-orbit interaction ∝ ∇V · [s × p̂]. They are not symmetric under inversion, so their time-reversal symmetry relations contain ρ_x matrices.

Spatial symmetries

The structure of the k·p Hamiltonian is determined by matrix elements ⟨X_i|Ô|X_j⟩, where i, j = 1, 4, and Ô can represent the k·p terms, the impurity potential V̂, or the spin-orbit interaction terms ∇U_at · [s × p̂] and ∇V · [s × p̂].
Let R_i and R_j be IR matrices that represent some symmetry operation ĝ acting on X_i and X_j, respectively. The transformation properties of matrix elements that involve Ô then follow. Following Ref. [105], the operations C_2, S_4, and Î are sufficient to determine the form of the k·p Hamiltonian in the vicinity of the X point. Tab. I shows the generators of these operations for the chosen conduction (X_1) and valence (X_4) states in (2). Below, we demonstrate how (A6) and Tab. I are used to evaluate the non-vanishing interband spin-independent matrix elements of Ô = k·p. Focusing first on the inversion operation Î, for which R_1 = R_4 = ρ_x, we obtain that ⟨X_1|k·p|X_4⟩ cannot contain any terms proportional to ρ_x or ρ_0. Similarly, we may use other symmetry operations to derive further restrictions on ⟨X_1|k·p|X_4⟩. We see that the terms k_z·(ρ_y, ρ_z) are forbidden by S_4^2, and (k_x·ρ_z, k_y·ρ_y) by C_2. Finally, we are left with the spin-conserving invariant, H_cv ∝ a k_x ρ_y + i k_y ρ_z. By applying S_4 to this expression, we realize that a = 1. The spin-independent part of ⟨X_1|k·p|X_4⟩ is then given by (A8). Thus, using spatial symmetries we are able to determine that (A8) represents the spin-independent structure of the off-diagonal (interband) block of our impurity-free Hamiltonian. In order to see whether the coefficient P in (A8) is real or imaginary, we have to employ time-reversal symmetry. For the case of (A8), Eq. (A2) with i = 1 and j = 4 makes it clear that the coefficient P in (A8) is real. We can apply a similar approach to check which of the spin-dependent terms are symmetry forbidden. In the absence of impurities, such terms could arise from matrix elements of ∇U_at · [s × p̂]. We now have to consider not only the orbital parts of R_i and R_j in (A6), but their spinor-rotation parts as well.
In spinor space, the generators of Î, C_2, and S_4 are represented by matrices involving σ_0, a 2×2 unity matrix acting in spinor space (analogous to ρ_0 ≡ 1, which denotes a 2×2 "orbital" unity matrix). Considering first the terms proportional to ρ_0, we see that Î allows (σ_0, σ_x, σ_y, σ_z), C_2 allows (σ_0, σ_y), and S_4^2 allows (σ_0, σ_z). Thus, ρ_0 ⊗ σ_0 is the only allowed invariant term. Similarly, we deduce that ρ_y ⊗ σ_0 is forbidden by Î. The presence of impurities results in additional perturbative terms. The complete Hamiltonian for a silicon crystal with randomly placed impurities can be written in the mixed momentum-coordinate representation as in (A11), where N = N_D|Ω| is the total number of (randomly placed) defects, H_X(k) is the impurity-free k·p Hamiltonian provided in (A10), and ρ_j denotes the discrete coordinate that lists all of the cells with impurities. The function χ has a value of one within unit cells that contain impurities and zero otherwise. In order to evaluate the form of δH(k) in (A11), we note that the impurity potential, V(r), has two contributions. The first is invariant with respect to all operations of the diamond point group, while the second transforms according to the IR M_2 of the G_32^2 group. The latter flips sign under the space-inversion operation (it transforms as the product xyz). Repeating the invariant-based analysis of Apps. A 1 and A 2, we obtain the impurity-induced interband correction terms (responsible for the Yafet process) in (A13), where the X-basis functions are normalized with respect to the elementary cell volume |v|. A different, elegant way to derive the interband invariants in (A13) is to make use of the selection rules of M_2 with transverse vector components (k_x and k_y) and pseudovector ones (σ_x and σ_y).
Transformation properties of the former/latter are represented by M_5/M_5', and their interaction with M_2 flips their roles. That is, M_2 switches between the transverse components of vectors (M_5) and pseudovectors (M_5'): x ↔ σ_x and y ↔ σ_y. This exchange rule establishes a connection between the terms linear in momentum in (A10) and the impurity-induced spin-flip corrections to these equations. For example, let us first inspect the invariant k_x ρ_y + i k_y ρ_z in (A10), which contains the x and y vector components of k (M_5 IR). In the disorder-induced part of the Hamiltonian, δH, impurity terms transform according to M_2. Therefore, we should replace the vector components in k_x ρ_y + i k_y ρ_z with pseudovector ones in order to find the analogous term in δH. The only pseudovector we have is spin, so δH includes the invariant ρ_y ⊗ σ_x + i ρ_z ⊗ σ_y (corresponding to the first term in (A13)). In order to compare the dominant interband spin-mixing terms in (A10) and (A13), we make use of the fact that V_cv ∼ δ_B ∆_so, where δ_B = a_B^3/V and ∆_so is the spin splitting of the impurity ground state. We obtain V_cv ≈ 3.0 meV for Si:P, 10.6 meV for Si:As, and 31.3 meV for Si:Sb (A16). We can now compare the interband spin-mixing amplitude of the impurity with that of the host atoms (V_cv in (A13) versus ∆_X in (A10)). The latter gives rise to the interband spin mixing in clean silicon at k = 0. Given that Si and P are neighboring elements in the periodic table, it is reasonable that their spin-orbit coupling parameters are comparable in Si:P: V_cv = 3.0 meV and ∆_X = 3.6 meV. Turning to impurity-induced intraband terms, we follow the analysis of App. A 2 in order to find the spin-dependent correction to H_c. This correction term is responsible for the central-cell Elliott process, where δ_B ∆_0 denotes short-range spherically symmetric corrections to the screened Coulomb potential. 100
The second term in (A17), δ_B ∆_1 ρ_y, arises due to the low-symmetry (T_d) part of the impurity potential, which is responsible for the spin-independent splitting of the s-state into singlet, triplet, and doublet. Finally, the intra-valence-band impurity terms follow, partially lifting the degeneracy of the valence band near the X point. [Figure caption: the numerical values of C_Σ1-3 as a function of the valley splitting energy (∆_v) for various temperatures and donor concentrations.] In the Boltzmann limit, we obtain an expression in terms of r_v = ∆_v/2k_B T and the first-order modified Bessel function of the second kind, K_1(x). The general expression for the intervalley g-process correction factor that appears in (28) contains terms of the form [e^(−r_g) I_(+,g)(0, −ε_∆1) + e^(r_g) I_(−,g)(0, ε_∆1)](1 + δ_(s,v)), where r_g = T_g/2T and ε_∆1 = k_B T_g (T_g = 240 K). In addition,
Investigation of Different Free Image Analysis Software for High-Throughput Droplet Detection

Droplet microfluidics has revealed innovative strategies in biology and chemistry. This advancement has delivered novel quantification methods, such as droplet digital polymerase chain reaction (ddPCR) and an antibiotic heteroresistance analysis tool. For droplet analysis, researchers often use image-based detection techniques. Unfortunately, the analysis of images may require specific tools or programming skills to produce the expected results. In order to address the issue, we explore the potential use of standalone, freely available software to perform image-based droplet detection. We select the four most popular software and classify them into rule-based and machine learning-based types after assessing the software's modules. We test and evaluate the software's (i) ability to detect droplets, (ii) accuracy and precision, and (iii) overall components and supporting material. In our experimental setting, we find that the rule-based type of software is better suited for image-based droplet detection. The rule-based type of software also has a simpler workflow or pipeline, especially aimed at non-experienced users. In our case, CellProfiler (CP) offers the most user-friendly experience for both single-image and batch-processing analyses.

■ INTRODUCTION

Droplet microfluidics has become a powerful tool for high-throughput analysis over the last few decades. 1 It allows compartmentalization of samples in massive parallelization. 2 This high-throughput technique is also compatible with different analytical technologies, e.g., mass spectrometry. 3 Droplets are often applied for high-sensitivity nucleic acid diagnostics 4 or different microbiological studies.
5 For instance, the tool has also been used to perform high-throughput screening for protein crystals, 6 DNA quantification by droplet digital polymerase chain reaction (ddPCR), 7,8 detection of viable bacteria and heteroresistance in antimicrobial experiments, 9,10 or experiments with mammalian cells. 11 Image-based analysis has often been used in droplet microfluidic experiments. 12 The analysis has been implemented for different types of image data, from single static images up to real-time data, acquired by either bright-field or fluorescence microscopy. 13 This approach has been used for a wide range of experiments, such as bacterial surveillance of foodborne contamination, 14 screening of specific substrates, 15 single-cell analysis, 16 and detection of viable bacteria or viruses (e.g., SARS-CoV-2). 17,18 Image-based droplet analysis (IDA) often requires specific programming skills that are not widely available in non-specialist laboratories. Most of the published articles on droplet detection use scripted programs, such as the Circular Hough Transform in the Python programming language, 19 Mathematica, 20,21 Scikit-image in Python, 22 the Image Processing Toolbox from MATLAB, 23 OpenCV and Keras in Python, 24 and OpenCV in C++. 25 There are some user-friendly software that may be used for droplet microfluidic image analysis, such as the Zen imaging program 26 and NIS-Elements from NIKON. 14 However, these kinds of programs are only commercially available. There is therefore a need for widely accessible and user-friendly IDA tools. Open-source software is available and can be used to detect and/or analyze droplets. For example, ImageJ software has been used to analyze image data in general, 27 including droplets, 28 and CellProfiler (CP) was developed to identify and measure various bioimage data.
29 Even though some published articles mention the use of this software, information regarding the workflows is limited (the data are often missing from publications). This can confuse early-stage researchers with little or no experience in image analysis, specifically for image-based droplet detection without programming skills. However, novel workflows can be constructed by combining functions, modules, or pipelines from different software, like building a puzzle. 30 Here, we (i) demonstrate how to use different software for the analysis of droplet images in static 2D images and (ii) explore the differences and similarities of the workflows in the different software from the perspective of detecting, counting, and measuring droplet properties (including but not limited to droplet number, diameter, fluorescence intensity, etc.) using four selected software (Table 1).

■ RESULTS AND DISCUSSION

Software Selection and Workflow Construction. The most popular software for image analysis are ImageJ (IJ), CellProfiler (CP), Ilastik (Ila), and QuPath (QP). Here, we used the Twitter and Scopus repositories to gauge the popularity of the software in the field of image analysis. Twitter has been used for research purposes before. 32 Social media also give researchers the opportunity to "push" their findings, which correlates with higher citation counts. 33 To assess popularity, we executed the Twint 34 Python script using each software's name as the keyword, and used the same keywords for the Scopus search. Both searches covered the period from January 1, 2010 to December 31, 2020. Based on the Scopus and Twitter searches (obtained on February 11, 2021), we plotted the total number of "tweets" (texts of at most 160 characters from Twitter) against the total number of Scopus hits in a scatter plot (Figure 1A). The most popular software are ImageJ, 27 CP, 35 Ilastik, and QuPath, shown in blue, red, cyan, and green, respectively.
Ilastik uses the concept of supervised machine learning in its workflow, 36 and QuPath has been used as a whole-slide image analysis tool. 37 We continued with these four popular software tools and used them to detect droplets in the image dataset previously described by Bartkova et al. 31 (Figure 1B). We then took a deeper look into their workflows and assessed their performance with different key parameters (Figure 1C). Rule-Based and Machine Learning-Based Software for Droplet Detection. We divided the selected software into two groups (rule-based and machine learning-based) according to their workflow. In the rule-based software group (CP and ImageJ), users have to manually provide settings, with numeric or otherwise known parameters, for the program to select the pixels of interest in order to detect droplets. In the machine learning-based group (Ilastik and QuPath), on the other hand, users select areas of the image (labeling) and manually annotate them as objects of interest (e.g., droplets or background) for pixel classification. Based on these characteristics, we described the abstraction of the process on three increasing levels and used it to direct the image-based droplet detection. Pre-processing, Processing, and Post-processing Concepts. We use the terms (i) pre-processing, (ii) processing, and (iii) post-processing. (i) In pre-processing, we modified, adjusted, and prepared the image data for further use. For instance, we performed pre-processing to duplicate the image data, introduce features, and make annotation(s) on the image. We also include here the image setup, such as image upload, metadata settings, and supporting options applied before processing the image data, for instance, the macro recorder in IJ and the metadata setup in CP. (ii) In processing, we conducted segmentation, or pixel partitioning based on color, intensity, or texture, along with the droplet detection or counting process.
38 Usually, processing steps help users obtain a specific type of data. 39 In our case, we introduced thresholding to distinguish between the background (dark) and the foreground (droplets). Here, CP came in handy and needed only one module, named "IdentifyPrimaryObject", which contains several options to detect droplets, including thresholding, smoothing, segmentation, and automatic selection. In ImageJ, the processing stage had three steps: "Thresholding", "Watershed", and "Analyze Particle". Similar to CP, these three steps provide the selections needed to detect the droplets. In the processing part, Ilastik had to run "Thresholding", "Object Feature Selection", and "Object Classification" to select the droplets and discard the background. In QuPath, we found all of these features in the "Pixel Classifier". The settings included a classifier based on an artificial neural network with a multilayer perceptron (ANN_MLP) 40 at high resolution, using four multiscale features (Gaussian, gradient magnitude, Hessian determinant, and Hessian max eigenvalue) with probability as the output. (iii) In the last step, post-processing, we prepared the data for extraction or generation for further use, for example, to generate a table of data or images for visualization. In CP, this last step was performed with "OverlayOutlines", "OverlayObject", "DisplayDataOnImage", and "ExportToSpreadsheet". These modules generated the images and results in CSV format. The order was similar in ImageJ and Ilastik, but the options were available in the "ROI Manager" and "Export", respectively. In QuPath, the results can be obtained by exporting annotations from detected objects, also called labeled images. We used a Groovy script to generate this result, using commands in the "Workflow" tab. Groovy is a compiled language that can be integrated seamlessly with Java. However, it has some semantic and practical differences, especially regarding syntax.
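The rule-based processing stage described above (threshold, segment, then measure) can be sketched without any GUI in a few lines of Python. This is not the code used by any of the four tools; it is a minimal, hypothetical reimplementation with scipy.ndimage, using a fixed intensity threshold in place of the tools' automatic thresholding modules.

```python
import numpy as np
from scipy import ndimage

def detect_droplets(image, threshold=0.5, min_area=20):
    """Threshold a grayscale image, label connected regions, and keep
    regions above a minimum area (a stand-in for the size filters in
    'Analyze Particle' or 'IdentifyPrimaryObject')."""
    foreground = image > threshold          # thresholding step
    labels, n = ndimage.label(foreground)   # segmentation step
    sizes = ndimage.sum(foreground, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
    return labels, keep

# Synthetic image with two bright disks on a dark background.
yy, xx = np.mgrid[0:100, 0:100]
img = ((yy - 30) ** 2 + (xx - 30) ** 2 < 100).astype(float)
img += ((yy - 70) ** 2 + (xx - 70) ** 2 < 100).astype(float)

labels, droplets = detect_droplets(img)
print(len(droplets))  # two droplets detected
```

A real pipeline would add a watershed step to split touching droplets, as the ImageJ workflow does; this sketch only covers the threshold-label-filter core.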
41 For a brief overview of the workflow/pipeline, we provide the scheme of the third level of complexity in Figure 2. [Table 1 notes: "Wide" denotes general image file types, such as TIFF, JPEG, PNG, etc.; the software generate object size, pixel intensity, circularity, object position, etc.; ✓ = available, − = not available.] CP Has the Highest Accuracy and Precision. By comparing the results with manually counted droplets (7145), we investigated the ability of the analyzed software to detect droplets. We only counted the droplets that did not touch the image border and did not form a bundle (joint droplets resulting from failed segmentation). [Figure 1 caption: (A) We generated water-in-oil droplets using a flow-focusing microfluidic chip (left). We used a fluorescence microscope to obtain "raw images" of droplets that contained fluorescence-producing bacteria (middle). For the analysis of droplet images, we used the four most popular image analysis software, selected according to hits in social media (Twitter) and a Scopus search (obtained on February 11, 2021) (right). (B) Droplet detection comparison among (i) ImageJ (IJ), (ii) CellProfiler (CP), (iii) Ilastik (Ila), and (iv) QuPath (QP). (C) We divided the image processing software into two groups (rule-based and machine learning-based) and explored their logic and working principles on three levels of abstraction. (1) The first level shows that the software are very similar in their basic image processing logic: they usually have three stages, pre-processing, processing, and post-processing. (2) The second level shows the distinction between the two groups in droplet detection: rule-based, where users define how to detect droplets by giving specific parameters (e.g., threshold or size), and machine learning-based, where users classify/annotate groupings of pixels on an image. (3) The third level shows the number of different steps and modules in the processing stages. For the object on the left side of each workflow, we use a triangle for a module with only one option, a rectangle for a module with two to eight options, and a circle for a module with more than eight options.] We performed sensitivity and specificity tests using True Positive (TP), False Positive (FP), and False Negative (FN) values based on the comparison with manual counting. 42 A TP is a correct droplet detection in the data. An FP is a false droplet detection, i.e., an overestimation (type I error). An FN occurs when the software misses a droplet, i.e., an underestimation (type II error). We defined TN as the background (black = 0). From these counts, we obtained the accuracy ((TP + TN)/(TP + TN + FP + FN)) and the precision (TP/(TP + FP)) of the detection. The accuracy is the ratio between correct detections and the total number of detection calls, while the precision describes the probability that a positive droplet detection is correct. 43−45 The accuracy of each detection ranges from 74.7 to 96.2%. One of the software managed to reach a precision of up to 99.8% (Table 2). Low-Image Quality Gives More False Detection. From Figure 3, we can see how each group shares similar errors in every event (detection per image). We compared the false detection results (both FP and FN) from each of the software. We found that the rule-based group (CP and ImageJ) has fewer false detections than the machine learning-based group (Ilastik and QuPath). Ilastik and QuPath received high error counts because they do not have filters to eliminate droplets that touch the border, and some droplets are falsely detected as joint droplets (Figure S1). Figure 3 also highlights images whose quality is too low for reliable droplet detection.
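The accuracy and precision formulas above are straightforward to encode. The counts below are illustrative values for demonstration only, not the paper's measured results.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all calls (droplet or background) that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of positive droplet detections that are correct."""
    return tp / (tp + fp)

# Example: 95 correct detections, 3 false alarms (FP), 2 missed
# droplets (FN), and 100 correctly ignored background regions (TN).
print(accuracy(95, 100, 3, 2))   # 0.975
print(precision(95, 3))          # ~0.969
```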
For instance, image numbers 2, 19, and 64 show the highest error values across all four software. Notwithstanding, CP outperforms the other software and has both high accuracy and precision. Each Software Requires Different Workflows for Batch Processing. CP is the most suitable software for batch analysis or high-throughput analysis. In CP, we can analyze a whole set of images with a press of a single button, "Analyze Images", on the main menu. The software will process the images uploaded in the "Images" module (default module). We tested the batch processing option on 64 images straight after our pipeline/workflow was set. In ImageJ, we processed the batch analysis using a macro recorded during single-image analysis. We also performed some macro script cleaning (e.g., removing commands that close unnecessary tabs during the process) of what the macro recorder had written. After cleaning, we selected the input and output folders and performed batch processing through the "Process" tab. For Ilastik, we executed batch processing as the last step of the pipeline: we just needed to upload the images and start "Process all files". QuPath demanded macro-programming commands for executing batch analysis; however, this software provides an automated script generator that simplifies the macro record needed to perform batch analysis. ImageJ and QuPath thus required a macro script for batch analysis. Even though such a macro script is easy to write, creating one for the first time can become an obstacle for researchers who are not familiar with any programming language or practices. 46 [Figure 2 caption: Detailed third level of abstraction for image-based droplet detection using (i) CellProfiler, (ii) ImageJ, (iii) Ilastik, and (iv) QuPath. The symbols represent how many options are within each module, referring to the previous figure, where the triangle, rectangle, and circle represent one, two to eight, and more than eight options, respectively. The background colors correspond to pre-processing (cyan), processing (yellow), and post-processing (magenta).] From our viewpoint, CP and Ilastik had the most user-friendly interfaces for batch processing because they provide the option to scale up directly after single-image pipeline construction and do not require any programming steps; finding an additional button or tab to batch-process the images was unnecessary. On the other hand, extra steps and scripting were required in ImageJ and QuPath. Modularity Gives More Flexibility in Developing Pipelines. Rule-based software are flexible and have modular options for processing image(s). As rule-based tools, CP and ImageJ offered options that could be added and removed depending on the user's preferences, such as the type of thresholding algorithm, filters, and other modules. In machine learning-based software, the features were embedded in the pipeline with limited availability of additional settings. For example, Ilastik has some pre-defined pipelines, among them object classification and pixel classification. 36 These are fixed in the interface of Ilastik and may be rearranged only through Python programming. In Figure 1C, the third level of complexity also represents the modularity, in which Ilastik and QuPath are more limited than CP and ImageJ. For instance, CP has the "IdentifyPrimaryObject" module, which can be duplicated within one pipeline, while in Ilastik, "Thresholding" can be performed only once within the pre-defined workflow. This limitation places Ilastik as the least flexible tool, followed by QuPath. Batch Processing Time Is Shorter in Java-Based Software. The macro programming language affects the software processing time, particularly in batch analysis. CP and ImageJ required less computational power in our use since they did not implement machine learning classification methods in our pipelines.
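The modular, rule-based pipeline idea discussed above can be mimicked in code by treating a pipeline as an ordered list of interchangeable steps. The step names below are hypothetical and only echo the pre-processing/processing/post-processing split; they are not any tool's actual API.

```python
def binarize(threshold):
    """Return a processing step that thresholds a 2D grayscale grid."""
    def step(img):
        return [[1 if px > threshold else 0 for px in row] for row in img]
    return step

def count_foreground(img):
    """Post-processing step: count foreground pixels."""
    return sum(sum(row) for row in img)

def run_pipeline(image, steps):
    """Apply each module in order, like stacking modules in a pipeline."""
    result = image
    for step in steps:
        result = step(result)
    return result

image = [[0.1, 0.9], [0.8, 0.2]]
# Steps can be swapped, added, or removed without touching the runner.
print(run_pipeline(image, [binarize(0.5), count_foreground]))  # 2
```

This mirrors why CP's duplicatable "IdentifyPrimaryObject" module is more flexible than a fixed workflow: the runner does not care how many steps the list contains.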
The use of machine learning requires training and feature implementation, which demands more computational power. 47 The rule-based software used object logic classification 48 and did not require a training set to test the defined parameters, e.g., the size or maximum length of the object. In QuPath and Ilastik, the classification depended on a supervised machine learning process. 36,48 We used manual annotations (droplets and background) to build the classifier before processing all pixels. We also compared the minimum hardware requirements for each of the tools (Table 1). Based on the comparison, ImageJ was the only one that did not state a minimum requirement for random-access memory (RAM). We therefore expected the machine learning-based software to take more time to process the whole set of images, and we ran the whole pipeline in each tool to compare their performance. We tested each pipeline on the same computer, with an Intel Core i3-9100F processor, 8 GB RAM, an NVIDIA GeForce GTX 1660 SUPER, and a 120 GB SSD PANTHER, running the Windows operating system. In our setting (with the same environment and background settings), we found that QuPath and ImageJ perform faster than CP and Ilastik in batch processing (Figure 4). The experiment was conducted by running the same pipeline 10 times to determine the deviation as well. The tools' batch processing language (macros) may cause this difference. At the beginning, we expected Ilastik and QuPath to have a longer processing time than CP and ImageJ because of the machine learning-based processing. However, ImageJ and QuPath performed faster than the others. In principle, there are two types of programs that bioinformaticians use: compiled and interpreted. 49 ImageJ and QuPath use Java-based (macro) code that is compiled once before the program processes the batch analysis. Presumably, this allows the program to run faster.
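The timing protocol used above (running the same pipeline 10 times and reporting the spread) can be reproduced with the standard library alone. The `pipeline()` function below is a hypothetical stand-in for a real batch run, not the actual analysis code.

```python
import time
import statistics

def pipeline():
    # Hypothetical stand-in for one batch image-analysis run.
    total = 0
    for i in range(100_000):
        total += i * i
    return total

# Run the same pipeline 10 times and report mean and deviation,
# mirroring the benchmarking procedure described in the text.
runs = []
for _ in range(10):
    start = time.perf_counter()
    pipeline()
    runs.append(time.perf_counter() - start)

print(f"mean {statistics.mean(runs):.4f} s, "
      f"stdev {statistics.stdev(runs):.4f} s over {len(runs)} runs")
```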
On the other hand, CP and Ilastik use Python to process batch analysis. In Python, variables and functions are run through an interpreter every time the program needs to process the task, in our case, to detect droplets in every image. Regardless, we do not have enough evidence to claim that the type of software alone shortens the processing time. Nonetheless, a speed comparison of different languages (including Python and Java) running the same command showed that an implementation in Java performs up to 20 times faster than in Python. 50 We also note that different hardware can alter the performance of software in different settings, but the relative ratios of needed computing resources should be similar. Documentation Is Important in Pipeline Development. CP and ImageJ have sufficient examples and documentation for novice users. Each software provides documentation and examples for guiding its users. CP and ImageJ have been developed since 2005 and 1987, respectively. 46,51 Therefore, these rule-based software have more users and examples; e.g., ImageJ has a distribution compiling biological image analysis plugins, called Fiji. 27 CP also provides tutorials, examples, and other documentation on its website, e.g., on detecting different cell morphologies and tracking objects (www.cellprofiler.org). By contrast, Ilastik and QuPath have fewer worked examples to accompany new users, although both also provide extensive documentation, including manuals and tutorials for novice and advanced users, at their websites (https://ilastik.org/documentation and https://qupath.readthedocs.io). Additionally, forums such as the image.sc forum (forum.image.sc) actively help bioimage researchers and software users. Plugins May Ease Users to Perform Specific Image-Based Detection. Plugins in CP and ImageJ can be used as an extensible option in processing images.
Plugins or add-ons can be used to improve the default options within the software, and they may also be contributed by other software developers. As an additional option, plugins may help the user implement specific detection cases. Before Ilastik and QuPath were developed, ImageJ had a plugin called Trainable WEKA Segmentation that, in principle, works similarly to machine learning-based software. 52 Plugins are also available in CP; for instance, we found one plugin, called ImcPluginsCP, that analyzes mass cytometry (multiplexed) images. 53 Here, we did not add any plugins to detect droplets, and we used similar settings to assess each tool's ability to detect and count droplets. The extension software for CP, CellProfiler Analyst (CPA), 54 could be an option to enhance droplet detection, as described briefly in our previous research. 31 Based on our classification, CPA belongs to the machine learning-based software because users need to supervise or train on the data at the beginning. However, it is not a standalone software and requires a feature-extraction or properties file that contains the observed data from CP. 31,55 Image Components Play an Important Role in Processing. Rule-based software are more suitable for analyzing droplet microfluidic image data. Rule-based software provide more options, e.g., to disregard objects that touch the border/frame, which resulted in high accuracy and precision. On the other hand, the machine learning-based software required more optimization to train the classifier. We used only 12 lines (5 lines for determining droplets and 7 lines to define borders between droplets and background) to supervise the two classes (background and droplet). Each line represents the pixels for its group. This manual pixel selection works better if most of the image has similar properties and the annotations represent the pixel distribution of an object, for example, borders between droplets and empty droplets.
Even though the droplets' borders look the same across the image, the pixel distributions vary, so we picked more lines to define borders. We also needed extra time (minutes) to train the classifier on the 12 lines when setting up the machine learning-based software. However, this cannot represent all of the properties and may result in joined droplets. To overcome this, a larger training set and an improved classifier would presumably give a better result. As an image processing tool, machine learning-based software like QuPath has a more specific purpose. Moreover, this software was created to accommodate whole-slide-image and large-image data analyses, specifically for complex tissue images. 37 However, a comparison has been made between QuPath and CP coupled with CPA, 48 which also shows the pros and cons of the rule-based and machine learning-based software for renal tissues. Furthermore, ImageJ, CP, Ilastik, and QuPath have all shown their capability as standalone tools in detecting droplets and generating the results. Data Acquisition Can Be Embedded in the Pipeline. Droplet detection is often used as a preliminary step in droplet microfluidic experiments. It is possible to expand the pipeline for further analysis, e.g., bacteria detection, 31 enzyme reaction measurement, 56 chemical purification analysis, 57 and metal extraction. 58 This step is usually performed to extract different aspects of a droplet (size, texture, volume, etc.) through pixel analysis. However, each software has its own options and features for obtaining particular information, for example, "MeasureObjectSizeAndShape" and "MeasureObjectIntensity" in CP and "Set Measurements" and "ROI Manager" in ImageJ. Nonetheless, this further analysis is not within the scope of this article. We focus on the principles of image-based droplet detection in different software and the components that may help users with no experience in image-based analysis.
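The rule-based logic discussed above (a fixed intensity threshold, connected-component labeling, a size gate, and discarding frame-touching objects) can be sketched in a few lines of Python. This is an illustrative toy, not code from any of the four tools; the function name and the 8×8 test image are invented for the example:

```python
# Hedged sketch of rule-based droplet detection:
# threshold -> connected components -> border exclusion -> size filter.
from collections import deque

def detect_droplets(image, threshold, min_area, max_area, exclude_border=True):
    """Return a list of pixel areas, one per accepted object."""
    h, w = len(image), len(image[0])
    mask = [[pix >= threshold for pix in row] for row in image]  # segmentation
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or seen[y][x]:
                continue
            # flood-fill one connected component (4-connectivity)
            queue, area, touches_border = deque([(y, x)]), 0, False
            seen[y][x] = True
            while queue:
                cy, cx = queue.popleft()
                area += 1
                if cy in (0, h - 1) or cx in (0, w - 1):
                    touches_border = True
                for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if exclude_border and touches_border:
                continue                       # rule: drop frame-touching objects
            if min_area <= area <= max_area:   # rule: size gate
                areas.append(area)
    return areas

# toy "image": one interior blob, one isolated pixel, one border-touching blob
img = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 9, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 9, 9, 0, 0, 0],
]
print(detect_droplets(img, 5, 2, 10))  # -> [4]
```

In a real pipeline the threshold and size bounds play the role of the 1507 intensity cutoff and the 22,500-62,500 px² gate used in the article; a watershed step for touching droplets is omitted here for brevity.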
■ CONCLUSIONS This investigation gives insights into processing droplet microfluidic images using the four currently most popular software tools. We classified the types of open-source software into rule-based and machine learning-based groups. Both groups have three levels of complexity that cover pre-processing, processing, and post-processing steps. These steps help users, especially those with no programming experience, to choose and perform their image analysis. In our experimental setup, we found that the rule-based type of software is better suited for image-based droplet detection. The rule-based tools also have a simpler workflow or pipeline, especially for non-experienced users. In our case, CP outperforms the other software in terms of accuracy, precision, and user-friendliness (defined as the usability for non-experienced users of building the pipeline and performing image-based droplet detection using the available software modules). In terms of processing time, ImageJ and QuPath detect droplets in the 64 images faster. On the other hand, Ilastik provides a direct module that may ease image-based detection for early-stage researchers using the annotation principle. However, the optimal software choice may well differ for other users depending on their experimental conditions and acquired images. Our paper can serve as a starting point for them to compare available solutions and start optimizing settings, using either rule-based or machine learning-based software. In addition, published research, documentation, and forum discussions (such as www.image.sc) help in finding the most suitable software pipeline for image-based droplet detection and analysis. ■ METHODS Software Search and Selection. We used the selected software tools to detect droplets using the procedure explained by Bartkova et al.
31 We found several available and accessible software tools online, such as CP, 35 ImageJ, 27 Ilastik, 36 QuPath, 37 Icy, 59 BioFilmQ, 60 CellOrganizer, 61 CellCognition, 62 BioImageXD, 63 BacStalk, 64 Advanced CellClassifier, 65 Phenoripper, 66 and Cytomine. 67 We tested every software mentioned above for image-based droplet detection; however, not all of them had good documentation, workflows, references, and a user-friendly interface. Therefore, we tried to find the most preferred tools available online by using the Twint (Twitter Intelligence Tool) script 34 written in Python and a Scopus search on its website (https://www.scopus.com). Both searches used the same filters, including the search time (01-01-2010 until 31-12-2020), and accepted only results in English; data outside these filters were not considered. Both Twitter and Scopus data were obtained on February 11, 2021. We used each software's name as the keyword for the search. The Twitter search was executed in a Jupyter Notebook (ver. 6.0.3) 68 within Anaconda Navigator, 69 with datetime and Pandas imported as additional libraries. The Scopus search was performed using the same keywords. Both results were visualized together using the Bokeh and NumPy libraries in Python. 70−72 Droplet Generation and Image Acquisition. We repeated the method described in Bartkova et al. to generate droplets and their image data. 31 We used a set of 64 images to test the most popular software for detecting droplets. The images are 2D layers of droplets acquired by fluorescence confocal microscopy. We used the same images to find a suitable workflow for each software, described thoroughly in the following paragraphs. Using the data, we calculated the precision and accuracy of detecting the droplets by comparing the batch-processing results with manual counting of the same images.
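The accuracy and precision calculations referred to here follow the standard confusion-matrix definitions, with TN taken as 0 because the background is treated as one undifferentiated region (see the Methods). A minimal sketch with purely illustrative counts:

```python
# Standard detection-quality metrics; TN defaults to 0 for an
# undifferentiated background, as in the droplet images.

def accuracy(tp, fp, fn, tn=0):
    """Fraction of all decisions that were correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of reported detections that were real droplets."""
    return tp / (tp + fp)

# e.g. 60 correct detections, 2 spurious, 3 missed (illustrative numbers):
# accuracy(60, 2, 3) is 60/65 and precision(60, 2) is 60/62
```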
Image Analysis with the Most Popular Software. The image data were first analyzed as a single image in each tool. Pipeline Construction in CP. We used our previous pipeline 31 in CP as the basis for exploring the other tools. We uploaded the image through the drag-and-drop feature in the Images module and set the Metadata, NamesAndTypes, and Groups modules according to our setting. We used "IdentifyPrimaryObject" to detect droplets, with the same settings that are also provided in our GitHub repository (github.com/taltechmicrofluidics/CP-for-droplet-analysis). The "MeasureObjectIntensity" and "ExportToSpreadSheet" modules were also set as previously. The results were obtained automatically after pressing the "Analyze Images" button. Pipeline Construction in ImageJ. For ImageJ, we recorded the workflow with the macro record option; this record was used to make scripts for batch processing. To upload the image, we used Open Image from the File tab in the main menu. The parameters were set within "Set Measurements" under the "Analyze" tab, and we ticked only "Area" to obtain the pixel area of each droplet. This was followed by the processing workflow, which included segmentation using "Threshold" under the "Adjust" option in the "Image" tab. The threshold was set to 1507, corresponding to the 0.023 scale described in our previous article using CP. The thresholding was followed by "Watershed" to separate droplets from each other. The counting was performed using "Analyze Particles" under the "Analyze" tab. We set the size to the range described for CP, 22,500 up to 62,500 pixels², with a circularity of 0. Once we finished the processing step, we saved the image through the "Flatten" option in the "ROI Manager" menu. We obtained the results in a table that appeared immediately after the analysis. Pipeline Construction in Ilastik. In Ilastik, we used the "Pixel Classification" and "Object Classification" pre-defined workflows.
We loaded the image in the Input Data module and selected the features for the training set. Since we did not have any reference for this type of workflow, we followed the recommendation from the image.sc forum, starting by adding sigmas (scales) of 0.30, 1.00, and 3.50 for the selected features (e.g., Gaussian filter) for color/intensity, edge, and texture. We trained the program to distinguish between the background (dark) and droplets using manual annotations/labels. For thresholding, we used the default smoothing values (1.0 and 1.0) with a 0.70 threshold. For the size filter, we put values corresponding to the settings in ImageJ: 22,500 for the minimum size and 62,500 for the maximum size. This was followed by using the standard object-selection feature option and selecting the detected droplets in object classification as a sample. After finishing the setup, we obtained the results by exporting both the object predictions and the measured features. Pipeline Construction in QuPath. In QuPath, we started the workflow by creating a project (Create Project) and uploading the image (Add Image). Once the selected image was ready, we performed annotations similar to those in Ilastik, aiming to distinguish the background and foreground (droplets). After annotating the image, we performed "Pixel Classification" using the artificial neural network (ANN_MLP) classifier at high resolution (downsample = 4.0). For the features, the scales were 1.0, 2.0, and 4.0 for the Gaussian gradient magnitude, Hessian determinant, and Hessian max eigenvalue, respectively. We created object detection for droplets and measured all detected droplets. We set a thick boundary class to make borders between the droplets. We saved the measurement data from the measurement menu. Batch Processing in Each Software. In CP, we performed batch processing by loading the set of images in the Images module and pressing the "Analyze Images" button.
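All four tools share the same batch pattern: apply one fixed single-image pipeline to every file in an input folder and collect per-image results. A minimal sketch of that pattern; the folder layout, file extension, and counting callable are placeholders, not any tool's actual API:

```python
# Hedged sketch of the generic batch-processing loop the tools implement:
# iterate over an input folder, apply a fixed pipeline, write a result table.
from pathlib import Path
import csv

def run_batch(input_dir, output_csv, pipeline):
    """pipeline: callable taking an image path and returning a droplet count."""
    rows = [(p.name, pipeline(p)) for p in sorted(Path(input_dir).glob("*.tif"))]
    with open(output_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "droplet_count"])
        writer.writerows(rows)
    return rows
```

In the tools themselves this loop is hidden behind the "Analyze Images", "Batch Process", or "Process all images" button, or behind a generated Groovy script in QuPath.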
For ImageJ, we executed batch processing using the "Batch Process" option under the "Process" tab. We used the recorded macro, with some adjustments, to process the images in the Input folder; the results were generated directly into the Output folder. In Ilastik, we continued with batch processing straight after setting up the workflow: similar to CP, after uploading the images we only needed to press the "Process all images" button. In QuPath, we transformed the single-image workflow into a script to execute the batch processing. Since QuPath provides a script builder, we did not have to write the script ourselves, and we could start batch processing by executing the script for the whole image set in the project. However, exporting the image results from QuPath requires an additional script written in Groovy; we managed to generate the results, and the script can be found in our GitHub. We stored both the single and batch processing pipelines for each software here: (github.com/taltechmicrofluidics/Software-Analysis). Data Acquisition and Processing. We gathered all results and processed them in Microsoft Excel as follows. We tested the results with sensitivity and specificity tests and used manual counting as the reference: 42 accuracy = (TP + TN)/(TP + TN + FP + FN) and precision = TP/(TP + FP), where TP is a correct droplet detection compared with the ground truth, FP is a wrong detection (detecting background), FN is a wrong detection (the software cannot recognize an existing droplet), TN is the background (0), accuracy is the quality of correctness, and precision is the similarity upon repeated counting. The manuscript was written through contributions from all authors. I.S., O.S., and O.-P.S. conceived the study. I.S. conducted the research and wrote the article with support from all the other authors. S.B. and P.P. were responsible for the microbiology part, droplet experiments, and microscopy imaging.
All authors have given their approval to the final version of the manuscript. Notes The authors declare no competing financial interest. ■ ACKNOWLEDGMENTS The research was performed partially in the laboratory setup with support from the TTU Development Program 2016−2022 (project no. 2014−2020.4.01.16.0032). We also acknowledge the Estonian Research Council grants MOBTP109, PRG620, and MOBJD556. The authors received help with early software screening from Matin Nuhamunada, Afif Pranaya Jati, and Ahmad Ardi at Universitas Gadjah Mada (Indonesia).
Investigation of the Effect of Particle Surface Charge and Dispersion Stability on Latex Behavior in Cement Using Non-Ionic and Traditional Latexes

In this work, a novel totally non-ionic polystyrene-polyurethane (PS-PU) composite latex was synthesized with a polymerizable polyethylene glycol ether. Contrary to traditional styrene-butyl acrylate latex (St-BA), PS-PU has a smaller size and superior dispersion stability, and it is stable in saturated Ca(OH)2 even after 72 h. In fresh-mixed mortars, PS-PU showed little adverse effect on workability and insignificant air entrainment, with little defoamer consumption. The retardation effect of PS-PU is also much milder than that of traditional St-BA. As for strength, PS-PU showed a smaller adverse effect on early- and late-age compressive strength, but its effect on flexural strength is not as pronounced as St-BA's at high dosages (4% and 6%). The different behavior of PS-PU and St-BA in cementitious materials can be explained by their different adsorption behavior and surface charge properties, as the characterization results suggest. The non-ionic nature of PS-PU made it less prone to destabilization and adsorption, which manifested as the aforementioned behavior in cementitious systems. The difference can further be ascribed to the difference in their polymeric structure and properties.

However, there are still some limitations to PMC, such as high dosage (10-30% of binder mass), compressive strength loss and excessive air entrainment [17]. Compressive strength is one of the key indexes of cementitious materials, which directly relates to the reliability of cementitious segments in building structures [18]. Despite the relative gain in flexural strength, PMC frequently causes considerable, even severe, loss in compressive strength. The loss in compressive strength is largely due to the inhibition of cement hydration by latex, which is further caused by the adsorption of charge-rich latex particles on clinkers [19,20].
Undesired air entrainment, which far exceeds adequate values if no proper defoaming is conducted, is another adverse effect of polymer latexes; it not only deteriorates strength but also causes workability problems [21,22]. Excessive air entrainment usually requires a considerable amount of defoamer to mitigate [23]. Considering the above problems, PMC has become less popular over the past decade. The adverse effect of PMC is largely caused by the conventional preparation technique, emulsion polymerization [24].

Preparation and Characterization of the Polyurethane-Polystyrene Latex
2.2.1. Preparation of the Latex
The latex was prepared in 2 steps: preparation of an amphiphilic polyurethane block macromonomer and subsequent emulsion polymerization using the macromonomer as a polymer building block and surfactant. PPG and HPEG were first pretreated to remove water residuals: they were vacuum (10 mmHg) dried at 100 °C for 24 h. Styrene was vacuum distilled (3-5 kPa) to remove inhibitors. Condensation of the amphiphilic polyurethane macromonomer was conducted in a dry environment. Firstly, 40 g of PPG-4000 was added to a flask and cooled to below 20 °C. Then, 3.80 g of TDI was added to the flask and stirred for 10 min, 0.02 g of dibutyl tin dilaurate was added, and the system was then heated to 50 °C and stirred for 6 h. Afterward, 28.4 g of HPEG, pre-heated to 60-65 °C to maintain its liquid form, was added dropwise to the flask over 10 min. The system was kept at 50 °C under stirring for another 6 h. After the condensation, the macromonomer (60 g) was poured into a beaker and cooled to below 20 °C; the resultant solid was broken into pieces and dissolved in 40 g of styrene at 10-20 °C. The viscous solution was then dispersed in 500 mL of distilled water at 600 rpm for 30 min. Upon full dispersion, 0.85 g of potassium persulfate was dissolved in the monomer dispersion; the dispersion was then heated to 55 °C and purged with N2, and 50 mL of a solution containing 0.11 g of sodium hydrogen sulfite was added by a peristaltic pump at a rate of 0.33 mL/min. After the addition, the system was kept at 55 °C for another 0.5 h. Finally, unreacted monomers were removed by vacuum (3-5 kPa at 30 °C for 2 h), and the resultant latex was stored for further use. The chemical route for the preparation is demonstrated in Figure 1. A reference styrene-butyl acrylate latex (noted as St-BA in the following, Text S1) was also prepared; the detailed procedure is available in the Supplementary Information.

Characterization of the Latex
After preparation, the solid content of the nano latexes was first measured, and the conversion rate was roughly estimated. The dispersion stability of the latexes was verified using saturated Ca(OH)2 solution to simulate a pore-solution environment; the latexes were diluted into 0.1% dispersions for better observation. The size of the particles in the latexes in different environments was measured by dynamic light scattering (DLS, Type CGS-3, ALV Co., Langen, Germany); samples were prepared as 0.05% (w/w) dispersions with ultrapure water and saturated Ca(OH)2 solution. The morphology of the latexes was verified by SEM (FEI Quanta 250, 15 kV, 50,000×); the latexes were also sampled at 0.05% to inhibit membrane formation.
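The DLS sizing described in the methods rests on the Stokes-Einstein relation, R_h = k_B·T/(6·π·η·D), which converts the measured diffusion coefficient D into a hydrodynamic radius. A minimal sketch; the default temperature and viscosity assume water at 20 °C and are illustrative, not instrument settings from this study:

```python
# Sketch of the Stokes-Einstein relation underlying DLS particle sizing.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(D, T=293.15, eta=1.0016e-3):
    """R_h in meters; D: diffusion coefficient (m^2/s),
    T: temperature (K), eta: solvent viscosity (Pa*s)."""
    return K_B * T / (6 * math.pi * eta * D)
```

For water at 20 °C, a diffusion coefficient of roughly 5.4e-12 m²/s corresponds to an R_h of about 40 nm, the order of magnitude reported for PS-PU.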
The zeta potential of the latex dispersions was measured with a DT-300 zetaprobe (Dispersion Technology Inc., New York, NY, USA) at the intrinsic pH of the as-prepared latex. Fourier transform infrared (FT-IR) spectra of the samples were acquired on an FT-IR spectrometer (Type Nicolet 370, Thermo Fisher Co., Waltham, MA, USA).

Mortar Testing
The mortars in this study were prepared following the procedures of GB/T 17671-1999 [35]. The water/binder ratio (w/b) was 0.4, and the binder/sand ratio was 1:2.7. The mix design of the latexes and references in cement composites is presented in Table 2. The flow of the mortars was regulated to 160 ± 5 mm. After mixing, fluidity and density were measured; then, supplementary defoamer (0.02-0.10 g) was added to the latex-added mortars, and the mortar was remixed at a high stirring rate for 15 s. The process was repeated until the density plateaued, which is necessary to avoid disturbance from excessive air in subsequent tests. The mortars were then cast and cured at 20 ± 1 °C and 95% relative humidity. Three batches of mortar prisms were prepared for ages of 1 d, 7 d and 28 d; the flexural and compressive strength of the mortars was tested afterward.

Paste Preparation
Unless specifically noted, the pastes were prepared according to GB/T 8077-2012 [36] at a w/c of 0.4. The pastes here were prepared with cement replaced by 2% and 4% of the latex samples and an appropriate amount of PCA-VIII and PXP-I to regulate the flow (200 ± 5 mm) and air entrainment (with the blank as reference). The admixtures were added to the water phase prior to mixing with cement. The setting time of cement paste, which characterizes the point of early hydration-product network formation, reflects the rate of early cement hydration and further indicates the impact of cement admixtures. The setting time of the pastes was measured according to the procedures in GB/T 1346-2011 [37].

Characterization of Early Age Hydration
Hydration heat evolution at early ages was characterized by isothermal calorimetry (IC).
In the tests, about 13.8 g of the paste (prepared according to Section 2.4.1) was accurately weighed into a plastic vial. The vial was sealed and placed in a TAM Air isothermal calorimeter to measure heat development for 24 h at 20.0 °C. The zeta potential of latex-modified pastes was measured using the DT-300 zetaprobe, as described in Section 2.2.2; the pastes were prepared by the procedure in Section 2.4.1 and were measured directly. Adsorption of latex on cement particles at the start of hydration was characterized by assessing the latex remaining in the supernatant, measured by total organic carbon (TOC) analysis. For more convenient extraction of the supernatant, the w/c here was 1.0. In the experiment, 50 g of cement with 1.0%, 2.0%, 4.0% and 6.0% latex addition was mixed with 50 g of water; the mixture then underwent the same procedures as in Section 2.4.1. After mixing, the paste was centrifuged at 3000 rpm for 10 min (as pre-tested, this setting is sufficient to separate the cement while keeping latex particles in the supernatant), and the supernatant was collected, diluted and tested on a Multi N/C 3100 TOC analyzer (Analytik Jena, Jena, Germany).

Characterization of Hydration at Later Ages
In the experiments, 25 g of paste was cast in a 50 mL plastic vial and sealed thereafter. After the target curing times (7 d, 28 d) at 20 °C, samples were demolded, and the outer layer (1 mm thickness) was removed. Samples for SEM scans were split into lumps of 3-5 mm, and hydration was stopped by 24 h isopropanol (A.R.) treatment for 3 cycles, after which the samples were dried in a vacuum at 30 °C and sealed under N2. Samples used for the XRD and TGA tests were ground and treated with isopropanol by the above process. After treatment, the sample powders were collected, sieved (180-mesh), vacuum dried (30 °C), and sealed in N2-filled tubes.
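The depletion logic behind the TOC measurement, where latex missing from the supernatant is counted as adsorbed on the cement, can be sketched as follows. The reference TOC value, masses, and percentages are illustrative placeholders, not data from this study:

```python
# Hedged sketch of a TOC-depletion adsorption estimate:
# latex absent from the supernatant is attributed to adsorption on cement.

def adsorbed_fraction(toc_reference, toc_supernatant):
    """Fraction of dosed latex removed from solution.
    toc_reference: TOC of a cement-free latex solution at the same dose."""
    return (toc_reference - toc_supernatant) / toc_reference

def adsorbed_mg_per_g(latex_dose_mg, cement_g, toc_reference, toc_supernatant):
    """Adsorbed latex normalized by cement mass (mg/g)."""
    return latex_dose_mg * adsorbed_fraction(toc_reference, toc_supernatant) / cement_g

# e.g. a 2% dose on 50 g cement = 1000 mg latex; if the supernatant retains
# 80% of the reference TOC, 20% of the dose (4 mg per g cement) is adsorbed.
```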
SEM observations were conducted in an FEI scanning electron microscope (Type Quanta 250, FEI Co., Hillsboro, OR, USA) with an acceleration voltage of 15 kV. An X-ray diffractometer was used for XRD (Type D8 Advance, Bruker Co., Rheinstetten, Germany). Before testing, 10% of α-Al2O3 was introduced as an internal reference. The spectra were analyzed using the Rietveld method implemented in TOPAS.

Preparation and Characteristics of Polymeric Nanoparticles
The physicochemical properties of the latexes are shown in Table 2. According to Table 2, the conversion rate of both latexes is higher than 90%, which confirms successful preparation. The surface properties of the latex particles were then assessed by zeta potential analysis; the data for the latexes at their intrinsic pH are also presented in Table 3. Unlike traditional latexes, the zeta potential of PS-PU (−5.2 mV) is close to 0, which can be attributed to its non-ionic nature, while that of St-BA is −27.3 mV. These data confirmed the non-ionic nature of PS-PU. Figure 2a shows the dispersion of the latexes in different media; as can be observed, PS-PU remained stable in the Ca(OH)2 solution after 72 h, while segregation had begun in St-BA's Ca(OH)2 solution, as a ring of precipitate appeared around the surface. The mean hydrodynamic radius (Rh) of the PS-PU latex is 40.3 nm in water, which is small compared with traditional latexes (50-100 nm in water) [5,13]; the Rh of the St-BA latex (~80 nm) fell in this range. In the Ca(OH)2 solution, the difference between the two latexes became more distinct, as the Rh of PS-PU only increased to 75.3 nm while the Rh of St-BA had more than quadrupled. The DLS data clearly showed the superior dispersion stability of PS-PU. SEM images of the latexes are shown in Figure 2b. As can be observed, PS-PU was present as free particles and small clusters, while St-BA formed stripe-like membranes, and the size of the remaining St-BA particles was also larger.
Characteristic peaks of the polymeric segments in PS-PU and St-BA, including aromatic, alkyl (the backbone in St-BA and PEG/PPG in PS-PU) and carbonyl groups, can be found in the FT-IR spectra of the samples in Figure 2c. The intensity of the characteristic peaks showed considerable deviation, which is mainly due to the difference in abundance between the two materials: the much smaller amount of aromatic and carbonyl groups in PS-PU (from the TDI segments), compared with those in St-BA (from the phenyl group in St and the carboxylate group in BA, respectively), resulted in low peak intensities. The characterizations of the latexes suggested the non-ionic nature of PS-PU, which may inhibit its adsorption on cement particles by electrostatic force. The inhibition, in turn, ensured its stability in the pore-solution environment. Additionally, the covalent binding of the surfactant groups (PEG) in PS-PU can further improve its stability by avoiding destabilization caused by surfactant desorption.

Mortar Fresh Properties
The effect of the nano latexes on fresh mortar is shown in Table 4. As the results suggest, the addition of PS-PU slightly improved the fluidity of the fresh mortars, and the improvement increased with PS-PU dosage. The improvement is due to the relative decrease in binder content and the water-reducing effect of PS-PU, which is especially prominent at high dosage. The water-reducing effect is likely due to its structure, i.e., a polymer core with surface PEG chains, which resembles polycarboxylate cement dispersants to some extent. Steric hindrance from the PEG chains on PS-PU may be the main contributor to its water-reducing effect; additionally, previous research has reported a water-reducing effect of PEG-modified nano polymer latexes [38]. Compared with PS-PU, St-BA latex also improved the fluidity of the mortars for the same reason, but the degree of improvement was weaker, which may be due to adsorption of the latex particles on cement and destabilization caused by surfactant desorption. Excessive air entrainment caused by surfactants in the latex has always been a major drawback of PMC; a considerable amount of defoamer is required to mitigate the effect. The air-entraining ability of the latexes was characterized by the density of the mortars without defoamer addition and by the amount of defoamer required to bring the density of the samples as close as possible to that of the blank.
As the results in Table 4 suggest, the air-entraining effect of PS-PU is unusually low. Compared with St-BA latex, its density loss is insignificant at low dosages (1%, 2%; from 2.29 × 10³ kg m⁻³ to 2.28 and 2.27 × 10³ kg m⁻³, respectively) and not pronounced at high dosages (4%, 6%), amounting to only 24-37% of the St-BA latexes' density loss before defoaming. The amount of defoamer needed to mitigate the effect is also much lower, at 13-18% of that for St-BA latex.

Mortar Strength

The strength of the mortar samples with 1-6% latex addition at 1-28 d is shown in Figure 3. As the results suggest, both latexes showed a retarding effect at an early age (1 d), yet the effect of St-BA is more pronounced, with the 1 d compressive strength decreased by 27-65% over the 1-6% dosage range, while the strength decrease for PS-PU is only 18-41% at the same dosages. The strong retarding effect of St-BA is due to the hydrolyzation of the acrylic esters within, which exposes the carboxyl groups in the polymer chain and enhances its adsorption on cement particles; the stronger adsorption further results in retardation. Compared with St-BA, the lower strength loss of PS-PU-modified mortars can be attributed to its non-ionic and non-hydrolyzable structure, and the decrease is mainly caused by the relative decrease in binder content. Compared with compressive strength, the flexural strength decrease of the samples was relatively smaller, which is characteristic of cementitious materials modified by polymer latexes. The smaller flexural strength loss, and thus higher flexural/compressive strength ratio, is due to the formation of a composite organic/inorganic network made up of polymer and hydration products. At later ages, the compressive strength loss of the mortars gradually lessened. Between the two latexes, the strength loss of St-BA is still more significant, at 13-48% at 7 d and 8-31% at 28 d.
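The percentage figures quoted above are simple relative losses against the blank sample. A minimal sketch in Python: the blank and PS-PU densities are from the text, while the strength values are purely hypothetical placeholders used only to illustrate the same calculation.

```python
# Minimal sketch: the percentage losses quoted above are relative changes
# against the blank mortar. The blank density (2.29e3 kg/m^3) and the PS-PU
# densities are from the text; the strength values below are hypothetical.

def loss_vs_blank(blank, modified):
    """Relative loss (%) of a property compared with the blank sample."""
    return (blank - modified) / blank * 100

print(loss_vs_blank(2.29e3, 2.28e3))  # ~0.44 % -> insignificant density loss
print(loss_vs_blank(2.29e3, 2.27e3))  # ~0.87 %

# The same helper applies to the strength comparisons: a hypothetical 20 MPa
# blank reduced to 13 MPa is a ~35% loss, inside the 27-65% range reported
# for St-BA at 1 d:
print(loss_vs_blank(20.0, 13.0))      # ~35
```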
The strength distribution of the samples at 28 d is interesting: the flexural strength decrease for St-BA is pronounced at low dosages (1%, 2%) but lessened at high dosages (4%, 6%), while the flexural strength loss for PS-PU is insignificant at the low dosages but becomes prominent at the high dosages. This results in a high flexural/compressive ratio for PS-PU at low dosage and for St-BA at high dosage. The different flexural strength variation may be due to the difference in dispersion stability and film formation between the two latexes: at low dosages, St-BA was unable to form a film network throughout the binders, so its adverse effect on strength was not mitigated by the formation of an organic/inorganic network, whereas PS-PU is more stable and affects strength less at these dosages; at high dosages, the St-BA network can form, but the PS-PU film is still hard to form because of its stability. As for compressive strength, the decrease is still smaller for PS-PU due to its stability.

Interaction in Early Ages

Setting times of the pastes are listed in Table 5; the results confirmed the retarding effect of the latexes. The initial setting of pastes with PS-PU addition is 260 min and 350 min, respectively, postponed by 70 and 160 min, and the final setting was further delayed by 80 and 220 min. Compared with PS-PU, the setting-time delay of St-BA is even more serious: the initial set at 2% dosage is already 360 min, reaching 520 min at 4% dosage, although the interval between the initial and final set is relatively narrower.

IC curves of the latexes are presented in Figure 4a. The curves further confirmed the retarding effect found in the strength tests. The main peaks of both latexes were postponed, and the degree of delay rose with dosage. Additionally, the setting times deduced from the curves were in agreement with the results from direct measurements on pastes.
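The peak delay read off the IC (isothermal calorimetry) curves can be quantified programmatically. A minimal sketch with synthetic heat-flow curves; the curve shapes and peak positions below are hypothetical, not the measured data.

```python
# Sketch (hypothetical data): the retardation seen in the IC curves can be
# quantified by locating the main hydration peak of each heat-flow curve and
# comparing its position with the blank paste.
import numpy as np

def main_peak_time(t, q):
    """Time (h) of the maximum heat-flow value, i.e., the main hydration peak."""
    return t[int(np.argmax(q))]

# Synthetic heat-flow curves: a blank paste peaking at ~10 h and a latex-
# modified paste peaking later, at ~14 h (purely illustrative shapes).
t = np.linspace(0, 72, 721)                      # hours, 0.1 h resolution
blank = np.exp(-((t - 10) / 4) ** 2)
modified = 0.8 * np.exp(-((t - 14) / 5) ** 2)

delay = main_peak_time(t, modified) - main_peak_time(t, blank)
print(f"main-peak delay: {delay:.1f} h")         # ~4.0 h for these curves
```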
Figure 4. Interaction of the latexes with cement at early ages: (a) hydration heat evolution; (b) zeta potential evolution; (c) adsorption of the latexes on cement from 4 min to 2 h.

Early-age interaction between the latexes and cement particles was assessed by zeta potential and adsorption tests, and the results are shown in Figure 4b,c. As the zeta potential results suggest, the variation tendency of pastes with latex addition in the first 30 min differs between PS-PU and St-BA: the zeta potential values of St-BA-modified pastes were far more negative than those with PS-PU modification. The zeta potential of the paste with 2% PS-PU dropped from 14.7 to 13.9 at 4 min and from 17.3 to 16.6 at 30 min, while that of the paste with 2% St-BA dropped from 14.7 to 11.7 at 4 min. At 4% dosage, the zeta potential change of PS-PU-modified cement was still close to that at 2% dosage, while increasing St-BA from 2% to 4% resulted in a significant zeta potential decrease. The different trends in zeta potential variation may be due to the difference in ionic properties, which further leads to different adsorption affinities for cement particles. This assumption was verified by adsorption assessment using TOC. As the data in Figure 4c suggest, the adsorption of PS-PU by cement is much lower than that of St-BA, at only 40-63% of the latter at 2 h, and the adsorption was less affected by dosage and time, with a much smaller increase with dosage and a milder increase over time. The lower adsorption of PS-PU is apparently due to its non-ionic and non-hydrolyzable properties. These data confirmed the previous suggestion.

Interaction in Later Ages

SEM images of the pastes at 7 d and 28 d are shown in Figure 5. As the images suggest, the difference between the samples was not very significant; there seem to be fewer pores in the PS-PU- and St-BA-modified samples, especially the PS-PU-modified ones. No clear film or membrane formation was observed in the images, which may be due to the relatively low dosage of the latexes.
Figure 6a,b. The contents of C3S, C2S and portlandite (i.e., the usual crystalline form of Ca(OH)2 as a cement hydration product) are also shown in Figure 6c. To evaluate the degree of hydration, the ratio between portlandite and C3S/C2S (denoted CxS in the following; Text S1) was calculated. As the data suggest, the degree of hydration was inhibited by the addition of both latexes, as the CH/CxS ratio fell from 0.94 in the blank by 5-15% in the PS-PU-modified samples and by 20-22% in the St-BA-modified samples. At 28 d, the decrease is 11-15% for PS-PU and 15-28% for St-BA. Still, the decrease in portlandite content for PS-PU is lower than that for St-BA, indicating weaker hydration inhibition at later ages, which may be due to the lesser film formation of PS-PU.

Discussion

As the above data suggest, the latexes exhibited distinctly different impacts on the fresh properties, hydration and strength of mortar. The difference may arise from the dispersion stability of the latexes. As the characterizations in Section 3.1 suggest, PS-PU is highly stable in a cementitious environment due to its non-ionic nature, while St-BA gradually loses its stability due to desorption of surfactants and hydrolyzation of its ester groups.
The superior stability of PS-PU made it less likely to destabilize and adsorb on cement particles, which in turn explains its milder impact on workability. Additionally, the surfactant groups on PS-PU are highly unlikely to unbind (rather than desorb, as they are covalently bound) and transfer onto cement particles, which almost eliminated its air-entraining effect. The lower affinity for cement also led to weaker hydration inhibition, as the cement particles were less hindered by adsorbed latex particles or surfactants from the latex, as the results from Sections 3.2 and 3.3.1 suggest. The mechanism behind PS-PU's and St-BA's different impacts on cement workability and strength is illustrated in Figure 7.
Finally, St-BA's instability still bears some advantages: it eases film formation within the binder network at later ages, which is beneficial for flexural strength. PS-PU's superior stability may, conversely, limit its contribution to flexural strength improvement, as the results from Section 3.2 suggest.

Conclusions

In this work, a novel non-ionic, non-hydrolyzable polystyrene-polyurethane (PS-PU) composite latex was prepared using a polymerizable polyethylene glycol ether. Compared with traditional styrene-butyl acrylate latex (St-BA), PS-PU is smaller in size, exhibits superior dispersion stability, and remains stable in saturated Ca(OH)2 even after 72 h. In fresh-mixed mortars, PS-PU showed little adverse effect on workability and insignificant air entrainment, with little defoamer consumption. The retardation effect of PS-PU is also much milder than that of traditional St-BA. As for strength, PS-PU showed a smaller adverse effect on early- and late-age compressive strength, but its effect on flexural strength is not as pronounced as that of St-BA at high dosages (4%, 6%). The different behavior of PS-PU and St-BA in cementitious materials can be explained by their different adsorption behavior and surface-charge properties, as the characterization results suggest. The non-ionic and non-hydrolyzable nature of PS-PU makes it less prone to destabilization and adsorption, which manifests as the aforementioned behavior in cementitious systems. The difference can further be ascribed to the difference in their polymeric structure and properties.
In summary, the results of this study suggest that the stability of latexes can greatly affect their impact on the workability and strength of cement.

Conflicts of Interest: The authors declare no conflict of interest.
Supramolecular Thixotropic Ionogel Electrolyte for Sodium Batteries

Owing to the potential of sodium as an alternative to lithium as charge carrier, increasing attention has been focused on the development of high-performance electrolytes for Na batteries in recent years. In this regard, gel-type electrolytes, which combine the outstanding ionic conductivity of liquid electrolytes and the safety of solid electrolytes, demonstrate immense application prospects. However, most gel electrolytes not only need a number of specific techniques for molding, but also typically suffer from breakage, leading to a short service life and severe safety issues. In this study, a supramolecular thixotropic ionogel electrolyte is proposed to address these problems. This thixotropic electrolyte is formed by the supramolecular self-assembly of a D-gluconic acetal-based gelator (B8) in an ionic liquid solution of a Na salt, and it exhibits moldability, a high ionic conductivity, and a rapid self-healing property. The ionogel electrolyte is chemically stable to Na and exhibits a good Na+ transference number. In addition, the self-assembly mechanism of B8 and the thixotropic mechanism of the ionogel are investigated. The safe, low-cost and multifunctional ionogel electrolyte developed herein supports the development of future high-performance Na batteries.

Introduction

In the last few decades, lithium (Li)-ion batteries (LIBs) have been extensively explored due to their outstanding energy and power densities [1-3]. However, with the successful commercialization of LIBs, serious safety concerns about ignition or explosions are gradually emerging [4], and the global Li supply for LIBs is depleting [5-10]. Thus, alternatives to LIBs have become a hot topic [11]. Considering the high abundance, low cost, and relatively low atomic mass (compared with Mg, Zn, etc.)
of sodium, as well as its similar chemical properties to those of Li, sodium (Na)-ion batteries (NIBs) have recently become a research focus [12,13]. However, the performance of NIBs is still far inferior to that of LIBs, and it is imperative to further investigate electrode materials and electrolytes for NIBs [12,14-17]. The type of electrolyte is key to achieving optimal NIBs, because the electrolyte acts as the medium for charge transfer in batteries, which determines the current density, time stability, and safety or reliability of the battery [18]. Typical organic solvents such as propylene carbonate and ethylene carbonate containing Na salts exhibit a high ionic conductivity, a wide electrochemical stability window, a relatively high capacity, and a stable anode-passivation layer; however, their flammability and high vapor pressure lead to endless security problems [14,16,19]. In this regard, all-solid-state electrolytes have been considered a promising alternative to address the safety issues because of their non-flammability, high energy densities, and possible operation over wide voltage and temperature ranges [20]; however, their intrinsic drawbacks, including low ionic conductivity, poor air compatibility, narrow electrochemical stability, and the grain-boundary resistance arising from solid-solid interfaces, have not yet been resolved [16,18]. In recent years, considerable attention has been focused on the development of gel electrolytes, which exhibit good ionic conductivity due to the liquid-like ion-transport mechanism and good interfacial contact between electrodes and electrolytes due to mechanical softness and good wettability [18,21,22].
Moreover, gel electrolytes can effectively resist external mechanical stimulation and prevent electrolyte leakage [23-25]. Generally, gel electrolytes are more ideal electrolyte materials than their liquid and solid counterparts because of their safety, high ionic conductivity, and good electrochemical properties [26]. In previous studies, we developed a series of D-gluconic acetal-based gelators that can harden a wide variety of ILs; the conductivity and rheological properties of the resultant ionogels were systematically investigated, and the application potential of several outstanding ionogels in flexible displays, intelligent conductivity, and lubricants was explored [27,28]. Herein, a novel supramolecular thixotropic ionogel electrolyte for Na batteries is described. A room-temperature ionic liquid (IL), viz. butylmethylpyrrolidinium bis(trifluoromethylsulfonyl)imide (BMPTFSI, Chart 1), was selected as the electrolyte substrate because its cathodic stability limit (associated with the reduction of BMP+) is beyond the Na plating/stripping reaction, and TFSI− can withstand a high potential (>5.0 V vs. Na) [29]; moreover, the excellent properties of ILs, such as non-flammability, a low vapor pressure, wide electrochemical stability, and high thermal stability, guarantee the safety of the electrolyte material [30,31]. A D-gluconic acetal-based gelator (B8, Chart 1) was selected to gel the IL [27]. Fortunately, the resultant supramolecular ionogel electrolyte based on B8, BMPTFSI, and NaTFSI (Chart 1) exhibited unprecedented performance, including favorable ionic conductivity, a good Na+ transference number (TNa+), high mechanical strength, chemical stability, thixotropy, and a rapid self-healing property, indicative of a novel intelligent gel electrolyte with application potential for sodium batteries.
Thixotropy, as well as stimuli-responsive liquid/solid phase transitions, is significant for the molding of materials in several industrial processes, including paints, ceramic sols, and device assembly [32]. Moreover, the self-assembly and thixotropic mechanisms of the supramolecular ionogel electrolyte were investigated. Addressing the shortage of lithium resources and the bright prospects of NIBs, our studies on developing a multifunctional ionogel electrolyte provide some insights into the future development of high-performance supramolecular ionogel electrolytes for NIBs.

Chart 1. Chemical structures of B8, BMPTFSI, and NaTFSI.

Preparation of the B8-BMPTFSI Ionogel and Insights into the Mechanism of Self-Assembly and Thixotropy

To prepare a B8-BMPTFSI ionogel, an appropriate amount of B8 was added into a certain volume of BMPTFSI, which was stirred at 60 °C. After B8 was completely dissolved, the hot solution was cooled to 25 °C for 20 min, affording the B8-BMPTFSI ionogel. The critical gelation concentration (CGC) of the B8-BMPTFSI ionogel was 0.7% (w/v), and the ionogel was thermally reversible; the sol-gel phase-inversion temperature (Tg) of the 0.7% (w/v) B8-BMPTFSI ionogel was 76.4 °C. The B8-BMPTFSI ionogel exhibited hysteresis-free thixotropic behavior, and the corresponding rheological data (Figure 1a) indicated that the gel can recover itself after damage by shear force, and that the recovery can be repeated.

1H NMR spectral analysis of B8 at variable concentrations in DMSO-d6 was also conducted in the presence of BMPTFSI. At 5% (w/v) B8, the N-H signal of the amide group was observed at 7.470 ppm, N-H signals of the urea group were observed at 5.699 ppm, and different O-H signals were observed at 4.718 and 4.451 ppm. However, with the increase in the B8 concentration, all the above-mentioned signals shifted downfield, indicating that the protons of the amide, urea, and hydroxyl groups are involved in hydrogen-bond formation [34].

X-ray diffraction (XRD) patterns of the B8-BMPTFSI ionogel (Figure 1d) were recorded to obtain information regarding the self-assembly of B8. Three clear reflection peaks corresponding to d-spacings of 29.57 Å, 14.51 Å, and 9.87 Å in a ratio of 1 : 1/2 : 1/3 were observed, suggesting the presence of a lamellar structure with a periodicity of 29.57 Å in the gel state [35]. The peak corresponding to a d-spacing of 3.56 Å is characteristic of the π-π stacking distance, indicative of π-π interactions between B8 molecules [36]. The peaks with d-spacings of 4.02 and 4.50 Å corresponded to the hydrogen-bonding distance of A8 molecules and the packing distance between the long alkyl chains of A8 molecules, respectively [37]. Attempts were made to prepare a B8 crystal to investigate the self-assembly mode of B8; however, all attempts were futile. Fortunately, the crystal of Z1, a precursor compound of G16 and A8, was obtained from an H2O/methanol (1:1) solution, and its single-crystal XRD data were collected (Figure S2, Table S1, and CIF S1): assemblies of D-gluconic acetal fragments along the 1D direction were observed.
Based on the single-crystal XRD data of Z1 and theoretical calculations (Gaussian 09), the self-assembly mode of B8 was optimized and simulated (CIF S2). Figure 1e,f shows the calculated assembly mode of B8: the H-bonding interactions of the polyhydroxy fragments and the π···π interactions of the chlorinated benzenes undoubtedly played a key role in the formation of the 1D assembly [27,37], and the two chlorine atoms on the benzene ring played a subtle role in the supramolecular assemblies, which is well known to involve dispersive halogen interactions (halogen-arene and halogen-halogen interactions) [38,39]. In addition, C=O···H-N hydrogen-bonding interactions between side chains were observed, which were also responsible for the 1D assembly of B8 (Figure 1f); their bond lengths were 3.983 Å, corresponding to the d-spacing of 4.02 Å in the XRD experiment (Figure 1d) [40]. C-Cl···H-N interactions (Figure 1e) [37,41] and VDW forces between side chains played a key role in the 3D assembly of B8. As shown in Figure 1e,f, the distance between side chains, the width of the 1D assembly, the distance of the layered structure, and the length of the B8 molecule were 4.683 Å, 10.019 Å, 14.578 Å, and 29.843 Å, respectively, corresponding to d-spacings of 4.50 Å, 9.87 Å, 14.51 Å, and 29.57 Å in the XRD spectrum (Figure 1d). Although slight deviations were observed, this was sufficient to confirm the reliability of our calculation results. In conclusion, the 1D self-assembly of the B8 gelators depended on π-π stacking between benzene rings, hydrogen bonding between polyol fragments, and C=O···H-N hydrogen-bonding interactions between side chains. The C-Cl···H-N interactions and VDW forces between side chains were responsible for the 3D self-assembly of the B8 assemblies. In addition, the re-contact and re-assembly of the functional groups on the 1D assembly surface after mechanical damage is the essence of thixotropy [42,43].
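The lamellar assignment rests on the reflections following a 1 : 1/2 : 1/3 d-spacing sequence. A short check of that sequence, plus the Bragg-law conversion of each d-spacing to a 2θ position; Cu Kα radiation (λ = 1.5406 Å) is an assumption for illustration, since the text does not name the X-ray source.

```python
import math

# Sketch: check that the three reflections reported for the B8-BMPTFSI ionogel
# (d = 29.57, 14.51, 9.87 Å) follow the 1 : 1/2 : 1/3 sequence expected for a
# lamellar phase, and convert each d-spacing to a 2-theta position via Bragg's
# law, n*lambda = 2*d*sin(theta).

WAVELENGTH = 1.5406  # Å, assumed Cu K-alpha (not stated in the text)

def two_theta(d, wavelength=WAVELENGTH):
    """Bragg angle 2-theta (degrees) for a given d-spacing (first order)."""
    return 2 * math.degrees(math.asin(wavelength / (2 * d)))

d_spacings = [29.57, 14.51, 9.87]
d1 = d_spacings[0]

for n, d in enumerate(d_spacings, start=1):
    # A lamellar phase gives d_n = d_1 / n for the n-th order reflection.
    print(f"order {n}: d = {d:.2f} Å, lamellar d1/{n} = {d1 / n:.2f} Å, "
          f"2θ ≈ {two_theta(d):.2f}°")
```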
Influence of Cations on the B8-BMPTFSI Ionogel

The internal network of supramolecular gels is based on physical interactions between molecules, which are sensitive to external stimuli, including charged ions. The influence of cations on the 2% (w/v) B8-BMPTFSI gel was tested by the addition of different salts with TFSI− counter-ions at a 1 M concentration (Figure 2). The gel was completely destroyed after the addition of Mg2+ salt for 5 h, of Ca2+ salt for 11 h, and of Li+ salt for 48 h (mainly due to the destruction of intermolecular hydrogen bonds by the cations) [44]. However, the gel maintained its original state for even 1 week after the addition of Na+ and K+. Obviously, the influence of cations on the B8-BMPTFSI gel decreased in the order Mg2+ > Ca2+ > Li+ > Na+ ≈ K+. This cation series fitted perfectly into the Hofmeister series: the ions on the left tend to stabilize the native structure of a structural fragment and decrease its solubility, while those on the right tend to facilitate protein denaturation and increase solubility [45,46]. Fortunately, Na+ did not cause B8 to lose its ability to self-assemble in BMPTFSI, thereby laying the foundation for the preparation of a sodium-ion gel electrolyte based on B8. Herein, NaTFSI was selected as the source of Na+ in the ionogel electrolyte, and the CGCs of B8 required to gel the BMPTFSI solution at different NaTFSI concentrations were tested (Table 1). Clearly, with the increase in the sodium concentration of the IL, the B8 concentration required to form a gel also increased. However, with the increase in the NaTFSI and B8 concentrations, the conductivity of the resultant ionogel electrolyte decreased.

To investigate the reason for the decrease in the conductivity of the gel electrolyte, a series of comparative tests were conducted. A marginal change in the conductivities of B8-BMPTFSI ionogels at different B8 concentrations (from 0% to 4%, w/v) was observed (Figure 3a); however, with the increase in the NaTFSI concentration, the conductivity of the B8-BMPTFSI-NaTFSI ionogel decreased significantly (Figure 3b). Hence, the increase in the NaTFSI concentration directly leads to the decrease in electrolyte conductivity. As shown in Figure 3c,d, the morphologies of the B8-BMPTFSI gel (4% B8, w/v) and the B8-BMPTFSI-NaTFSI gel (4% B8, w/v; 0.3 M NaTFSI) were visualized via polarizing optical microscopy (POM).
Both gels were characterized by a reticular network with several micropores, but the micropores of the B8-BMPTFSI-NaTFSI gel were far smaller than those of the B8-BMPTFSI gel. Moreover, according to the scanning electron microscope (SEM) images of the xerogels (Figure S3), both gels were also characterized by a reticular network, and the micropores of the xerogel of the B8-BMPTFSI-NaTFSI gel were smaller than those of the B8-BMPTFSI gel. Smaller micropores were not favorable for ion movement, leading to the decrease in conductivity [47]. In conclusion, the addition of NaTFSI produces a denser microstructure than the original gel [48], which in turn decreases the conductivity of the gel electrolyte.
Practical Performance of the B8-BMPTFSI-NaTFSI Ionogel

Outstanding mechanical characteristics and electrochemical performance are crucial factors when considering such a supramolecular ionogel for different applications. The B8-BMPTFSI-NaTFSI ionogel exhibited excellent self-healing and thixotropic properties. As shown in Figure 4a, the B8-BMPTFSI-NaTFSI ionogel (4% B8, w/v; 0.3 M NaTFSI) exhibited a self-standing property, allowing molding into a column, and two pieces of the ionogel merged into one integrated block within minutes of being brought into contact, sufficiently illustrating the self-healing property. In addition, the B8-BMPTFSI-NaTFSI ionogel (3% B8, w/v; 0.1 M NaTFSI) exhibited shear-thinning and recovered immediately after removal of the shear force (Figure 4b). A step-strain measurement of oscillatory rheology for the B8-BMPTFSI-NaTFSI ionogel (4% B8, w/v; 0.3 M NaTFSI) was performed (Figure 4c): complete recovery of the storage modulus was observed following strain-induced damage; moreover, the rate and extent of recovery were almost unchanged over two cycles of breaking and self-repairing (Figure 4d), indicative of the reversible and durable properties of the ionogel [49]. Further analysis of the rheological data showed a high G'/G" ratio (G'/G" = 5.36) and a high yield stress (14.4 Pa, as shown in the dynamic stress sweep in Figure S4) for the B8-BMPTFSI-NaTFSI ionogel (4% B8, w/v; 0.3 M NaTFSI), indicative of the good mechanical stability of the ionogel [27,49,50]. Figure 5A shows the temperature dependence of the conductivities of the B8-BMPTFSI and B8-BMPTFSI-NaTFSI ionogels.
Although the concentrations of B8 and NaTFSI were different, the relationship between the ionic conductivity of all ionogels and 1000/T exhibited good linearity, indicative of a good fit to the classical Arrhenius equation [28]. In addition, the plots in Figure 5A were fitted by the Vogel-Tammann-Fulcher (VTF) equation; the result suggests that the migration of the ions in the B8-BMPTFSI and B8-BMPTFSI-NaTFSI gels is similar to that in a viscous liquid [25]. The ionic conductivity of the B8-BMPTFSI-NaTFSI ionogel still reached 10⁻³ S/m, albeit less than that of the B8-BMPTFSI ionogel. (Figure 4 caption: step-strain measurements of the B8-BMPTFSI-NaTFSI gel (4% B8, w/v; 0.3 M NaTFSI) over 2 cycles, with an alternating strain of 0.05% and 100% at 1 Hz and 25 °C; (d) overlaid zoom of the recovery of the gel after each cycle. All colored gels are doped with crystal violet.)
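The linear conductivity-versus-1000/T behaviour described above can be illustrated with a least-squares fit of the linearized Arrhenius form. The sketch below uses synthetic data and a hand-rolled fit; it is illustrative only and not the fitting procedure used in this work.

```python
import math

def arrhenius_fit(T_kelvin, sigma):
    """Least-squares fit of ln(sigma) = ln(A) - Ea/(R*T).

    Returns (A, Ea) with Ea in J/mol, mirroring the linear
    conductivity-vs-1000/T plots reported for the ionogels.
    """
    R = 8.314  # gas constant, J/(mol K)
    xs = [1.0 / T for T in T_kelvin]
    ys = [math.log(s) for s in sigma]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R

# Synthetic conductivities (S/m) generated from A = 50 S/m, Ea = 20 kJ/mol
T = [288.0, 298.0, 308.0, 318.0, 328.0]
sigma = [50.0 * math.exp(-20000.0 / (8.314 * t)) for t in T]
A, Ea = arrhenius_fit(T, sigma)
print(round(A, 2), round(Ea))  # recovers the generating parameters
```

On exactly Arrhenius-type data the fit returns the generating pre-factor and activation energy; real ionogel data would show small residuals, and a VTF fit adds a third parameter (the Vogel temperature).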
To evaluate the chemical stability of the B8-BMPTFSI-NaTFSI ionogel, aging experiments were performed: sodium metal was immersed into the ionogel, which was then kept in an inert environment at 50 °C for 30 days. After the B8-BMPTFSI-NaTFSI ionogel had been in contact with the sodium metal at 50 °C for 30 days, no clear color change was observed, and its state was not significantly different from that of the newly prepared ionogel (Figure S5a). In addition, 1H-NMR data indicated that the signals of the main functional groups of BMPTFSI and B8 of the aged ionogel did not change (Figure S5b-e), illustrating the excellent chemical stability of the B8-BMPTFSI-NaTFSI ionogel. Furthermore, electrochemical impedance spectroscopy (EIS) was employed to measure the T Na+ of the B8-BMPTFSI-NaTFSI gel (4% B8, w/v; 0.3 M NaTFSI) using a symmetrical Na/electrolyte/Na cell. Figure 5B shows the Nyquist plots for the ionogel electrolyte before and after polarization; a model equivalent circuit derived from the Nyquist plots is shown in Figure 5C, in which R0 and R1 denote the resistances to ionic and electronic transport, respectively [51]. The calculated T Na+ value of the ionogel was 0.1835, a satisfactory value [30]. Thus, a multifunctional ionogel electrolyte suitable for NIBs has been successfully developed. The performance parameters of our ionogel electrolyte, including ionic conductivity, self-healing, and T Na+, were compared with those of other ionogel electrolytes for NIBs (from 2010 to the present, as shown in Table S2); the ionogel electrolyte of this work is the only self-healing ionogel electrolyte reported for Na batteries with high room-temperature conductivity and a good T Na+.
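The transference number quoted above follows from the DC-polarization (Bruce-Vincent) relation T Na+ = IS(ΔV − I0R0)/(I0(ΔV − ISRS)) described in the methods. A minimal sketch with purely hypothetical inputs (none of these numbers are the measured values from this work):

```python
def transference_number(dV, I0, Is, R0, Rs):
    """Bruce-Vincent estimate T+ = Is*(dV - I0*R0) / (I0*(dV - Is*Rs)).

    dV: applied polarization (V); I0, Is: initial and steady-state
    currents (A); R0, Rs: initial and final interfacial resistances (ohm).
    """
    return Is * (dV - I0 * R0) / (I0 * (dV - Is * Rs))

# Hypothetical example values for illustration only:
t_plus = transference_number(0.05, 1.2e-5, 4.0e-6, 900.0, 1100.0)
print(round(t_plus, 3))  # 0.287
```

The interfacial-resistance corrections matter: dropping R0 and Rs reduces the formula to the simpler Is/I0 ratio, which overestimates T+ when passivation layers grow during polarization.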
However, much work remains before a high-performance sodium-ion battery can be produced based on our ionogel, such as studies of the compatibility of the ionogel with different electrodes and of its stability and efficiency at different temperatures.

Conclusions

In this study, a novel supramolecular ionogel electrolyte based on a D-gluconic acetal-based gelator was prepared, with favorable ionic conductivity, a satisfactory Na+ transference number (T Na+), high mechanical strength, chemical stability, thixotropy, and a rapid self-healing property. The introduction of NaTFSI induced a denser microstructure of the ionogel, leading to decreased conductivity of the ionogel electrolyte; however, the resultant B8-BMPTFSI-NaTFSI ionogel still exhibited high conductivity. The B8-BMPTFSI-NaTFSI ionogel electrolyte exhibited excellent chemical stability toward Na metal, making it a potentially safe electrolyte material for Na batteries. The thixotropic ionogel electrolyte effectively resisted external mechanical stimulation and prevented electrolyte leakage, thus prolonging the service life of the electrolyte material. In addition, exploiting the thixotropic nature of the electrolyte allows it to be molded freely to the shape of the conducting interface, which is difficult for many electrolytes. The self-assembly and thixotropic mechanisms of the ionogel were also examined and described in detail. However, more work is needed to study the effect of ions on the self-assembly of organic low-molecular-weight gelators, so as to develop gelators with self-assembly ability in the presence of Li+ and Mg2+ and expand the application range of supramolecular ionogels in the field of metal-ion batteries.
Overall, the multifunctional thixotropic B8-BMPTFSI-NaTFSI ionogel developed here exhibited immense potential for application in high-performance Na batteries, and our studies on the self-assembly and thixotropic mechanisms provide a direction for the future design of more functionalized supramolecular ionogel electrolytes.

Synthesis of B8

The synthetic route of B8 is shown in Figure S1. Precursor B6 was synthesized according to the method reported previously [52]. First, 3.9 g (8.72 mmol) of B6 was dissolved in 40 mL of anhydrous DMSO at room temperature and then cooled to 0 °C, followed by the dropwise addition of 2 mL (11.34 mmol) of octyl isocyanate. The reaction mixture was then warmed to room temperature and stirred vigorously for 2 h before being poured into 200 mL of distilled water under vigorous stirring. A large amount of a white solid precipitated, which was collected by filtration. The filter cake was washed three times with distilled water. After drying, B8 was obtained as a white powder. Yield: 4.24 g (6.98 mmol, 80%). 13C NMR and related characterization data are shown in Figures S6-S8.

Instrumentation

Gelation tests were performed by the typical tube-inversion method. The preweighed gelator in a given solvent in a sealed test tube (inner diameter: 13 mm) was heated until dissolution, and the resultant solution was then cooled to room temperature. The test tube was inverted, and gelation was confirmed when a homogeneous substance formed with no gravitational flow. The heating temperature selected to dissolve a given gelator must be below its melting point to prevent compound deterioration. The CGC is defined as the minimum amount of a gelator required to immobilize 1 mL of solvent. The Tg was determined by the "falling ball method" described previously [27].
The xerogel was prepared by freeze-drying following solvent exchange [28]: the ionogel was immersed in water for 5 days, with the water refreshed every 5 h; solvent exchange was conducted at room temperature without stirring or ultrasonication; and the xerogel was obtained by a subsequent freeze-drying step. The following experiments and tests were conducted at room temperature and a humidity of 50% unless specified otherwise. Rheological studies of the ionogels were conducted using a rheometer (Anton Paar Physica MCR 301) equipped with a plate (15 mm diameter). The gels were equilibrated at 25 °C between the plates, which were adjusted to a gap of 0.5 mm. The frequency sweep at a constant strain of 0.05% was obtained from 0.1 to 100 rad/s. The strain sweep was performed in the 0.01-150% range at a constant frequency (1 Hz). A time sweep was conducted to observe the recovery property of the gel: first, a constant strain of 0.05% was applied to the sample; then, a constant strain sufficient to break the gel was applied, followed by the application of a constant strain of 0.05% again. The storage modulus G' and the loss modulus G" of the sample were recorded as functions of time in this experiment. All NMR spectra were recorded on a Bruker DPX 400 MHz spectrometer. Mass spectra were recorded on a TOF-QII high-resolution mass spectrometer. Morphologies of the thin gel samples on a glass plate were investigated by polarizing optical microscopy (POM, Nikon Eclipse 50i POL). The morphologies of the xerogels were obtained with a Hitachi S-4800 SEM instrument operating at 3-5 kV; the xerogels were coated with a thin layer of Au before the experiment. FT-IR spectra were recorded on an FTS 3000 spectrometer with KBr pellets. Powder X-ray diffraction (XRD) patterns were recorded on a Bruker D8-S4 instrument (Cu Kα radiation, λ = 1.546 Å). The d-spacing values were calculated by Bragg's law (nλ = 2d sin θ).
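The d-spacing calculation from Bragg's law is straightforward; a minimal sketch using the Cu Kα wavelength quoted above (the example 2θ value is illustrative, not a measured reflection from this work):

```python
import math

def d_spacing(two_theta_deg, wavelength=1.546, order=1):
    """d from Bragg's law n*lambda = 2*d*sin(theta).

    wavelength defaults to the Cu K-alpha value (1.546 angstrom) used
    for the XRD patterns; two_theta_deg is the diffractometer 2-theta.
    """
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength / (2.0 * math.sin(theta))

print(round(d_spacing(20.0), 3))  # 2-theta = 20 deg -> d = 4.452 angstrom
```

Note that the diffractometer reports 2θ, so the angle must be halved before taking the sine; smaller angles correspond to larger d spacings.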
Theoretical calculations were performed with Gaussian 09, and all of the reported structures were optimized by density functional theory (DFT) using the B3LYP functional with the 6-31G(d,p) basis set. Based on the B3LYP/6-31G(d,p) optimized geometries, the energies were further refined by single-point calculations at the B3LYP*/6-31G(d,p) level of theory. Conductivities of the ionogels were measured using a conductivity meter (DDS-307A, Shanghai INESA Scientific Instrument Co., Ltd., China). The electrode was immersed into the ionogel at a temperature above the Tg, followed by cooling at 25 °C for gelation. Conductivities at different temperatures were determined, with the temperature controlled by a thermostatic heater. The Na+ transference number (T Na+) of the B8-BMPTFSI-NaTFSI gel was determined by a DC polarization method combined with impedance spectroscopy [30]. A small polarization (ΔV, 50 mV) was applied to a symmetrical Na/electrolyte/Na cell, and then the initial current (I0) and steady-state current (IS) were measured. The initial (R0) and final (RS) resistances of the passivation layers on the Na electrodes were also examined. T Na+ can then be calculated according to T Na+ = IS(ΔV − I0R0)/(I0(ΔV − ISRS)). Author Contributions: S.C. contributed to conceptualization, methodology, validation, formal analysis, investigation, data curation, writing the original draft, visualization, and project administration; L.F., Y.F. and Z.L. contributed to resources, reviewing and editing the article, and supervision; Y.H., X.W. and Y.K. contributed to conceptualization and resources; B.X. and L.H. contributed to resources, reviewing and editing the article, funding acquisition, and project administration. All authors have read and agreed to the published version of the manuscript. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable.
Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Optimal exponentials of thickness in Korn's inequalities for parabolic and elliptic shells

We establish Korn's interpolation inequalities and rigidity results for the strain tensor of the middle surface for parabolic and elliptic shells, and show that the best constant in Korn's inequalities scales like $h^{3/2}$ for the parabolic shell and $h$ for the elliptic shell. This removes the main assumption in the literature that the middle surface of the shell is given by a single principal coordinate system and, in particular, covers the closed elliptic shell.

Introduction and Main Results

Korn's inequalities arose in the investigation of the boundary value problem of linear elastostatics [19,20] and have been proven by different authors, e.g., [7,16,17,18,28]. Some generalized versions of the classical second Korn inequality have recently been proven in [1,5,26,27]. The optimal exponential of thickness in Korn's inequalities for thin shells captures the relationship between the rigidity and the thickness of a shell when small deformations take place, since Korn's inequalities are linearized from the geometric rigidity inequalities under small deformations ([6]). Thus it is the best constant in the Korn inequality that is of central importance (e.g., [4,21,22,23,24,25]). Remarkably, the best Korn constant depends on the Gaussian curvature: for the parabolic shell it scales like h^{3/2} ([10,11]), for the hyperbolic shell like h^{4/3} ([14]), and for the elliptic shell like h ([14]). All those results were derived under the main assumption that the middle surface of the shell is given by a single principal coordinate system, in order to carry out some necessary computations. This assumption is where the properties

∇_{∂z} n = κ_z ∂z,  ∇_{∂θ} n = κ_θ ∂θ  for p ∈ S  (1.1)

hold. In the case of the parabolic or hyperbolic shell, a principal coordinate system only exists locally (Proposition 2.1); for the elliptic shell, even local existence may fail.
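Schematically (suppressing the precise norms, domains, and boundary conditions of the actual theorem), the thickness scalings discussed above can be summarized as follows, where sym∇y denotes the symmetrized gradient; this is an informal sketch, not the exact statement proved in the paper:

```latex
\[
  \|\nabla y\|_{L^2(\Omega)}^2 \;\le\; \frac{C}{h^{\alpha}}\,
  \|\operatorname{sym}\nabla y\|_{L^2(\Omega)}^2,
  \qquad
  \alpha =
  \begin{cases}
    3/2, & \text{parabolic shell } (\kappa = 0),\\
    4/3, & \text{hyperbolic shell } (\kappa < 0),\\
    1,   & \text{elliptic shell } (\kappa > 0),
  \end{cases}
\]
```

so that the optimal Korn constant, i.e. the infimum of the ratio ‖sym∇y‖²/‖∇y‖² over admissible displacements, scales like h^α, with the elliptic shell the most rigid as h → 0.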
However, the assumption (1.1) in [10,11,14] can be removed if the Bochner technique is employed to perform the necessary computations. The Bochner technique greatly simplifies the computations; see, for example, [31] or [33]. Here we remove the assumption (1.1) and obtain that the optimal exponentials are 3/2 and 1 for the parabolic shell and the elliptic shell, respectively. In particular, the closed elliptic shell is included here. The case of the hyperbolic shell is treated in [35], where we show that the optimal exponential is 4/3 without the assumption (1.1). Let M ⊂ IR³ be a C³ surface with the induced metric g and a normal field n. Let S ⊂ M be an open bounded set with a regular boundary ∂S. We consider a shell with thickness h > 0,

Ω = { x + t n(x) | x ∈ S, −h < t < h }.

Let κ be the Gaussian curvature of M. We say that Ω is parabolic if κ = 0 but Π ≠ 0 on S̄, (1.2), where Π = ∇n is the second fundamental form of M. If κ > 0 on S̄, (1.3), then Ω is said to be elliptic. Note that it can happen that ∂S = ∅, for example for a closed elliptic shell, in which case H¹₀(Ω, IR³) = H¹(Ω, IR³). All norms ‖·‖ in this paper are L²(Ω) norms unless otherwise specified. We have the following (Theorem 1.1): the Korn interpolation inequality (1.4) holds, and in particular the Korn inequality (1.5) holds for the parabolic shell. If Ω is a closed elliptic shell, then there is C > 0 such that (1.6)-(1.7) hold for any y ∈ H¹(Ω, IR³), where so(3) is the set of all 3 × 3 skew-symmetric matrices. The exponentials of the thickness in (1.5) and (1.6)-(1.7) are optimal for the parabolic shell and the elliptic shell, respectively. Remark 1.1 The interpolation inequality (1.4) is given in [10,11,14] under the assumption (1.1) and extended in [12] to the case where there is a local principal coordinate system at each p ∈ S. The inequalities (1.5) and (1.6) are given in [11,12] and [14], respectively, under the assumption (1.1).

Proof of Theorem 1.1

Let (M, g) be a Riemannian manifold. Let T be a 2-order tensor field on (M, g) and let X be a vector field on (M, g).
We define the inner multiplication of T with X to be another vector field, denoted by i(X)T. For any y ∈ H¹(Ω, IR³), we decompose y into (2.1), where u = ⟨y, n⟩ and U(·, t) is a vector field on S for |t| < h. It follows from (2.1) that ∇y can be expressed through D, where ∇ and D are the covariant differentials of the dot metric in IR³ and of the induced metric on S, respectively, and W_t = ∂_t W and w_t = ∂_t w. We need to deal with the relations between ∇ and D carefully. By defining ∇_n n = 0, we introduce a 2-order tensor p(y) on IR³ by p(y)(α, β) = ⟨∇_{∇n α} y, β⟩ for α, β ∈ IR³. Lemma 2.2 Let w ∈ H²(Ω) be a function. Then the following formulas hold true, where ∆ and div are the Laplacian and the divergence of the dot metric in IR³, respectively, and tr_g is the trace in the induced metric g on S. Moreover, ∆n is a vector field on S. Proof Let x ∈ S be given. Let E1, E2 be a frame field normal at x in S, i.e., ⟨E_i, E_j⟩ = δ_ij in some neighbourhood of x on S, (2.8). In addition, we obtain an identity which yields the formula in (ii). Using the symmetry of ∇³w and the formulas (2.8)-(2.11), we have, from which it follows, where the following formula has been used. Finally, we have, that is, ∆n ∈ S_x. ✷ Lemma 2.3 Let y ∈ H²(Ω, IR³) be given in (2.1) and let Υ(y) and X(y) be given in (2.7). Then (2.12) holds for z = x + t n ∈ Ω, where div is the divergence of the dot metric in IR³. Proof Let x ∈ S be given. Let E1, E2 be a frame field normal at x in S such that (2.8)-(2.11) hold. Then E1, E2, and n(x) form an orthonormal frame at z = x + t n(x), using (i) in Lemma 2.2, where the following formulas have been used. ✷ We need the following lemma from [13], where κ is the Gaussian curvature. In the sequel, we sometimes use the norm instead of the norm. The next lemma is the key to our analysis; it is the 3-dimensional version of [12, Lemma 4.5]. In the 2-dimensional case, [12, Lemma 4.5] establishes the inequality (2.17) without the assumption (2.16) below.
fulfills the inequality (2.17). Proof Using (2.16) and (2.14), we have, where. We integrate (2.18) in t over (h/2, h) to obtain, by (2.14). Integrating the above inequality in x over S yields, by (2.19). It follows from (iii) in Lemma 2.2 that, from which we obtain, by (vi) in Lemma 2.2. Thus we have. A similar argument yields, where p(y) is given in (2.4), and

∆w = div X(y) + tr_g i(W)DΠ − [tr_g Υ(y)]_t + 2w_t tr_g Π + w_tt.  (2.23)

It follows from (2.12) that we may integrate the above identity over Ω in z = x + t n to have, by (2.6), that is, ‖∇(w − ŵ)‖ ≤ C(‖sym I(y)‖ + h‖∇y‖). Thus we have. Let X(S) be the set of all vector fields on S. For any X, Y ∈ X(S), the curvature operator R_XY is defined as usual, where [·, ·] is the Lie product. The Ricci identity reads as a formula in which T is a k-order tensor field; this formula helps us exchange the order of the second-order covariant differentials of a k-order tensor field. Let x ∈ S be given and let e1, e2 be an orthonormal basis of M_x with the positive orientation in the induced metric g. For any W ∈ H¹(S, X(S)), we denote a 2-form σ(W) on S, where ∧_g is the exterior product of the induced metric g on S. Then σ(W) is well defined. In fact, let ê1, ê2 be another orthonormal basis with the positive orientation, and suppose that e1 = α11 ê1 + α12 ê2, e2 = α21 ê1 + α22 ê2, where I is the identity matrix in IR². Then there is a function ϕ on S, independent of the choice of orthonormal basis, where E is the volume element of the induced metric g, where ϕ is given in (2.30) and µ and τ are the outward normal and the tangent along the boundary ∂S in the induced metric g, respectively. Proof For a given W, we denote a vector field B(W) on S. Since D_τ µ = ⟨D_τ µ, τ⟩ τ and D_τ τ = −⟨D_τ µ, τ⟩ µ on the boundary ∂S, we proceed as follows. Let x ∈ S be given. Let E1, E2 be a frame field normal at x with the positive orientation. It follows from (2.30), (2.29), and (2.32) that (2.31) follows from (2.35) and (2.33).
✷ In the sequel, for a vector field W ∈ X(S), we use the notation below, where E1, E2 is an orthonormal frame on S. From (2.34), we have, where ϕ is given in (2.30). Moreover, if f is a function, we use a corresponding notation. We need the following. Lemma 2.7 Let M be of C³. Let λ(q) be a principal curvature for each q ∈ M. Let p ∈ M be given. Suppose that there is a neighbourhood N of p such that the following assumptions hold. Then there exists locally a C¹ vector field X such that ∇_X n = λX in a neighbourhood of p. Proof Let ψ : N → IR² be a local coordinate at p with ψ(q) = (x1, x2) and ψ(p) = 0. Consider the matrices below. We may assume that (λ(0) − a11(0), −a12(0)) ≠ 0. Thus the above X meets our need. ✷ For each p ∈ M, we denote by Q : M_p → M_p the rotation by π/2 in the clockwise direction, which is very useful in the case of negative curvature; see [34]. For any α ∈ M_p, the pair α, Qα forms an orthonormal basis of M_p. Proposition 2.1 Let p ∈ M be given. Suppose that there are two different principal curvatures, λ1 ≠ λ2, at p. Then there exists a local principal coordinate system ψ = x around p, i.e., ∇_{∂xi} n = λi ∂xi in a neighbourhood of p for i = 1, 2. Proof From Lemma 2.7 there is a vector field X with |X| = 1 such that ∇_X n = λ1 X in a neighbourhood of p. (2.37) Let Y = QX. Then X, QX forms an orthonormal basis. We claim there exist functions f1 and f2 with the required properties. We define a curve by α′(t) = X(α(t)) for t ∈ (−ε, ε), α(0) = p. Then (2.39) implies that ψ(η(t, s)) = (t, s) is a local coordinate such that ∂t = f1 X, ∂s = f2 Y. ✷ Next, we consider a rigidity lemma on the strain tensor of the middle surface. In the parabolic and hyperbolic cases, it has been established in [10]-[14] when the middle surface is given by a single principal coordinate system. In the case of the elliptic shell, it has been given in [3] if the middle surface consists of a single coordinate patch.
Here we treat it in a coordinate-free way, which in particular includes the case of closed elliptic shells. Proposition 2.2 Suppose Ω is a parabolic shell. Then there is C > 0 such that the stated estimate holds for any y = W + w n ∈ H¹₀(S, IR³). In the above sense, we conclude that (2.43) follows from Lemma 2.8 below. ✷ Lemma 2.8 Let Ŝ ⊂ M be such that (2.44) holds. Let p ∈ Ŝ be given and let γ > 0 be small. Then there exist a neighbourhood N of p and constants C > 0, independent of γ, and C_γ > 0, such that the estimate holds for any y = W + w n ∈ H¹₀(Ŝ, IR³). It follows from Proposition 2.3 immediately that Corollary 2.1 Let S be elliptic. (i) If |∂S| > 0, then there is C > 0 such that the estimate holds for any y = W + w n ∈ H¹₀(S, IR³). (ii) If S is a closed surface, then there is C > 0 such that, for any y = W + w n ∈ H¹(S, IR³), there exists an infinitesimal identity y0 ∈ H¹(S, IR³) satisfying the stated bounds. Proof of Proposition 2.3 Let p ∈ S be given. Let e1, e2 be an orthonormal basis of M_p with the positive orientation. Let E1, E2 be a frame field normal at p. Then ⟨E_i, E_j⟩ = δ_ij in a neighbourhood of p, and D_{Ej} E_i = 0 at p. (i) Let Ω be parabolic. From Proposition 2.1, a local principal coordinate system exists on S. In such a principal coordinate system an ansatz has been constructed in [10, Theorem 3.3]. Let p0 ∈ S be given and let σ0 > 0 be chosen accordingly, where B(p0, σ0) is the geodesic plate in the induced metric g centered at p0 with radius σ0. Let ϕ ∈ C²₀(S) be such that ϕ(p) = 1 for p ∈ B(p0, σ0). Let ρ(p) = d_g(p, p0) be the distance from p ∈ S to p0 in the induced metric g on M. We set y = W + w n, w = ϕ cos(φρ), W = −tDw, φ = 1/h^{1/2}. Denote by B(σ0) the plate in M_{p0} centered at the origin with radius σ0. Let dx be the volume element in M_{p0}. The claim then follows from the volume comparison theorem. Conflict of interest statement: There is no conflict of interest. Ethical approval: This article does not contain any studies with human participants or animals performed by the author.
Electrochemical nitrite sensing for urine nitrification Sensing nitrite in-situ in wastewater treatment processes could greatly simplify process control, especially during treatment of high-strength nitrogen wastewaters such as digester supernatant or, as in our case, urine. The two technologies available today, i.e. an on-line nitrite analyzer and a spectrophotometric sensor, have strong limitations such as sample preparation, cost of ownership and strong interferences. A promising alternative is the amperometric measurement of nitrite, which we assessed in this study. We investigated the sensor in a urine nitrification reactor and in ex-situ experiments. Based on theoretical calculations as well as a practical approach, we determined that the critical nitrite concentrations for nitrite oxidizing bacteria lie between 12 and 30 mgN/L at pH 6 to 6.8. Consequently, we decided that the sensor should be able to reliably measure concentrations up to 50 mgN/L, which is about double the value of the critical nitrite concentration. We found that the influences of various ambient conditions, such as temperature, pH, electric conductivity and aeration rate, in the ranges expected in urine nitrification systems, are negligible. For low nitrite concentrations, as expected in municipal wastewater treatment, the tested amperometric nitrite sensor was not sufficiently sensitive. Nevertheless, the sensor delivered reliable measurements for nitrite concentrations of 5–50 mgN/L or higher. This means that the amperometric nitrite sensor allows detection of critical nitrite concentrations without difficulty in high-strength nitrogen conversion processes, such as the nitrification of human urine. Introduction In many wastewater treatment processes, maintaining low nitrite concentrations is crucial for several reasons. First, the effluent discharge limit for the nitrite concentration must be met. 
In Switzerland it lies at 0.3 mgN/L (Gujer, 2006) due to its toxicity for fish (Kroupova et al., 2005). Second, high nitrite concentrations must be avoided in order to prevent the formation of harmful gases such as nitric oxide and nitrous oxide during nitrification and denitrification (Schreiber et al., 2012). Third, nitrite is an intermediate product in nitrification and exists in equilibrium with its protonated form, free nitrous acid (HNO2), which can inhibit bacteria in wastewater treatment (Zhou et al., 2011). In many processes, such as conventional nitrification, for example nitrification of urine for fertilizer recovery, such inhibition should be avoided. Indeed, once nitrite oxidizing bacteria (NOB) are inhibited by nitrite, the compound accumulates faster and NOB are inhibited even more strongly, which can be detrimental for processes relying on complete nitrification of ammonia over nitrite to nitrate (Udert and Wächter, 2012). In a number of specialized processes, such as SHARON-Anammox (Volcke et al., 2006), DEMON (Wett et al., 2007), and OLAND (Kuai and Verstraete, 1998), it is paramount that NOB growth is limited. The presence of nitrite can cause such growth, in turn leading to reduced energy and electron donor efficiency of these processes. Thus, nitrite is a key variable for optimal operation of many nitrifying processes, regardless of the desirability of NOB activity. An on-line nitrite measurement would be an ideal tool to detect detrimental nitrite concentration levels and to prompt timely corrective actions. To our knowledge, two principles are available today to measure nitrite on-line in wastewater treatment. One is based on a colorimetric measurement, i.e. on-line nitrite analyzers (e.g. nitrite analyzer Liquiline System CA80NO, Endress+Hauser, Reinach, Switzerland; SA9101, Skalar Analytical B.V., Breda, Netherlands). This technique shows no significant drift and has a high sensitivity and specificity.
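The equilibrium between nitrite and free nitrous acid mentioned above is simple acid-base speciation, which is why the inhibitory HNO2 level rises sharply as the pH drops. A minimal sketch (the pKa of roughly 3.25 for HNO2 at 25 °C is an assumed textbook value, and the function name and example inputs are illustrative, not taken from this study):

```python
def free_nitrous_acid(no2_mgN_per_L, pH, pKa=3.25):
    """Free nitrous acid (HNO2) in mgN/L from total nitrite-N and pH.

    Uses the equilibrium HNO2 <-> H+ + NO2-: the protonated fraction
    is 1 / (1 + 10**(pH - pKa)). pKa ~ 3.25 at 25 C (assumed value).
    """
    fraction = 1.0 / (1.0 + 10.0 ** (pH - pKa))
    return no2_mgN_per_L * fraction

# At 30 mgN/L nitrite, the HNO2 level grows by a factor of ~6 going
# from pH 6.8 down to pH 6.0:
print(free_nitrous_acid(30.0, 6.0))
print(free_nitrous_acid(30.0, 6.8))
```

This is consistent with the observation that critical nitrite concentrations for NOB depend on pH: at lower pH, the same total nitrite yields considerably more of the inhibitory protonated species.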
However, the cost of ownership (Ellram, 1995) of an on-line analyzer is typically high due to high hardware costs and intensive maintenance requirements (Mašić et al., 2015; Rieger et al., 2008). Furthermore, it is an ex-situ measurement and needs sample preparation (Rieger et al., 2004). The complexity and costs of such automated sample preparation can be an issue, especially for small treatment plants. Another continuous nitrite measurement is based on the light absorbance measurement principle (Thürlimann et al., 2017). This is a spectrophotometric measurement, i.e. a measurement of light absorbance at multiple wavelengths, and makes use of the fact that many chemical compounds, including nitrite, absorb light differently at different wavelengths. Robust devices built around this principle are available commercially. Importantly, these measurements can be obtained in-situ and do not require sensitive sample preparation steps (Drolc and Vrtovšek, 2010). This type of sensor is sensitive to a variety of inorganic and organic compounds, so that one can use a single device to measure multiple variables at once. Unfortunately, this also means that the absorbance measurements lack specificity. This, in turn, challenges accurate calibration and induces signal drift when the mixture of non-target components that absorb light, e.g., organic matter or solids, deviates from the mixtures observed during sensor calibration. Such a phenomenon is often described as a change of the background spectrum (Thürlimann et al., 2019). While sensor drift can be compensated by fault-tolerant control schemes (Deshpande et al., 2009; Thürlimann et al., 2019), it is generally considered desirable to avoid sensor drift entirely. The amperometric nitrite measurement, which we assessed in this study, is a promising alternative that may lead to the construction of a drift-free on-line nitrite sensor.
With this method, nitrite concentrations in aqueous solutions are measured electrochemically. The sensor consists of three electrodes: a reference electrode, a working electrode (in our case the anode) and a counter electrode (in our case the cathode) (Fig. 1) (Helm et al., 2010). The operation mode is potentiostatic, which means that a constant potential is applied between the reference and the working electrode. When a chemical substance reacts at the working electrode, e.g. nitrite is oxidized at the anode, electrons are transferred between the substance and the working electrode. When, at the same time, a chemical substance reacts at the counter electrode, e.g. protons are reduced to hydrogen at the cathode, an electric current flows between the working and counter electrode (Hamann and Vielstich, 2005). This current can be correlated with the concentration of the analyte, i.e. the substance that reacts at the working electrode. In contrast to the potentiometric measuring principle, which is based on a passive registration of the potential difference between two electrodes (Schwedt et al., 2016) and is applied in ion-selective electrode (ISE) sensors for ammonium or nitrate, amperometric measurement is based on actively controlling the potential at the working electrode and measuring the resulting electric current. The amperometric measuring principle has been studied and commercialized and normally requires a membrane or some other diffusion barrier (Helm et al., 2010) in order to achieve selectivity. Amperometric sensors are available e.g. for dissolved oxygen (Clark electrode) (Hamann and Vielstich, 2005), ozone (e.g. 9185 sc ozone amperometric analyzer, Hach Lange GmbH, Rheineck, Switzerland) (Helm et al., 2010) and free chlorine (e.g. analog free chlorine sensor CCS51, Endress+Hauser, Reinach, Switzerland). According to the review of Helm et al.
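The current-to-concentration relationship described above is, in the simplest case, a straight line. As a sketch, the snippet below fits a line to hypothetical calibration points (current density vs. nitrite concentration) and inverts it to estimate a concentration from a measured current; the slope and offset values are illustrative assumptions, not the sensor's actual calibration.

```python
# Hypothetical linear calibration of an amperometric sensor: current
# density j [A/m^2] as a function of nitrite concentration c [mg N/L].
# All calibration points below are illustrative, not measured values.

def fit_calibration(concs, currents):
    """Ordinary least-squares line j = a + b*c; returns (offset, sensitivity)."""
    n = len(concs)
    mc = sum(concs) / n
    mj = sum(currents) / n
    b = (sum((c - mc) * (j - mj) for c, j in zip(concs, currents))
         / sum((c - mc) ** 2 for c in concs))
    a = mj - b * mc
    return a, b

def current_to_conc(j, a, b):
    """Invert the calibration line to estimate the nitrite concentration."""
    return (j - a) / b

a, b = fit_calibration([0, 10, 25, 50], [0.0, 0.88, 2.20, 4.40])
conc = current_to_conc(2.20, a, b)   # -> 25.0 mg N/L
```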
(2010), temperature influences amperometric measurements since it affects the rate of diffusion. With increasing temperature, the width of the diffusion layer decreases and the diffusion coefficient increases, which causes a higher electric current (Ahmad, 2006). Furthermore, a low pH could cause a higher signal because more protons are available for reduction at the cathode. We also expect an effect of the electric conductivity of the electrolyte, since it facilitates the exchange of electrons in the liquid. Mixing conditions could also have an impact on the sensor signal, because they influence the thickness of the diffusion layer (Helm et al., 2010). Furthermore, the oxygen concentration could influence the amperometric measurement, since oxygen has a high redox potential and is easily reduced with protons to water at the cathode. Air bubbles could also affect the measurement, since they pose both electrochemical resistance and mass transfer barriers to the electrode reactions (Zhang and Zeng, 2012). The aeration rate in the nitrification reactor combines these effects. The aim of this study was to evaluate whether an amperometric sensor allows a reliable in-situ measurement of nitrite, which could later be used for process control of urine nitrification. For this purpose, we determined the required upper limit of the working range of a nitrite sensor for urine nitrification. Furthermore, we studied the robustness of the sensor signal against changes in operational conditions such as temperature, pH, electric conductivity and aeration rate. We also assessed the effects of typical wear-and-tear of the amperometric nitrite sensor, i.e. we investigated the drift behaviour as well as fouling, which is one important root cause of drift. The overall hypothesis of our study was that an amperometric sensor is a robust and simple method to monitor nitrite concentrations during urine nitrification.
Materials and methods In the following chapters we describe the amperometric nitrite measurement principle and the experimental design we used to test its hardware implementation. Furthermore, we explain how we assessed the necessary upper limit of the working range for urine nitrification and how the collected data were analysed. Finally, we give an overview of the conducted experiments. Amperometric nitrite measurement In our study of the amperometric sensor, the nitrified urine was the electrolyte and nitrite the analyte. We chose graphite for the working and counter electrodes because it is rather cheap and has been applied in urine treatment before. A potentiostat (PGU 10V-IMP-S, IPS Elektroniklabor GmbH & Co. KG, Germany) and the software EcmWin (V2.4) were used to hold the required potential between the reference and working electrode as well as to measure the resulting electric current. In amperometric nitrite measurement, the target reaction is the selective oxidation of nitrite to nitrate at the anode, which is the working electrode. Previous experiments in our lab showed that an anode potential of 1.20 V vs. the standard hydrogen electrode (SHE) is suitable for a selective nitrite oxidation at the anode. Fig. 1 shows a scheme of the measurement setup. We studied two variants of the amperometric nitrite sensor, neither of which was equipped with a membrane or another diffusion barrier. Firstly, a large sensor was tested, which is a handmade prototype (Fig. 2A): Two graphite electrodes (isostatically fine pressed graphite, Ø = 7 mm, R 8650, SGL Carbon GmbH, Germany) served as working and counter electrode. A silver-chloride reference electrode was used (Ag/AgCl, E = +0.21 V vs. SHE, InLab® Reference, in 3 M KCl, Mettler-Toledo, Switzerland) and was mounted into a Luggin capillary filled with 3 M KCl solution. The Luggin capillary and the graphite electrodes were put into a rubber cone and inserted into a PVC pipe.
The electrodes were placed such that they did not protrude from the rubber cone and had a reactive surface of 38 mm² each. The potential between the reference and the working electrode was set to a value of E = 0.99 V vs. Ag/AgCl, which corresponds to 1.20 V vs. SHE. The second variant of the sensor was smaller and manufactured by IPS Elektroniklabor GmbH & Co. KG, Germany (Fig. 2B). Three graphite electrodes were used (Ø = 3 mm, R 8650, TVB GmbH, Germany): two of them served as working and counter electrode, respectively, and the third electrode was the reference electrode. Experiments in our lab resulted in an approximate standard electrode potential of graphite as a reference electrode in nitrified urine of 0.31 V vs. SHE. The electrodes were sealed with a highly insulating material (PURe Isolation ST 33, copaltec GmbH, Germany) within the housing (Ø = 12 mm). The electrodes did not protrude and had a reactive surface of 7 mm² each. The potential between the reference and the working electrode was set to a value of 0.89 V vs. graphite, which approximately corresponds to 1.20 V vs. SHE. Experimental setup We tested the amperometric nitrite sensor in a continuous stirred-tank reactor (CSTR) for urine nitrification at Eawag in Dübendorf. The liquid volume was between 120 L and 140 L and the reactor was operated with a two-point control based on pH, as implemented by Udert and Wächter (2012). The average total ammonia concentration of the urine that was fed to the reactor was 2870 ± 330 mg N/L and the pH was 8.8 ± 0.1. Further details on the operation of the CSTR can be found in Fumasoli et al. (2016). Determination of the required upper limit of the working range In order to define the upper limit of the nitrite range to be covered by the sensor, we determined the optimal nitrite concentration for urine nitrification. According to Anthonisen et al. (1976), NOB use NO2− as substrate but are inhibited by HNO2.
As a consequence, the optimal nitrite concentration for NOB activity depends on the pH value the NOB are exposed to. Based on the model of Fumasoli (2016) and constants derived by Reimann (2019), we derived an equation for NOB activity (Equation (1)), determined its maximum and solved it for the nitrite concentration [NO2−]opt (Equation (2)). We defined [HNO2] according to the acid-base equilibrium as a function of the nitrite activity, the pH and the dissociation constant of nitrite. The ionic strength was estimated to be 0.16 mol/L based on the concentrations of the major inorganic ions. Using this ionic strength and the Davies approach (Stumm and Morgan, 1996), we calculated an average activity coefficient of 0.75 for singly charged ions. The detailed explanation of this assessment can be found in the Supplementary Material S1.1.1. Besides the theoretical calculations, we estimated the nitrite concentration for maximum activity of NOB based on results of Thürlimann et al. (2019). According to their study, the course of the nitrite concentrations during nitrite degradation by NOB shows an inflection point, which corresponds to the value where the NOB are most active. At this point the kinetics of NOB change from substrate inhibition to substrate limitation. Based on this approach, we determined the optimal nitrite concentration at pH values between 5.9 and 6.1. Details can be found in the Supplementary Material S1.1.2. Signal processing The measurement frequency for both nitrite sensors was 1 Hz. For all in-situ experiments, the data were smoothed with a 30 s median. For the evaluation of the response time of the amperometric nitrite sensor (see Chapter 2.5.4), the sensor signal was not smoothed but used in its raw form. The sensor signal measured during the signal trend experiment (see Chapter 2.5.6) was smoothed with a 5 min moving median.
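The Davies correction mentioned above can be reproduced in a few lines. The sketch below uses the standard Davies equation with the Debye-Hückel constant A ≈ 0.509 at 25 °C (an assumption; the paper cites Stumm and Morgan, 1996) and the ionic strength of 0.16 mol/L estimated in the text; it yields an activity coefficient of about 0.757 for singly charged ions, consistent with the reported value of 0.75.

```python
import math

def davies_gamma(ionic_strength, z=1, A=0.509):
    """Davies equation for a single-ion activity coefficient at 25 degC."""
    sqrt_i = math.sqrt(ionic_strength)
    log10_gamma = -A * z ** 2 * (sqrt_i / (1 + sqrt_i) - 0.3 * ionic_strength)
    return 10 ** log10_gamma

# ionic strength of 0.16 mol/L as estimated in the text
gamma = davies_gamma(0.16)   # ~0.757, consistent with the reported 0.75
```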
For the other ex-situ experiments, the median over 2.5 min was taken for each data point, starting 30 s after the sensor was put into the stock solution, in order to minimize initial response time effects. Determining the calibration curve For each month, we took all data points which lay between 0 mg N/L and the upper limit of the working range and fitted a linear curve with a least squares fit. The standard deviation of the method s_xo was calculated according to ISO (1990). The lower s_xo, the better the performance of the analytical method. Furthermore, we calculated the coefficient of determination R², which is a statistical measure of how well the model represents the real measurements (Fahrmeir et al., 2016). An R² of 1 means that the model perfectly fits the data points. Model structure identification We used a polynomial regression model describing the sensor signal, i.e. the current density, as a function of the variables nitrite, temperature and pH. Polynomial terms up to a power of 3 were tested, leading to a total of 10 terms for the most complex model (Equation (3)). In order to test which terms improve the model, we created all possible combinations of these terms by setting some of the parameters to zero. This resulted in 2^10 = 1024 different models, which were compared according to their root mean square error (RMSE). We compared the most complex function that includes all terms (Equation (3)) to the linear model only including the nitrite concentration (Equation (4)) in order to determine whether the inclusion of temperature and pH improved the estimation of the electric current. In the Supplementary Material S1.2, the model identification procedure is explained in more detail. Reference nitrite concentrations were determined with cuvette tests (Hach Lange GmbH, Germany). With these cuvette tests, we typically achieve a measurement accuracy of 1%.
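The exhaustive model-structure search described above can be sketched as follows: every subset of the ten candidate polynomial terms is fitted by ordinary least squares and scored by RMSE. The data are synthetic, generated under the paper's eventual finding that the current density depends nearly linearly on nitrite only; all numeric values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
no2 = rng.uniform(0, 50, n)           # nitrite, mg N/L
T = rng.uniform(22.5, 26.5, n)        # temperature, degC
pH = rng.uniform(6.0, 7.1, n)
j = 0.088 * no2 + rng.normal(0.0, 0.05, n)   # synthetic current density, A/m^2

# design matrix with all 10 candidate terms (intercept plus powers up to 3)
X = np.column_stack([np.ones(n),
                     no2, no2**2, no2**3,
                     T, T**2, T**3,
                     pH, pH**2, pH**3])

def rmse(cols):
    """Least-squares fit on the selected columns; return the training RMSE."""
    beta, *_ = np.linalg.lstsq(X[:, cols], j, rcond=None)
    return float(np.sqrt(np.mean((X[:, cols] @ beta - j) ** 2)))

# evaluate every non-empty subset of the 10 terms (2**10 - 1 = 1023 models)
results = {}
for mask in range(1, 2 ** 10):
    cols = [c for c in range(10) if mask >> c & 1]
    results[tuple(cols)] = rmse(cols)

rmse_linear = results[(0, 1)]          # intercept + nitrite (Model B analogue)
rmse_full = results[tuple(range(10))]  # all 10 terms (Model A analogue)
```

On such data the full model cannot beat the nitrite-only model by much, which mirrors the paper's conclusion that temperature and pH terms add complexity without reducing the RMSE considerably.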
Temperature and pH In order to test the sensor's dependency on temperature and pH, 79 random samples from the urine nitrification reactor within the aimed nitrite range, at temperatures between 22.5 and 26.5 °C and at pH values between 6.0 and 7.1, were taken and compared with current density measurements with the large sensor (Supplementary Material S1.3). The temperature variability resulted from the exhaust heat of a distiller that was placed in the room of the experimental setup; the temperature was not actively controlled. In order to achieve different pH values, the pH set point was adapted in the nitrification control. Aeration Another goal of this study was to assess the influence of different aeration rates on the amperometric nitrite measurement. For this purpose, we collected 58 random samples at aeration rates between 0 and 0.6 Nm³/h with the large sensor from the urine nitrification reactor. In order to obtain different nitrite concentrations for every aeration rate, the pH set point was increased or decreased during each test. Electric conductivity We also tested the influence of electric conductivity. We conducted an ex-situ experiment with stock solutions at nitrite concentrations of 25 mg N/L and 50 mg N/L (NaNO2, assay = 99%, Merck KGaA, Darmstadt, Germany) and at different levels of conductivity. The electric conductivity was adjusted with sodium chloride (assay ≥ 99.5%, Merck KGaA, Darmstadt, Germany) to temperature-corrected values of 10, 20, 30, 40 and 50 mS/cm. We further produced stock solutions at 25 mg N/L and 50 mg N/L without adding sodium chloride, which resulted in electric conductivities of 0.3 mS/cm and 0.5 mS/cm, respectively. 250 mL of each stock solution were mixed with a magnetic stirrer (color squid white, IKA®, Staufen, Germany) at 400 rpm and the current density was measured with the small sensor. The measurements were repeated 4 times. Response time The assessment of the response time was executed according to ISO 15839 (ISO, 2003).
In this initial experiment, we considered the working range to be 0 to 24 mg N/L. Therefore, the response time assessed in this study is only valid for that range. Details on the procedure and calculations can be found in the Supplementary Material S1.4. Typical wear-and-tear In order to evaluate whether the signals produced by the nitrite sensor exhibit drift, and if so, whether the root cause of drift includes fouling, in addition to corrosion or disintegration of the sensor, we conducted ex-situ validation tests. Stock solutions with nitrite concentrations between 2 and 34 mg N/L were prepared using NaNO2 (assay = 99%, Merck KGaA, Darmstadt, Germany) and nanopure water. No additional salt was added to these solutions. 250 mL of the stock solutions were put into a glass beaker and continuously stirred with a magnetic stirrer at 400 rpm (color squid white, IKA®, Staufen, Germany). The current density was measured for each concentration before and after cleaning the sensor, and a linear curve of the current density as a function of the nitrite concentration was fitted with a least squares fit. First, the ex-situ validation of the sensor was executed weekly, then every two weeks, and the last measurement was executed after the sensor had been in the nitrification reactor for 5.5 weeks. The ex-situ experiments were conducted with the large sensor. The fouling effect on the small sensor was tested with nitrified urine in a similar ex-situ experiment. Nitrified urine without detectable nitrite was collected and spiked with NaN3 (assay ≥ 99%, Merck Schuchardt OHG, Hohenbrunn, Germany) in order to stop biological activity. Six solutions were produced at nitrite concentrations of 0, 10, 20, 30, 40 and 50 mg N/L (NaNO2, assay = 99%, Merck KGaA, Darmstadt, Germany). 250 mL of the stock solutions were stirred with a magnetic stirrer at 400 rpm (color squid white, IKA®, Staufen, Germany).
The experiment was conducted after the sensor had been in the nitrification reactor for 10 days. The current density in each of these solutions was measured before and after cleaning the sensor. A linear curve of the current density as a function of the nitrite concentration was fitted with a least squares fit for the data before and after cleaning the sensor. Ex-situ versus in-situ To determine whether the nitrite sensor can be calibrated ex-situ, we compared the calibration curve obtained from an ex-situ experiment with the small sensor, using synthetic nitrite solutions at 0, 5, 10, 15, 20, 25, 35, 40, 45 and 50 mg N/L (NaNO2, assay = 99%, Merck KGaA, Darmstadt, Germany) and an electric conductivity of 16 mS/cm (temperature corrected, NaCl, assay ≥ 99.5%, Merck KGaA, Darmstadt, Germany), with the curve from the in-situ measurements. The in-situ data were collected in November 2018 with the small sensor. This experiment was not conducted with the large sensor. Furthermore, the signal trend of a synthetic nitrite solution at a nitrite concentration of 25 mg N/L (NaNO2, assay = 99%, Merck KGaA, Darmstadt, Germany) and an electric conductivity of 16 mS/cm (NaCl, assay ≥ 99.5%, Merck KGaA, Darmstadt, Germany) was observed with the small sensor over 400 min while it was stirred with a magnetic stirrer at 400 rpm (color squid white, IKA®, Staufen, Germany). The pH and the dissolved oxygen concentration were measured and logged (pH 340 and Cond 340i, WTW, Weilheim, Germany) in order to be able to investigate the influence of these parameters. Determination of the required upper limit of the working range Our calculations resulted in optimal nitrite concentrations [NO2−]opt of 12-30 mg N/L at a pH between 6.0 and 6.8 (Fig. 3). These values are based on Equation (2). Based on the inflection point approach of Thürlimann et al. (2019), we estimated that this inflection point occurs at 12 mg N/L at a pH between 5.9 and 6.1, which confirms the theoretical calculations shown in Fig. 3.
To ensure maximum NOB performance and prevent inhibition by nitrite, the urine nitrification reactor should be operated close to [NO2−]opt. According to ISO 8466-1 (ISO, 1990), the most frequently expected concentration should lie in the centre of the working range of an analytical method. Since we expect the optimal nitrite concentration to be in a range of 12-30 mg N/L, the upper limit of the working range should be between 24 and 60 mg N/L. Our subjective yet informed decision is that the sensor should be able to reliably measure concentrations up to 50 mg N/L. Data calibration The linear calibration curves of all months resulted in a standard deviation of the method s_xo of 4 mg N/L or less (Table 1). The measurements from July 2018 resulted in a particularly low standard deviation because the four samples from that month were collected during one day only, so that environmental conditions and sensor properties were very similar for all four measurements. The coefficients of determination R² show that, overall, quadratic curves did not result in substantially better fits than linear curves. More details on the calibration curves can be found in the Supplementary Material S2.1. As an example of a data fit, Fig. 4 shows the measurements with the large sensor in April 2018 and with the small sensor in October 2018 with a linear fit and the 95%-confidence interval. It can be seen that the offsets of the functions are close to 0 A/m² and the 95%-confidence interval covers 0 A/m². Since all fits showed similarly small offsets and linear behaviour, we assume that a linear curve through the origin is suitable for calibration. As a consequence, for practical applications a one-point calibration of the sensitivity only might be sufficient. However, in our investigations, we determined both the offset and the sensitivity to obtain maximum accuracy. The spread of the nitrite concentration residuals generally increased with increasing nitrite concentrations (Fig. 5).
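The calibration statistics used above can be sketched as follows. Under ISO 8466-1, the standard deviation of the method is s_xo = s_y/b, where s_y is the residual standard deviation of the calibration line (n − 2 degrees of freedom) and b its slope; R² is computed in the usual way. The calibration points below are illustrative, not measurements from the study.

```python
import math

def calibration_stats(concs, currents):
    """Linear calibration j = a + b*c with ISO 8466-1 style statistics."""
    n = len(concs)
    mc = sum(concs) / n
    mj = sum(currents) / n
    sxx = sum((c - mc) ** 2 for c in concs)
    b = sum((c - mc) * (j - mj) for c, j in zip(concs, currents)) / sxx
    a = mj - b * mc
    resid = [j - (a + b * c) for c, j in zip(concs, currents)]
    s_y = math.sqrt(sum(r * r for r in resid) / (n - 2))   # residual SD
    s_xo = s_y / b                                         # SD of the method
    ss_tot = sum((j - mj) ** 2 for j in currents)
    r2 = 1.0 - sum(r * r for r in resid) / ss_tot
    return a, b, s_xo, r2

# illustrative calibration points (mg N/L vs. A/m^2), not study data
concs = [0, 5, 10, 20, 30, 40, 50]
currents = [0.05, 0.42, 0.95, 1.70, 2.65, 3.50, 4.35]
a, b, s_xo, r2 = calibration_stats(concs, currents)
```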
However, the relative prediction residuals, that is, the residuals after division by the nitrite concentration (Fig. 6 and Fig. 7), were nearly constant above a nitrite concentration of 5 mg N/L: 90% of the data above 5 mg N/L have a maximum relative deviation of 20% and 21% for the large and the small sensor, respectively. This means that the standard deviation of the measurement error is close to proportional to the measured value in this range. Below 5 mg N/L, the relative deviation increases substantially with decreasing nitrite concentration. Therefore, we conclude that measurements below 5 mg N/L have large errors and that the sample for the one-point calibration should be at a nitrite concentration higher than 5 mg N/L. We suggest choosing a point close to the centre of the working range, since this is the concentration we expect most frequently, i.e. around 25 mg N/L (see Chapter 3.1). Temperature and pH The results obtained in April to May 2018 with the large sensor are discussed first. Experiments with the small sensor confirmed the signal's dependencies on the nitrite concentration, temperature and pH. The data obtained with the large sensor are shown here, while the data obtained with the small sensor are shown in the Supplementary Material S2.2. The polynomial regression showed that the current density can be described well when neglecting pH and temperature. The most complex model (Model A, Equation (3), with the terms [NO2−], [NO2−]², [NO2−]³, T, T², T³, pH, pH² and pH³) resulted in an RMSE of 0.7 A/m², comparable to the linear model (Model B, Equation (4)). Including temperature and pH increases the complexity of the model, yet does not offer a considerably lower RMSE than the linear model. Furthermore, when compared visually (Fig. 8B), it is apparent that the residuals of Model A and Model B lie in the same range. Therefore, we conclude that the influence of temperature and pH on the current density signal was negligible in our experiments. The standard deviation of the method s_xo of Model B was 3.3 mg N/L.
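The relative-deviation screening described above can be sketched as below: prediction residuals are divided by the reference concentration, and the 90th percentile of the absolute relative deviation is evaluated for samples above 5 mg N/L. The error model (a standard deviation roughly proportional to the value, plus a small floor) is an illustrative assumption, chosen only to reproduce the qualitative behaviour reported here.

```python
import random

random.seed(3)

# synthetic reference concentrations and measurements with an error whose
# standard deviation grows roughly proportionally to the value (plus a floor)
ref = [random.uniform(1, 50) for _ in range(200)]
meas = [c + random.gauss(0, 0.08 * c + 0.3) for c in ref]

# absolute relative deviations for samples above 5 mg N/L
rel_dev = sorted(abs(m - c) / c for c, m in zip(ref, meas) if c > 5)

# empirical 90th percentile of the relative deviation
p90 = rel_dev[int(0.9 * len(rel_dev)) - 1]
```

Because the absolute error floor dominates at low concentrations, the same construction reproduces the sharp rise of the relative deviation below 5 mg N/L that motivates the calibration-point recommendation above.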
Aeration The assessment in June 2018 showed that the aeration rate does not influence the signal of the large sensor, except at an aeration rate close to 0 Nm³/h (Fig. 9). When aeration was between 0 and 0.05 Nm³/h ± 3%, a lower signal was measured, which we assume to be caused by diffusion limitation, i.e. a limitation of the transport of nitrite to the electrode due to poor mixing conditions. We merged the data from all assessed aeration rates, omitting the data at an aeration rate of approximately 0 Nm³/h. We fitted a linear curve to the current density as a function of the nitrite concentration with a least squares fit. This resulted in a standard deviation of the method s_xo of 3.3 mg N/L. We conclude that the aeration rate does not influence the signal as long as the aeration rate in the reactor is 0.2 Nm³/h or more. In the experiments with the small sensor, we also did not find any influence of aeration on the measurement signal when aeration was 0.2 Nm³/h or higher. The results can be found in the Supplementary Material S2.3. Electric conductivity The assessment of the influence of electric conductivity showed that a difference in electric conductivity does not affect the sensor signal at 10 mS/cm or more (Fig. 10). However, as expected, at the low electric conductivities of 0.3 mS/cm and 0.5 mS/cm, for 25 mg N/L and 50 mg N/L respectively, we measured a lower current density. If the liquid to be measured is not conductive, an electrochemical measurement is not possible. Table 1. Number of samples and their mean, standard deviation of the method s_xo and coefficient of determination R² of the monthly separated data from the large and small sensor. The temperature was never controlled, but natural fluctuations of the room temperature occurred. Response time The assessment of the response time resulted in a rise time of 1.4 ± 0.8 s and a fall time below 5.2 ± 1.3 s.
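A step-response evaluation in the spirit of ISO 15839 can be sketched as follows: after a concentration step, the response time is read off as the time to reach 90% of the final signal (t90). The first-order time constant of 0.6 s and the sampling interval are illustrative assumptions, not the sensor's measured dynamics.

```python
import math

dt = 0.1    # sampling interval, s (illustrative; the sensors logged at 1 Hz)
tau = 0.6   # assumed first-order time constant, s (illustrative)

t = [k * dt for k in range(100)]
signal = [1.0 - math.exp(-ti / tau) for ti in t]   # unit concentration step

final = signal[-1]
# response time: first instant at which 90 % of the final value is reached
t90 = next(ti for ti, s in zip(t, signal) if s >= 0.9 * final)
```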
These results were obtained with synthetic nitrite solutions and response times in nitrified urine might be slightly different. Nevertheless, the experiment indicates that the sensor reacts within a few seconds to changes in the nitrite concentration. 3.5. Typical wear-and-tear 3.5.1. Drift In Fig. 11, one can view the calibration results obtained in the ex-situ experiments with the large sensor for a period of 4 months. The slopes of the fitted linear curves were 0.087 ± 8% (A/m²)/(mg N/L) and the offsets were in a range of −0.08 to 0.10 A/m², with only one exception on May 22, 2018, thus suggesting a lack of drift for the offset as well as the sensitivity. The in-situ measurements (Fig. 12A and B) also showed a rather stable offset close to 0 A/m² (large sensor: 0.20-1.22 A/m², small sensor: 0.00-0.29 A/m²) while the sensitivity of the calibration curve changed over time (large sensor: 0.349 ± 18% (A/m²)/(mg N/L), small sensor: 0.118 ± 61% (A/m²)/(mg N/L)). The sensitivity of the small sensor showed a stronger drift than the one of the large sensor, which might have two causes. First, the smaller reaction surface makes the sensor more susceptible to fouling or changes in the environmental conditions. Second, the reference electrode in the small sensor was graphite, while the large sensor had a conventional silver chloride reference. The graphite reference electrode might have been more prone to aging effects. Fouling Control measurements did not show any significant fouling effects on the large sensor during its operation in the nitrification reactor, even after 5.5 weeks operation in the reactor. The ex-situ calibration curves, determined in synthetic nitrite solutions, show very similar slopes before and after cleaning (Supplementary Material S2.4). In contrast, the ex-situ experiment with the small sensor, which was executed with nitrified urine, showed a stronger effect of fouling (Supplementary Material S2.4).
The offsets lie close together while the sensitivity of the calibration curve is different before and after cleaning the sensor. As expected, the signal is higher after cleaning since there is no biofilm that either consumes the nitrite before it is measured or limits nitrite diffusion. The different outcome of these experiments might be due to differences in the sensor construction. However, we assume that the different results were caused by the medium composition, which varies in factors such as particles, background matrix, electric conductivity or density. Ex-situ versus in-situ We found that the amperometric nitrite sensor should be calibrated in-situ rather than ex-situ. The small sensor measured a higher current density ex-situ (synthetic nitrite solution with an electric conductivity of 16 mS/cm) than in-situ. For both calibration curves the offsets were close to 0 A/m² while the slopes were different (Fig. 13A). The experiment was not conducted with the large sensor. Another experiment showed that the sensor measured the current density of the synthetic nitrite solution with a downward drift over the measurement time of 400 min while the pH and the dissolved oxygen concentration stayed nearly stable (Fig. 13B). This suggests that it takes at least 400 min to form an equilibrium between reaction kinetics, diffusion and electric field. This supports our recommendation to calibrate the sensor in-situ. Influence of environmental conditions We found that the effects of temperature, pH, aeration, electric conductivity and dissolved oxygen concentration on the amperometric nitrite sensor are negligible. In the assessed ranges of temperature and pH (22.5-26.5 °C and 6.0-7.1, respectively), no significant influence was detected. Furthermore, we did not find any dependency on the aeration rate, as long as the urine nitrification reactor was aerated. Neither did a variation in electric conductivity cause any change in the sensor signal at 10 mS/cm or more.
The amperometric nitrite sensor is well suited for urine nitrification, since pH, temperature and electric conductivity are usually in ranges for which we did not find any significant influence. Fig. 11. Results from the ex-situ drift analysis with the large sensor. The data points were collected after the sensor was cleaned. Drift While there was no noticeable drift in the ex-situ experiments, we observed that the amperometric nitrite sensor drifts in-situ, i.e. the sensitivity varies while the offset stays rather stable around 0 A/m². Further research is necessary in order to find out whether the sensitivity decreases steadily or whether a seasonal variability can be observed. Nevertheless, we propose that the drift can be compensated for by regular calibration, for which a one-point calibration may be sufficient. We suggest monthly in-situ calibration of the amperometric nitrite sensor. Another option is to quantify and include the drift in process control. Thürlimann et al. (2019) showed that a nitrite sensor prone to offset drift can be used for stabilizing control in urine nitrification by applying qualitative trend analysis. Note, however, that the amperometric measurement exhibits drift of the sensitivity, which one may not be able to account for with qualitative trend analysis. Accuracy of the amperometric nitrite sensor The calibration curves of the in-situ experiments for each month resulted in a coefficient of variation of the method V_xo, that is, the standard deviation of the method s_xo divided by the mean concentration, of 17% or less for the nitrite concentration range of 0-50 mg N/L. In comparison, according to Hach Lange GmbH, spectrophotometric cuvette tests have a V_xo of 3% for the range of 0.6-6.0 mg N/L (LCK 342, Hach Lange GmbH, Germany) and for the range of 0.015-0.6 mg N/L (LCK 341, Hach Lange GmbH, Germany). As expected, the amperometric nitrite sensor signal is not as accurate as lab-based measurements.
However, the sensor is especially valuable as an automated on-line measurement. Strengths and weaknesses of sensors for nitrite measurement To our knowledge, three measurement principles have been tested for on-line nitrite measurement in wastewater. Two of these are available commercially in the form of nitrite analyzers (colorimetric measurement principle) and spectrophotometric sensors (light absorbance measurement principle). A third principle, amperometric measurement, is tested for the first time in this work. Analyzers have the particular advantage of providing measurements that can be unbiased, precise and drift-free. One disadvantage is that this measurement is produced ex-situ with a measurement cycle that lasts 10-20 min, thus leading to a delay that may be significant for process control purposes. This also means the measurement frequency is fairly low. Even more important is that significant maintenance efforts are required for this type of equipment. This includes upkeep of the sample preparation system (e.g., filter replacement) and ensuring that reagents are both fresh and available in sufficient amounts. In contrast, spectrophotometric instruments can be used in-situ, do not require sample preparation, and can be equipped with self-cleaning devices such as pressurized air nozzles, brushes or wipers. This reduces the cost of maintenance, while not eliminating it entirely. A key advantage of spectrophotometric measurements is that they are sensitive to many compounds. This, however, induces a lack of specificity, which has to be compensated by specialized knowledge, such as the execution of calibration experiments and software tools for information extraction (Mašić et al., 2015; Thürlimann et al., 2019). The proposed nitrite sensor combines advantages of both analyzers and spectrophotometric measurements: it is simultaneously sensitive and specific to nitrite and is expected to require only limited maintenance efforts.
In addition, it can be produced from cheap materials. The amperometric nitrite measurement is not yet a mature technology, and therefore further testing, including sensor and model validation, is necessary. The observed drift of the sensor's sensitivity in in-situ deployment is the only known detractor at this moment, which needs to be quantified and addressed in future research. Application We found that 90% of the data above 5 mg N/L have a maximum relative deviation of 20% and 21% for the large and the small sensor, respectively. The treatment of municipal wastewater requires precise nitrite control at low concentrations; otherwise we risk nitrous oxide production as well as exceedance of the effluent discharge limit of 0.3 mg N/L for the nitrite concentration (Gujer, 2006). Since the relative prediction residuals increase strongly below 5 mg N/L (see Section 3.2), we do not recommend the sensors in the current configuration for nitrite measurement in municipal wastewater treatment, where nitrite concentrations below 5 mg N/L can already be critical. In contrast, higher nitrite concentrations are tolerated in urine nitrification. Our calculations suggest that nitrite concentrations of e.g. 12 mg N/L at a pH of 6.0 or 30 mg N/L at a pH of 6.8 must not be exceeded in order to prevent nitrite accumulation. Based on our experience with the sensor, we assume that with the amperometric nitrite sensor, high nitrite concentrations can be avoided without difficulty. Therefore, we conclude that the sensor is a promising tool for process control of urine nitrification. Conclusions With standard deviations below 4 mg N/L and a minimum linear nitrite range of 0-50 mg N/L, the amperometric nitrite sensor covers the critical range of nitrite accumulation and is well suited for on-line nitrite monitoring and process control in urine nitrification.
We expect that the current amperometric sensor is also well suited for controlling nitrification of other high-strength nitrogen solutions such as digester supernatant. However, an increase in sensitivity is necessary for nitrite control in the mainstream of municipal wastewater treatment. Drift corrections are necessary, but monthly calibration intervals are sufficient. We propose that a one-point calibration of the sensitivity is suitable. The sensors must be calibrated in-situ. In urine nitrification, the increased nitrite concentrations required for the one-point calibration can be triggered by short increases of the inflow.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Kai M. Udert is co-owner of the Eawag spin-off Vuna Ltd, which has a license for electrochemical nitrite removal and control. Peter Schrems produced the small nitrite sensor.
Mechanism and Therapeutic Targets of c-Jun-N-Terminal Kinases Activation in Nonalcoholic Fatty Liver Disease

Non-alcoholic fatty liver (NAFL) is the most common chronic liver disease. Activation of the mitogen-activated protein kinase (MAPK) cascade, which leads to c-Jun N-terminal kinase (JNK) activation, occurs in the liver in response to nutritional and metabolic stress. The aberrant activation of MAPKs, especially c-Jun-N-terminal kinases (JNKs), leads to unwanted genetic and epigenetic modifications in addition to the metabolic stress adaptation in hepatocytes. A mechanism of sustained P-JNK activation was identified in acute and chronic liver diseases, suggesting an important role of aberrant JNK activation in NASH. Therefore, modulation of JNK activation, rather than targeting JNK protein levels, is a plausible therapeutic approach for the treatment of chronic liver disease.

Introduction

Non-alcoholic fatty liver (NAFL) is a disorder in which excess fat accumulates in the liver (steatosis) due to non-alcoholic or non-viral causes. Non-alcoholic steatohepatitis (NASH) is hepatic steatosis associated with hepatocellular injury, innate immune cell-mediated inflammation, and progressive fibrosis in the liver. A total of 30-40% of adults in the United States have NAFL, defined as hepatic fat >5% of total liver weight, and 3-12% have NASH. Twenty percent of people with NAFL will develop NASH. NASH can progress to irreversible liver diseases such as cirrhosis and hepatocellular carcinoma (HCC). Up to 25% of adults with NASH may have cirrhosis [1][2][3][4][5][6]. While liver transplant is the ultimate solution for cirrhosis or HCC, new drugs are emerging that target molecules and pathways such as stress kinases, de novo lipogenesis, lipid oxidation and transport, lipotoxicity, inflammation and fibrogenesis to prevent or treat steatosis and steatohepatitis in the liver [4,[7][8][9]].
Hypertension and cardiovascular diseases (CVD) due to increased risk of arteriosclerosis, hyperglycemia due to hepatic gluconeogenesis, and hyperinsulinemia due to insulin resistance constitute the metabolic syndrome in NAFL disease [1,3]. Therefore, effective therapeutic management of NAFL disease is required to prevent progression to NASH and its resulting complications. While the mechanism of developing NAFL/NASH is not fully understood, the hepatic metabolic stress response via JNK activation has been identified as a common pathway in numerous models of liver injury [10][11][12][13][14][15]. The relationships of the JNK activation pathway to oxidative stress, lipotoxic stress and cell death, inflammation and cytokines, fibrogenesis, de novo lipogenesis, lipolysis, and lipid oxidation and transport are being studied to determine the mechanistic significance of MAPKs in the development of NASH (Figure 1) [6,[16][17][18][19][20]. In addition, the development of NAFL/NASH is influenced by many common diseases whose primary etiology is not in the liver, such as type II diabetes, hyperinsulinemia, obesity, changes in sex hormone regulation, adipokines and adipose tissue lipolysis, dysfunctional muscle metabolism, bile acid dyshomeostasis, and gut dysbiosis.

Figure 1. Carbohydrate and free fatty acid overload to hepatocytes activates the stress kinase cascade to upregulate de novo lipogenesis genes to adapt to metabolic stress. Dial-up feed-forward activation of the stress kinase cascade through the P-JNK-SAB interaction attenuates β-oxidation and lipid oxidation genes. Damage signals, receptors, and extracellular vesicles from hepatocytes recruit inflammation and activate hepatic stellate cells and fibrogenesis.

Hepatic MAPK Family in Metabolic Stress and the Mechanism of Sustained Activation

The liver is composed of 60% parenchymal cells, i.e., hepatocytes, and 30% to 35% non-parenchymal cells, i.e., Kupffer cells (KCs), hepatic stellate cells (HSCs) and liver sinusoidal endothelial cells (LSECs).
Hepatocytes are the workhorse of the liver and carry out a vast array of metabolic, regulatory, and toxicological functions. Hepatocytes express MAPKs, which transduce extracellular and intracellular signals to regulate cell proliferation, differentiation, apoptosis, and metabolism [30]. Metabolic stress-induced MAPK cascade activation in hepatocytes involves upstream MAPK kinase kinases (MAP3Ks) such as ASK1, MLKs and TAK1, MAPK kinases (MAP2Ks) such as MKK4 and MKK7, and terminal stress kinases such as JNK and p38 [31][32][33][34]. Hyper-nutrition diets, such as those high in fat and carbohydrate, activate hepatic JNK longer than a standard fat and carbohydrate diet [35]. It is important to mention that the activated JNK (P-JNK) level decreases between meals, and overnight fasting restores the low basal level of P-JNK. Hyper-nutrition diets activate JNK through metabolites, endoplasmic reticulum (ER) stress, and mitochondrial stress via modification of acetylation/deacetylation status, transcriptional activation/inhibition, energy metabolism and lipid oxidation, and oxidative stress [36][37][38]. The JNK and p38 stress kinases are self-regulated via the dual-specificity phosphatase (DUSP) group, the vital mechanism that dampens and terminates kinase activation via direct inactivation or inhibition of upstream kinase activation [39,40]. Individual DUSPs differ in tissue expression and subcellular localization, and have differing preferences for the kinases in the MAPK cascade, such as DUSP22 in the ASK1-MKK7-JNK signaling pathway and DUSP5 in the ERK signaling pathway [41][42][43]. DUSP9, 10, 12, 14 and 26 in the liver have been found to prevent hepatic steatosis [44][45][46][47][48][49][50]. In addition, the scaffold function of DUSPs has been noted, and extended reviews on DUSPs are available for further reading [41].
The sustained activation of JNK via a self-amplified feed-forward activation loop has been discovered recently [51] and found to play a key role in several disease models, such as drug-induced acute liver injury and cell death, cytokine-induced hepatic apoptosis, lipotoxicity, ER stress-induced mitochondrial dysfunction and cell death, ischemia-reperfusion injury in the heart, cardiotoxicity, neuronal activity in the brain, neurotoxicity, and ovarian cancer treatment [52][53][54][55][56][57][58][59]. Initially activated P-JNKs translocate to mitochondria, where P-JNKs phosphorylate the mitochondrial outer membrane protein SH3 homology associated BTK binding protein, SH3BP5 (SAB). P-JNK binding to SAB initiates intramitochondrial release of the tyrosine phosphatase SH2 phosphatase 1 (Shp1) from SAB, leading to dephosphorylation of activated Src at tyrosine 419, which occurs on and requires the platform docking protein 4 (DOK4), located on the mitochondrial inner membrane. P-Src is required to maintain electron transport [51]. Decreased P-Src inhibits mitochondrial respiration and enhances reactive oxygen species (ROS) production, especially when mitochondrial aerobic metabolism via the tricarboxylic acid cycle (TCA cycle) is upregulated by Ca2+ flux from the mitochondria-associated membrane (MAM), ER and cytoplasm (Figures 1 and 2) [54]. ROS facilitates ASK1 N-terminal dimerization and activation by oxidation and removal of thioredoxin, which binds and inhibits ASK1 [60]. Activated ASK1 activates MKK4/7, which in turn activates JNKs to amplify and increase the level of P-JNK [30,61,62]. Interestingly, P-MKK4 is associated with P-JNK on mitochondria [63]. Importantly, SAB is a pivotal molecule determining the level of P-JNK in response to cellular stress and damage [51,64].
Depletion or inhibition of ASK1, MKK4 or SAB prevents sustained P-JNK activation and JNK-mediated cellular stress, but does not prevent initial JNK activation by stress inducers such as reactive adducts of toxic drug metabolites, metabolic reprogramming, IRE1α and PERK activation in unfolded protein or ER stress, and membrane-associated Src activated by lipid-induced modification of plasma membrane plasticity and fluidity [52,[65][66][67][68]. Therefore, the feed-forward activation that sustains JNK activation, the JNK-SAB-ROS activation loop, is a key player in the mechanism of cellular stress and damage [30,51]. The JNK-interacting protein (JIP) family plays an integral role in MAPK cascade activation by creating proximity between upstream kinases and terminal kinases [69,70]. JIP1 is a key platform in MLK-mediated MKK7 and JNK activation [71,72]. However, JIP1 also mediates ASK1-MEK-JNK signaling [73], and more than one JIP could be involved [74,75]. We believe MLK activation is an important signal for initial JNK activation, at least in acute liver injury models, because initial JNK activation occurs in the liver of acetaminophen-treated ASK1 KO mice [76]. The roles and functions of JIP2, 3 and 4 are yet to be examined. Stress kinases target various intracellular molecules, both transcriptional and non-transcriptional targets, via a kinase interacting motif (KIM), and substrates are predicted to be phosphorylated at serine/threonine-proline (SP) sites [30,61,62]. As an example, JNK-targeted GCLC is ubiquitinated and degraded, leading to a decreased GSH level and increased oxidative stress, as occurs in acute acetaminophen hepatotoxicity [64,77].
In NASH, P-JNK targeting of transcription factors, such as SREBFs to increase lipogenesis and cholesterol synthesis and NCoR1 to repress PPARα and fatty acid oxidation, leads to cumulative lipotoxic stress [78][79][80]. Therefore, the P-JNK-SAB-ROS activation loop is a key mediator in sustained activation of JNK.

Hepatocytes: Hepatic Steatosis, Oxidative Stress, and Autophagy

Hepatic fat is mostly triglyceride formed by esterification of free fatty acids from the diet, adipose tissue lipolysis and de novo lipogenesis. Fat accumulation in hepatocytes causes lipotoxicity and apoptosis [6,16,53] and ballooning degeneration of hepatocytes, which are associated with oxidative stress in steatosis/steatohepatitis [78]. In the course of disease progression, a high-calorie diet increases mitochondrial function and metabolism accompanied by stress kinase activation [78,81]. Interaction of activated P-JNK with the mitochondrial protein SAB impairs mitochondrial respiration and produces ROS, leading to amplified feed-forward activation of JNK, known as the JNK-SAB-ROS activation loop [30,51,82]. Decreased hepatic SAB (SH3BP5) expression by Sab knockdown or knockout decreases ROS and stress kinase JNK activation in established experimental models of steatosis and steatohepatitis [78]. Notably, increased expression of SAB in the steatotic liver plays a pivotal role in the higher activation of stress kinases and progression to steatohepatitis [64,78]. In line with this observation, depletion of hepatic JNK1/2 prevents diet-induced lipogenesis and steatosis [79]. In addition, WW domain containing transcription regulator 1 (Wwtr1/TAZ)-induced hepatic NADPH oxidase 2 (NOX2/Cybb) expression mediates the oxidative DNA damage in diet-induced NASH and HCC [7]. NOX2 is a superoxide-generating enzyme, which delivers activated oxygen into the phagocytic vacuole of inflammatory cells such as granulocytic neutrophils in NASH [83].
The mechanism may be associated with microbiome dysregulation in the gut and translocation to the liver [84]. Nevertheless, the inter-regulation of the MAPK family and TAZ expression and activity in the progression of NASH needs to be explored. The critical role of JNK activation in the progression of NASH is supported by studies of MAP3Ks such as ASK1 and MLKs, and of DUSPs. For instance, ASK1−/− mice have reduced hepatic steatosis. CASP8 and FADD-like apoptosis regulator (CFLAR) disrupts the N-terminus-mediated dimerization of ASK1, favors ASK1 degradation, and prevents JNK activation and hepatic steatosis [85]. A CFLAR peptide is proposed as a potential therapeutic agent. In addition, upregulation of TRAF1 promotes hepatic steatosis through enhanced ASK1-mediated JNK/p38 activation [86]. However, contradictory data have been reported in hepatocyte-specific ASK1-deleted mice cross-bred with albumin-Cre mice, and in mice treated with an ASK1 inhibitor. Deletion of hepatic ASK1 decreases P-JNK/P-p38, impairs autophagosome formation in the liver, and increases liver triglyceride and fibrosis [87]. Hepatocyte-specific ASK1 expression in mice prevents hepatic steatosis [87]. It should be noted that the content of the diet, the age and background strain of the mice, and the housing environment, such as temperature and microbiome, affect animal models of NASH [88], and the direct or indirect effects of ASK1 and CFLAR on NASH remain to be reconciled. MLK2 and 3 are redundant and abundant in the liver. MLK−/− mice are protected from high-fat diet (HFD)-induced hepatic steatosis and triglyceride accumulation. The mechanism is explained by reduced JNK/p38 activation in HFD-fed MLK−/− mice and in mice treated with the MLK3 inhibitor URMC-099 [89]. However, further studies are required to examine the possible activation of ASK1 in HFD-fed MLK KO mice, and vice versa, to identify their specific or overlapping functions in progression to NASH.
Nevertheless, JNK activation is an important mechanism in the development of hepatic steatosis. The dual role of JNK1/2 in HFD-induced steatosis and steatohepatitis has been elucidated by hepatocyte-specific JNK1/2 depletion studies [79], which settled earlier studies using gene-specific KO mice [91]. Increased JNK activation with the progression of hepatic steatosis reduces peroxisomal β-oxidation by suppressing peroxisome proliferator-activated receptor α (PPARα) activity, in part through the upregulation of nuclear receptor corepressor 1 (NCoR1) [79]. Moreover, the hepatic metabolic stress accompanied by increased JNK activation and increased flux of acetyl-CoA from mitochondria to cytosol favors acetylation of transcription factors and histones, leading to de novo lipogenesis via the transcription factors SREBFs and the activation of downstream lipogenesis genes [92]. In addition, the carbohydrate sensor ChREBP is upregulated following ingestion of a carbohydrate-enriched meal, leading to increased expression of genes regulating lipogenesis and fatty acid esterification that promote liver steatosis [93]. A direct association of JNK activity with ChREBP expression and activation is unknown. Findings on p38 isoform (α, β, γ, δ) expression and activity are difficult to reconcile across several studies of NASH. A recent study using liver-specific p38α KO mice suggested that hepatic p38α protects mice from steatohepatitis in a diet feeding model [94]. However, macrophage p38α promotes progression of steatohepatitis [95]. It is important to stress that pharmaceutical inhibition or knockdown of p38 causes liver injury [96,97], and isoform-specific and conditional studies may be required to examine the hypothesis.
Autophagy, induced by extra- and intra-cellular stress, is an important part of liver homeostasis, removing damaged organelles (mitophagy, ER-phagy, pexophagy), lipid droplets (lipophagy) and protein aggregates, and recycling nutrients to preserve cellular energy, integrity, and survival [98,99]. The mammalian target of rapamycin (mTOR) is one major inhibitor of autophagy [100]. Growth factors and insulin repress autophagy via the PI3K-AKT-mTOR activation pathway [101,102]. During starvation, specifically glucose or amino acid deprivation, the autophagy pathway in the liver is upregulated by interference with mTOR activation via activation of the cAMP-AMPK-TSC1/2 inhibitory pathway [101,102]. In diet-induced NASH, autophagy, especially lipophagy, occurs [103][104][105][106]. Increased lipophagy in NASH may reflect decreased mTOR activity due to P-JNK inhibition of insulin receptor signaling in NASH. However, the direct involvement of P-JNK and the P-JNK activation loop in the lipophagy mechanism is unknown. Nevertheless, pharmaceutical enhancement of autophagy by rapamycin or carbamazepine, which inhibit mTOR, reduced hepatic steatosis in a NASH model [107], and severe hepatic steatosis occurs in mice with hepatic deletion of Atg5 [104]. Viral expression of Atg7 increases autophagy, reduces ER stress, and improves insulin sensitivity [108]. More importantly, ER stress in NASH may upregulate the autophagy pathway, because tunicamycin induces autophagy and the IRE1α-JNK pathway is involved [109,110]. Although JNK1 promotes autophagy through phosphorylation of BCL2, which releases BECN1 from BCL2 to contribute to autophagosome formation [111], JNK1 can also phosphorylate RPTOR at Ser863, which is necessary for assembly of the mTORC1 complex that blocks autophagy [112]. Direct evidence that JNK mediates these effects in NASH models is still needed. Sex differences also exist in NAFLD [113].
The prevalence and severity of NAFLD are higher in men than in women during the reproductive age (age ≤ 50-60 years); however, women after menopause (age ≥ 50-60 years) have higher rates of NAFLD [113]. The hepatic ERα-p53-miR34a signaling axis decreases the SAB protein level in women of reproductive age [64]. This is explained by hormone-mediated suppression of SAB: SAB expression is reduced in adult females compared to males and postmenopausal females [64]. Increased hepatic SAB expression increases JNK activation in steatosis and steatohepatitis [78]. Additional factors, such as sex differences in metabolic homeostasis contributed by skeletal muscle, adipose tissue, and thyroid hormones [114][115][116], are beyond the scope of the current review. Clinical and epidemiological studies have shown that postmenopausal women receiving hormone replacement therapy had a lower prevalence of NAFLD compared to postmenopausal women not receiving replacement therapy [113]. While these findings suggest that estrogen is protective, further studies are needed to define the hepatic molecular mechanisms of sex differences in NAFLD. Sex, age, reproductive status, and synthetic hormone usage need to be included in future clinical investigations and gene association studies of NAFLD [113]. A meta-analysis of recent studies has also shown that the PNPLA3 (also known as adiponutrin) rs738409 [G] allele is associated with an increased risk of diet-related hepatocellular carcinoma [117]. The PNPLA3 Ile148Met variant is resistant to degradation and disrupts ATGL lipolysis activity [118]. In addition, levels of PNPLA3 and CGI-58 in lipid droplets determine the lipolysis activity of ATGL. A recent study demonstrated that c-Jun inhibits RORα-mediated PNPLA3 expression [119], leading to downregulation of ATGL activity. c-Jun is a JNK-activated transcription factor [62].
Thus, the relationship between hepatic stress kinase activity and hepatic lipase expression and activity needs to be further explored. An allele in PNPLA3 (rs738409[G], encoding Ile148Met) is associated with increased liver fat, hepatic inflammation, and fibrosis [120]. P-JNK activates the transcriptional co-repressor NCoR1 and suppresses PPARα activation in NASH, resulting in decreased expression of genes involved in fatty acid transport and oxidative degradation in mitochondria and peroxisomes [79]. Consistent with the hepatocyte-specific JNK deletion model, elafibranor (GFT505, Genfit), a PPARα/δ agonist, saroglitazar, a PPARα/γ agonist, and lanifibranor, a pan-PPAR agonist, normalize serum lipid profiles and insulin resistance and improve NASH [121]. PPAR agonists, one of the most advanced classes of anti-NASH molecules, are in phase II or III clinical studies, though their efficacy and safety may need improvement [122]. In addition, long-term studies in rodents showed an association of PPARα agonists with hepatic carcinogenesis, which could be a species-specific effect [123,124]. In line with this observation, hepatic JNK KO increases cholangiocyte proliferation and intrahepatic cholangiocarcinoma [125]. JNK KO activates the transcription factor PPARα and its target genes related to hepatic cholesterol and bile acid synthesis, resulting in cholestasis. Therefore, a pharmaceutical target that directly interferes with and dampens sustained P-JNK activation is required to overcome the effects of complete deletion of JNK.

Non-Hepatocytes: Immune Cells, Sinusoidal Endothelial Cells, Hepatic Stellate Cells

NASH is hepatic steatosis with liver cell inflammation and innate immune system activation involving Kupffer cells (KCs) and production of pro-inflammatory cytokines in the liver.
Extracellular vesicles, fatty acids, and cholesterol released from steatotic hepatocytes, together with lipopolysaccharides (LPS), activate KCs and hepatic stellate cells (HSCs) via damage-associated molecular pattern receptors, namely Toll-like receptor 4 (TLR4) and the TNF-related apoptosis-inducing ligand death receptor (TRAIL-R2) [126]. Mice without TLR4 in KCs are protected against steatosis and NAFLD progression [126]. KCs activated by hepatocyte-derived mitochondrial DNA produce pro-inflammatory mediators such as TNF-α, monocyte chemotactic protein-1 (MCP-1), transforming growth factor-β (TGF-β), and tissue inhibitors of metalloproteinases, leading to fibrosis of the liver [127]. Moreover, chemokines and chemokine receptors involved in leukocyte recruitment, such as CXCL8/CXCR1; CXCL1 & 3/CXCR2; CCL3-5/CCR5 and the chemokines CXCL9-11 and CCL2 (MCP1), are upregulated in steatohepatitis [128]. Importantly, bacterial toxins, cytokines and chemokines activate JNK, and JNK activation is required for the synthesis of pro-inflammatory cytokines [129]. Blood lymphocytes, monocytes and macrophages, and tissue-resident immune cells ubiquitously express JNK, SAB and MAP kinase cascades [130]. Therefore, the JNK-SAB-ROS activation loop may play a role in sustained activation of P-JNK, and immune cell function and migration are regulated by P-JNK activity. Deletion of SAB or inhibition of JNK kinase activity by SP600125 prevented TNF-induced sustained P-JNK activation and cell death [52]. Consistently, conditional mononuclear-cell JNK KO mice are protected from diet-induced NASH [131]. However, a recent study discovered a differential role of recruited monocytes in steatohepatitis [132], and the role of MAP kinase signaling in macrophage subpopulations needs to be revisited. In addition, CD62E (E-selectin) and CD44, which recruit leukocytes into inflammation sites, are upregulated in NASH patients [128].
P-selectin, produced by platelets and endothelial cells in response to TNF, IL-1, or LPS, upregulates E-selectin expression in liver sinusoidal endothelial cells (LSECs) [133]. Similar to hepatocytes, LSECs express the MAP kinase cascade and SAB, and lipotoxic stress also occurs in LSECs, contributing to NOX1 expression and ROS generation [134]. The gut microbiota also seems to contribute to liver endothelial dysfunction. Restoration of a healthy microbiota via fecal transplantation normalizes portal hypertension by improving intrahepatic vascular resistance and endothelial dysfunction in rats [135]. Gut microbiome dysregulation (dysbiosis) is one of the important factors in the progression of NAFLD to advanced fibrosis and cirrhosis [84]. A low-fiber, high-fat, and high-carbohydrate diet changes the gut microbiome, which in turn changes gut and systemic dietary metabolites, such as acetate, and cytokines. Gut bacteria and their metabolites translocate to the liver through a disrupted gut barrier and induce a hepatic inflammatory reaction [84]. Further diet-induced NASH studies are required to explore the effect of the microbiome on the JNK activation loop in LSECs. JNK activation in HSCs in response to TGF-β and platelet-derived growth factor (PDGF) activates Smad2/3, leading to α-smooth muscle actin (αSMA) expression, migration of resident HSCs and myofibroblasts, and fibrosis in NASH [17]. Notably, follistatin-like 1 (Fstl1), a glycoprotein, is secreted from HSCs/myofibroblasts upon induction by TGF-β1. Fstl1 binds to TGF-β1 and negatively regulates TGF-β1 signaling in lung development. Interestingly, an Fstl1-neutralizing antibody attenuates CCl4-induced liver fibrosis [136]. Further studies are required to reconcile results from developmental and disease models. Importantly, we found that JNK, SAB and the MAP kinase cascade are expressed in HSCs, but how JNK activation is involved in HSC function and liver fibrosis is yet to be examined.
Perspective of the Sustained JNK Activation Loop in Therapeutic Development

Chronic hepatic inflammation causes fibrosis leading to liver cirrhosis. One-carbon metabolism, mitochondrial stress, ER stress, DNA damage, inflammation, and obesity are involved in carcinogenesis in NAFLD. Patients who develop cirrhosis related to NASH are at risk for hepatocellular carcinoma and/or end-stage liver failure and liver transplantation. Currently there is no drug approved by the US Food and Drug Administration (FDA) for the treatment of NASH. Lifestyle alteration remains the only treatment. Steatosis, inflammation, ballooning, and fibrosis were improved in those achieving >5% weight loss, with even greater improvement in patients achieving >10% weight loss. Several molecules are in clinical trials (www.clinicaltrials.gov, accessed on 18 July 2022). However, studies that aim to reduce the level of P-JNK activation by targeting the sustained JNK activation loop are very sparse; those available are mentioned here, and prospective targets are discussed. Oxidative stress and antioxidant supplements: Deletion of Nrf2 results in rapid progression of NASH [137]. In addition, NADPH oxidases (NOXs), which link NAFL progression to NASH and HCC, are membrane-bound enzymatic complexes generating ROS and are abundant in the liver in association with inflammation and immune responses [138]. Evidence shows that oxidized protein adducts increase in the liver during the early steatosis phase of diet-induced NASH [78], suggesting that ROS produced in both hepatocytes and non-hepatocytes contribute to the progression of NASH. Nevertheless, antioxidants such as vitamin E prevent NASH [139]. The effectiveness of vitamin E supplementation is currently under study in clinical trials of NAFLD. Further studies may be indicated because of the possible association of prostate cancer and insulin resistance with long-term usage of vitamin E [140].
JNK and JNK inhibitors: JNK inhibitors inhibit the kinase activity of JNK isoforms in both hepatocytes and non-hepatocytes. SP600125, the most widely tested JNK inhibitor, inhibits JNK kinase activity, blocking self-activation and activation by MKK4, through competitive, reversible inhibition at the ATP-binding site but not at the JNK substrate-binding site [141]. Newer and more specific JNK inhibitors, JNK-IN-8 and JNK-IN-10, are irreversible inhibitors [142]. Chemical inhibitors may potentially target a wide variety of JNK substrates in hepatocytes, KCs, HSCs, sinusoidal endothelial cells, and immune cells in the liver. This non-selective inhibition precludes the development of JNK inhibitors as drugs for clinical use. The antisense oligonucleotide targeting JNK (JNK-ASO), which reduces JNK expression, is a selective potential therapeutic agent and can efficiently target hepatocytes [143]. However, hepatic JNK1/2 embryonic KO studies demonstrated that hepatocyte proliferation at 48 h is reduced in an experimental model of liver regeneration [144], though overall regeneration after 72 h is not different. Importantly, prolonged hepatic JNK deficiency increased cholangiocyte proliferation and intrahepatic cholangiocarcinoma [125]. JNK-SAB interaction and targeting SAB: SAB is a mitochondrial outer membrane protein. The interaction of P-JNK and SAB inhibits mitochondrial respiration [51]. SAB expression is increased in human and murine NAFL and NASH [78]. An increased level of SAB amplifies and sustains JNK activation via the JNK-SAB-ROS activation loop. Blocking the JNK-SAB interaction with a SAB peptide selectively prevents the sustained amplification of P-JNK and cell death in various animal models [51-59,78]. SAB-ASO treatment, which decreases SAB expression and JNK activation, effectively prevents NASH progression and reverses the NASH score [78].
Notably, complete depletion of SAB prevents sustained P-JNK activation but does not interfere with the initial activation of JNK or the stress-response signaling required for cellular response and survival, such as JNK/ATF2-dependent early-phase induction of DUSP1 and 10 and late-phase transcriptional activation of DUSP4 and 16 [30]. JNK-JIP1 interaction and inhibitor: JNK-interacting protein-1 (JIP1) is a scaffolding protein on which upstream MAPKs activate JNK. BI-78D3 is able to compete with the D-domain of JIP1 for JNK binding and thus inhibits JNK activation. BI-78D3 effectively prevents CCl4-induced acute liver injury [145], yet its effect in NASH models needs to be examined. ASK1 and ASK1 inhibitors: Apoptosis signal-regulating kinase 1 (ASK1) activity is directly involved in the P-JNK-SAB-ROS activation loop through redox-sensitive thioredoxin [146,147]. ASK1 is a MAP3 kinase which activates both downstream terminal kinases, JNK and p38. JNK activity on mitochondria corresponds inversely to mitochondrial respiration [51], but the function of p38 on mitochondria is not known. The ASK1 inhibitor selonsertib (GS-444217) ameliorates NASH and improved fibrosis in preclinical studies and in a short-term clinical trial [148] but was not effective in late-phase clinical trials [149]. Selonsertib, which blocks ASK1 activation but not MLKs, may not be enough to revert the progression of NASH in human disease. Notably, hepatic JNK1 and JNK2 are cross-activated by the upstream MAP3Ks ASK1, MLK2/3 and TAK1. MLKs and MLK inhibitors: The stress kinase mixed-lineage kinase (MLK) is also a MAP3K, activating MKK4/7 and then JNK/p38. MLK3-JNK mediates hepatic extracellular vesicle release [150,151]. Genetic or pharmacological inhibition of MLK3 results in reduction of the potent C-X-C motif chemokine ligand 10 (CXCL10) in extracellular vesicles (EVs) derived from LPC-treated hepatocytes.
MLK3−/− mice fed a NASH-inducing diet have reduced CXCL10 levels in their plasma EVs and are protected against liver injury and inflammation. Furthermore, the pharmacological MLK3 inhibitor URMC099 reduces circulating CXCL10 and attenuates murine NASH [151]. It is important to stress that circulating EVs derived from hepatocytes, immune cells and platelets can serve as a biomarker for NAFL resolution in response to weight-loss surgery [152,153]. Preclinical studies of URMC099 in Alzheimer's disease, Parkinson's disease and NASH models are encouraging [154,155]. Efficacy in human clinical trials has not yet been examined.

Conclusions

Understanding the mechanisms of kinase activation and signal regulation will yield therapeutic target molecules for developing safe and effective treatments. However, limitations and caveats exist: kinases are activated by, and interact with, their substrates through shared amino acid sequence motifs, which creates unwanted off-target side effects. Finding targetable proximity molecules in the P-JNK activation loop (Table 1), such as CFLAR in ASK1 activation, could be an alternative strategy. Advances in our current understanding of molecular mechanisms, using improved animal models and small-molecule screening, could support new pharmaceutical development, especially target-specific peptides or antisense oligonucleotides, which are promising technologies for the future prevention and treatment of NASH.

Table 1. Regulators of JNK activation in liver diseases. The MAPKs JNK and p38 are activated by an upstream MAP kinase cascade whose activity is dampened by interacting proteins or proteasomal degradation, or enhanced by membrane-associated signal activation or reactive oxygen species (ROS). Inhibition of the MAP kinase cascade by DUSP-family phosphatases is critical in preventing hepatocyte susceptibility to toxicity and steatosis.
Sustained activation of JNK via SAB-ROS-MAP3K overcomes these cellular protective mechanisms and causes hepatocyte toxicity and lipogenesis. [Table 1 columns: Kinases; Proximity molecules in the context of study; Reference.]

Data Availability Statement: The study did not report any data.

Conflicts of Interest: The authors declare no conflict of interest.
Cognitive capacity limits are remediated by practice-induced plasticity in a striatal-cortical network

Humans show striking limitations in information processing when multitasking, yet can modify these limits with practice. Such limitations have been attributed to the capacity of a frontal-parietal network, but recent models of decision-making implicate a striatal-cortical network. We adjudicated these accounts by implementing a dynamic causal modelling (DCM) analysis of a functional magnetic resonance imaging (fMRI) dataset, where 100 participants completed a multitasking paradigm in the scanner, before and after engaging in a multitasking (N=50) or an active control (N=50) practice regimen. We observed that multitasking costs, and their practice-related remediation, are best explained by modulations in information transfer between the striatum and the cortical areas that represent stimulus-response mappings. Neither multitasking nor practice modulated direct frontal-parietal connectivity. Our results support the view that limits in cognitive capacity are striatally driven, and moderated by the interplay of information exchange from the putamen to the pre-supplementary motor area.

Although human information processing is fundamentally limited, the points at which task difficulty or complexity incurs performance costs are malleable with practice. For example, practicing component tasks reduces the response slowing that is typically induced as a consequence of attempting to complete the same tasks concurrently (multitasking) (Telford 1931; Ruthruff, Johnston, and Van Selst 2001; Strobach and Torsten 2017).
These limitations are currently attributed to competition for representation in a frontal-parietal network (Watanabe and Funahashi 2014, 2018; Garner and Dux 2015; Marti, King, and Dehaene 2015), in which the constituent neurons are assumed to adapt their response properties in order to represent the contents of the current cognitive episode (Duncan 2010, 2013; Woolgar et al. 2011). Despite recent advances, our understanding of the network dynamics that drive multitasking costs, and of the influence of practice, remains limited. Furthermore, although recent work has focused on understanding cortical contributions to multitasking limitations, multiple theoretical models implicate striatal-cortical circuits as important neurophysiological substrates for the performance of single sensorimotor decisions (Caballero, Humphries, and Gurney 2018; Bornstein and Daw 2011; Joel, Niv, and Ruppin 2002), the formation of stimulus-response representations in frontal-parietal cortex (Hélie, Ell, and Ashby 2015; Ashby, Turner, and Horvitz 2010), and the performance of both effortful and habitual sensorimotor tasks (Yin and Knowlton 2006; Graybiel and Grafton 2015; Jahanshahi et al. 2015). This suggests that a complete account of cognitive limitations and their practice-induced attenuation also requires interrogation of the contribution of striatal-cortical circuits. We seek to address these gaps by investigating how multitasking and practice influence network dynamics between striatal and cortical regions previously implicated in the cognitive limitations that give rise to multitasking costs (Garner and Dux 2015). We previously observed that improvements in the decodability of component tasks in two regions of the frontal-parietal network, the pre-supplementary motor area (pre-SMA/SMA) and the intraparietal sulcus (IPS), and in one region of the striatum (the putamen), predicted practice-induced multitasking improvements (Garner and Dux 2015).
This implies that practice may not divert performance away from the frontal-parietal system, as had been previously assumed (Chein and Schneider 2012; Yin and Knowlton 2006; Kelly and Garavan 2005; Petersen et al. 1998), but rather may alleviate multitasking costs by reducing competition for representation within the same system. Moreover, our finding that the putamen showed changes in task decodability that predicted behavioural improvements comparable to those observed for pre-SMA and IPS implies that, rather than stemming from overload of an entirely cortical network (Dux et al. 2006; Marti, King, and Dehaene 2015; Marois and Ivanoff 2005), multitasking costs arise from limitations within a distributed striatal-cortical system. This raises the question of how interactions between these brain regions give rise to multitasking costs, and how these can be mitigated with practice: Do multitasking costs reflect over-taxation of striatal-cortical circuits? Or are they a consequence of competition for representation between cortical areas? Alternatively, do multitasking costs stem from limitations in both striatal-cortical and corticocortical connections? Does practice alleviate multitasking costs by modulating all the interregional couplings that give rise to multitasking behaviour, or by selectively increasing or reducing couplings between specific regions? Our aim was to arbitrate between these possibilities by applying dynamic causal modelling (DCM, Friston, Harrison, and Penny 2003) to an fMRI dataset (N=100) collected while participants performed a multitasking paradigm before and after practice on the same paradigm (N=50) or on an active control task (N=50) (Garner and Dux 2015). We sought to first characterise the modulatory influence of multitasking on the network dynamics between the pre-SMA, IPS and putamen, and then to understand how practice modulated these network dynamics to drive multitasking performance improvements.
Results

As all results unrelated to the dynamic causal modelling analysis are described in detail in Garner and Dux (2015), we recap the relevant findings here. Participants completed a multitasking paradigm (Figure 1a) while being scanned with functional magnetic resonance imaging (fMRI), in a slow event-related design. The paradigm comprised both single- and multi-task trials. On single-task trials, participants made a 2-alternative discrimination between either one of two equiprobable shapes (visual-manual task), or one of two equiprobable sounds (auditory-manual task), and were instructed to make the correct button-press as quickly and as accurately as possible. On multitask trials, the shape and sound stimuli were presented simultaneously, and participants were required to make both discriminations (visual-manual and auditory-manual) as quickly and as accurately as possible. Between the pre- and post-practice scanning sessions, participants were randomly allocated to a practice group or an active-control group (also referred to as the control group). The practice group performed the multitask paradigm over multiple days whereas the control group practiced a visual-search task (Figure 1b). For both groups, participants were adaptively rewarded for maintaining accuracy while reducing response time (see methods section for details). Our key behavioural measure of multitasking costs was the difference in response time (RT) between the single- and multi-task conditions.
Performing the component tasks as a multitask increases RT for both tasks, relative to when each is performed alone as a single task. The effectiveness of the paradigm for assessing multitasking was confirmed by the clear multitasking costs observed in the pre-practice session (main effect of condition, single- vs multi-task: F(1, 98) = 688.74, MSE = .026, p < .0001, ηp² = .88, see extended Figure 1a). Critically, the practice group showed a larger reduction in multitasking costs between the pre- and post-practice sessions than the control group (significant session (pre vs post) x condition (single-task vs multitask) x group (practice vs control) interaction: F(1, 98) = 31.12, MSE = .01, p < .001, ηp² = .24, Figure 1c). Specifically, the practice group showed a mean reduction (pre-cost minus post-cost) of 293 ms (95% CI: [228, 358]) whereas the control group showed a mean reduction of 79 ms (95% CI: [47, 112]). These findings did not appear to be due to a speed/accuracy trade-off, as the group x session x condition interaction on the accuracy data was not statistically significant (p = .06). We sought to identify the brain regions that could be part of the network that 1) showed increased activity for both single tasks, as would be expected of brain areas containing neurons that adapt to represent the current cognitive episode, 2) showed sensitivity to multitasking demands (i.e. increased activity for multitask relative to single-task trials), and 3) showed specificity in response to the training regimen, i.e. showed a group x session interaction (see Garner and Dux 2015 for details). These criteria isolated the pre-SMA/SMA, the left and right intraparietal sulcus (IPS) and the left and right putamen.
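As a concrete illustration of the behavioural cost measure defined above, the sketch below computes a multitasking cost from per-trial RTs. The function and the toy numbers are ours, for illustration only; they are not the study's data or analysis code.

```python
def multitask_cost(rt_multi, rt_single):
    """Multitasking cost: mean multitask RT minus mean single-task RT."""
    return sum(rt_multi) / len(rt_multi) - sum(rt_single) / len(rt_single)

# Toy per-trial RTs in seconds (illustrative values, not study data)
single_rts = [0.61, 0.58, 0.64, 0.57]
multi_rts = [1.02, 0.95, 1.10, 0.93]

# A positive cost indicates response slowing under multitask conditions
cost = multitask_cost(multi_rts, single_rts)
```

Practice-related improvement is then simply the pre-practice cost minus the post-practice cost, which is the quantity reported above (293 ms for the practice group, 79 ms for controls).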
For the current study, in the interest of parsimony over the number of areas (nodes) in our models, and given we had no strong reason to assume lateralized differences in the function of the underlying network, we opted to include only the pre-SMA/SMA and the remaining left-hemisphere regions as our ROIs (Figure 1d).

Figure 1: Task, protocol, behaviour and regions of interest. a) Multitasking paradigm: the task comprised two single-task (S) conditions and one multitask (M) condition. Each S was a 2-alternative discrimination between one of two equiprobable stimuli; the stimuli could be either shapes (visual-manual task) or sounds (auditory-manual task). On M trials, participants were required to complete both Ss (visual-manual and auditory-manual). On all trials, participants were asked to perform the task(s) as quickly and as accurately as possible. b) Protocol: at both the pre- and post-practice sessions, all participants completed the multitasking paradigm while structural and functional MRI images were acquired. Participants were then allocated to either the practice or the active-control group. The practice group subsequently performed the multitask paradigm over three sessions, whereas the control group practiced a visual-search task with three levels of difficulty, under a comparable reinforcement regimen. c) Multitasking costs to response time [mean(Ms) − mean(Ss)] for the practice and control groups, at the pre- and post-practice sessions, presented as individual data points, boxplots and densities. d) Regions of interest identified by our previous study (Garner and Dux, 2015): the Supplementary Motor Area (blue), the Intraparietal Sulcus (red), and the Putamen (yellow). Extended Figure 1a: dot, box and density plots for mean response times (RT) for the single (S) and multitasks (M) for the practice and control groups at the pre-training session.
Network dynamics underlying multitasking

We sought to identify how multitasking modulates connectivity between the IPS, pre-SMA/SMA and the putamen (although our anatomically defined mask included all of SMA, 78% of participants showed peak activity in pre-SMA, defined as coordinates rostral to the vertical commissure anterior (Kim et al. 2010), so we hereafter refer to the region as pre-SMA; the within-group percentages were comparable: practice = 83%, control = 73%). To achieve this, we first applied DCM to construct hypothetical networks that could underlie the observed data. These models were then grouped into families on the basis of characteristics that addressed our key questions. This allowed us to conduct random-effects family-level inference (Penny et al. 2010) to determine which model characteristics were most likely, given the data. Specifically, we asked: 1) which region drives inputs to the multitasking network (putamen or IPS family, Figure 2a)? and 2) does multitasking modulate striatal-cortical couplings, corticocortical couplings, or both (Figure 2b)? Lastly, we conducted Bayesian Model Averaging (BMA) within the winning family to make inferences over which specific parameters were modulated by multitasking (i.e. is the posterior estimate for each connection reliably different from 0?). The model space (Extended Figure 2a), which underpins our theoretically motivated hypothetical networks, contained bidirectional endogenous connections between all three regions, owing to evidence for anatomical connections between the putamen, IPS and pre-SMA (Cavada and Goldman-Rakic 1989; Luppino et al. 1993; Haber 2016; Alexander, DeLong, and Strick 1986; Wise et al. 1997), as well as endogenous self-connections. As we had no a priori reason to exclude a modulatory influence of multitasking on any specific direction of coupling, we considered all 63 possible combinations of modulation (see extended Figure 2b).
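The size of this modulatory model space follows directly from the six directed couplings among the three bidirectionally connected regions: every non-empty subset of couplings can be modulated, giving 2⁶ − 1 = 63 candidate models. A minimal sketch (the coupling labels are ours, for illustration):

```python
from itertools import combinations

# Six directed couplings among the three bidirectionally connected regions
couplings = ["IPS->Put", "Put->IPS", "IPS->preSMA",
             "preSMA->IPS", "Put->preSMA", "preSMA->Put"]

# One candidate model per non-empty subset of modulated couplings
models = [frozenset(c)
          for r in range(1, len(couplings) + 1)
          for c in combinations(couplings, r)]

print(len(models))  # 63 = 2**6 - 1
```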
First, we asked which region in the network received driving inputs that are modulated by multitasking demands. As the IPS shows sensitivity to sensory inputs across modalities (Vossel, Geng, and Fink 2014; Anderson et al. 2010; Grefkes and Fink 2005), and as the striatum receives sensory inputs from both the thalamus (Alloway et al. 2017) and sensory cortices (Saint-Cyr, Ungerleider, and Desimone 1990; Guo et al. 2018; Reig and Silberberg 2014), both IPS and putamen were considered as possible candidates. We therefore fit each of the 63 modulatory models twice, once allowing driving inputs to enter via the putamen and once via the IPS. Family-level comparison favoured the putamen-input family (Figure 2a) relative to the IPS family. Therefore, the data are best explained by models where multitasking modulates driving inputs to the putamen. The winning putamen-input family was retained for the next stage of family-level comparisons. We then asked whether the data were better explained by models that allowed multitasking to modulate striatal-cortical connections, corticocortical connections, or all of them (Figure 2b). We therefore grouped the models from the putamen-input family into these three families; family-level comparison favoured the family in which multitasking modulates both striatal-cortical and corticocortical couplings. Having determined that multitasking is indeed supported by both striatal-cortical and corticocortical couplings, we next sought to infer which specific parameters were modulated by multitasking; i.e. do we have evidence for bidirectional endogenous couplings between all regions, or a subset of endogenous couplings? With regard to multitasking-related modulations, are all couplings modulated, or a subset of striatal-cortical and corticocortical connections? To answer this, we conducted BMA over the all family to obtain the posteriors over each of the endogenous (A) and modulatory (B) coupling parameters. We identified A parameters to retain by testing which posteriors showed a probability of difference (Pp) from zero greater than .99 (applying the Sidak adjustment for multiple comparisons).
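The Sidak-style retention rule can be sketched as follows. The coupling labels and Pp values mirror those reported in the text; a family-wise alpha of .05 over six comparisons (our assumption, for illustration) yields the "greater than .99" criterion:

```python
def sidak_threshold(alpha, m):
    # Per-comparison criterion: retain a parameter only if its posterior
    # probability of differing from zero exceeds (1 - alpha)^(1/m)
    return (1.0 - alpha) ** (1.0 / m)

# Posterior probabilities (Pp) for the six endogenous (A) couplings,
# as reported in the text
pp = {
    "IPS->Put": 1.00, "Put->IPS": 1.00, "Put->preSMA": 1.00,
    "preSMA->IPS": 1.00, "IPS->preSMA": 0.98, "preSMA->Put": 0.66,
}

thr = sidak_threshold(0.05, len(pp))  # ~.9915, i.e. "greater than .99"
retained = sorted(k for k, v in pp.items() if v > thr)
```

Under this rule the four couplings with Pp = 1 survive, while IPS to pre-SMA (.98) and pre-SMA to Put (.66) fall below the adjusted criterion, matching the pattern reported below.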
As seen in extended Figure 2c, we retained endogenous couplings from IPS to Put, Put to IPS, Put to pre-SMA, and pre-SMA to IPS (all Pps = 1) and rejected endogenous couplings from IPS to pre-SMA (Pp = .98) and pre-SMA to Put (Pp = .66). We applied the same test to the B parameters and found evidence for a modulatory influence of multitasking on the Put to IPS coupling (Pp = 1). We rejected a modulatory influence of multitasking on the remaining parameters (all Pps <= .98), although we are more tentative about conclusions regarding the Put to pre-SMA coupling, as this is the only one of the rejected set that showed reasonably strong evidence for a modulatory influence (Pp = .96), with posterior distributional characteristics similar to those of the retained modulatory parameter (i.e. a small standard deviation matching that observed for the Put to IPS coupling (σ = .06 for both), compared with the rejected couplings (all σ > .08)), and with strong evidence for the corresponding endogenous parameter (Pp = 1, Figure 2c). It is interesting that neither the probability of modulation of the pre-SMA to IPS (Pp = .82) nor the IPS to pre-SMA parameter (Pp = .88) survived correction for inclusion in the model, even though there was strong evidence for an endogenous connection between pre-SMA and IPS, and given that the all family was preferred. It is possible that while these connections are modulated, the strength of modulation varies more across individuals than for the connections for which strong evidence was obtained. To sum up (Figure 2d), the influence of multitasking is best explained by a network in which information is propagated, via Put, to the IPS and the pre-SMA. Information is shared back to the Put via IPS, and from pre-SMA to IPS. Multitasking demands specifically increase the rate of information transfer from Put to IPS, and possibly from Put to pre-SMA.
Hence, we can reject the idea that multitasking costs are solely due to limitations in a cortical network; rather, they reflect the taxation of information sharing between the Put and the relevant cortical areas, namely the IPS and the pre-SMA.

The influence of practice on the network underpinning multitasking

Next, we sought to understand how practice influences the network that underpins multitasking on both single- and multi-task trials, for both the practice and control groups. For example, it may be that practice influences all the endogenous couplings in the network, or only a subset of them. Furthermore, if practice modulated only a subset of couplings, would it act only on striatal-cortical couplings, on corticocortical couplings, or on both? By comparing the practice group to the control group, we sought to identify which modulations are due to engagement with a multitasking regimen, and which are due to repeating the task at the post-session (and potentially to engagement with a practice regimen that did not include multitasking). To address these questions, we constructed DCMs that allowed practice (i.e. a pre/post session factor) to modulate all possible combinations of couplings in the multitasking network defined above (4 possible connections, hence 15 models, M = 1, ..., 15; see extended Figure 3a). We then fit these DCMs separately to the single-task data and to the multitask data, concatenated across pre- and post-sessions. As above, we decided to leverage information across models (proportional to the probability of each model, see extended Figure 3b) and conducted random-effects BMA across the model space to estimate posteriors over the parameters. This method can be more robust when the model space includes larger numbers of models that share characteristics, as it helps overcome dependence on the comparison set by identifying the likely features that are common across models (Penny et al. 2010).
We compared the resulting posteriors over parameters to determine, for each group, those which deviate reliably from zero for single-task trials and for multitask trials, and whether they differ between groups (applying the Sidak correction for each set of comparisons). The results from the analysis of posteriors over parameters can be seen in Figure 3. All practice-related modulations influenced striatal-cortical couplings. For single-task trials, in the practice group, the practice factor modulated coupling from IPS to Put (Pp = .99), which was also larger than that observed for the control group (Pp(practice > control) = .99). No other modulatory couplings met the criterion for significance (all Pps <= .96). For the control group, the practice factor modulated both Put to IPS (Pp = .97) and Put to pre-SMA (Pp = 1) couplings. In both cases, the modulatory influence of practice on the coupling was larger for the control group than for the practice group (Pps(control > practice) = .99 and 1). The remaining coupling parameters did not reach statistical significance (Pps < .8). Thus practice on single-task trials specifically modulates couplings from cortex to striatum (IPS to Put), whereas repeating the task at the post-session (control group) modulates couplings projecting from striatum to cortex. For multitask trials, both groups showed practice-related increases in the modulation of the putamen to pre-SMA coupling (practice group Pp = .99, control Pp = 1). Perhaps counterintuitively, these were larger for the control group than for the practice group (Pp(control > practice) = 1), although we dissect this relationship further in the analysis below. The remaining modulatory parameters and group differences did not reach statistical significance (all Pps <= .93). Therefore, changes in putamen to pre-SMA connectivity underpin reductions in multitasking costs. Interestingly, practice does not modulate corticocortical connectivity to improve multitasking performance.
Figure 3: The modulatory influence of practice on the multitasking network. a) Posteriors over parameters were estimated for the practice (P, in orange) and control (C, in violet) groups for single-task trials (S) and for multitask trials (M). Posteriors that deviated reliably from 0 (>0) are in darker shades, whereas those that did not significantly deviate from 0 are in lighter shades. Stars indicate statistically significant group differences. b) Proposed influences of practice on modulatory coupling within the multitasking network for single-task (S) and multitask (M) trials, for the practice and control groups. For multitask trials, the arrows are shaded to indicate the strength of the effect (i.e. the darker the arrow, the larger the modulation of that parameter). Extended Figure 3a: model space considered for the modulatory influence of practice (models M = 1, ..., 15). Extended Figure 3b: expected and exceedance model probabilities for single-task (top 4 panels) and multitask (bottom 4 panels) data for the practice (left column) and control (right column) groups. Models are ordered 1-15 on the x-axis in the same order as presented in Extended Figure 3a.

Connectivity decreases from Putamen to pre-SMA correlate with lower multitasking costs

Having ascertained that practice modulates striatal-cortical coupling for single tasks and for multitasks, we next sought to understand whether practice-related modulations of striatal-cortical coupling were interdependent with behavioural performance improvements. To achieve this, we first calculated the percent reduction of single-task RT (RT_S) from pre- to post-practice:

ΔRT_S = 100 x (RT_S,post / RT_S,pre),

where RT_S is the single-task (ST) RT averaged over the visual-manual (VM) and auditory-manual (AM) tasks. The lower the score, the larger the reduction in RT at post, relative to pre.
We then performed Spearman's correlations between these scores and the participant-specific B parameters from the connections shown to be modulated by practice on single-task trials (IPS to Put, Put to IPS, Put to pre-SMA), applying the Sidak adjustment for multiple comparisons. There was a statistically significant relationship between the Put to pre-SMA modulation and the percent reduction of RT on single-task trials, ΔRT_S (r_s(93) = .25, p = .015). Interestingly, examination of the correlation data (Figure 4) shows that the individuals with the largest decreases in Put to pre-SMA coupling showed the largest percent reductions in RT performance.

Figure 4: Correlation between percent reduction in single-task (S) response time (RT), relative to pre-practice, and practice-related modulations of Putamen to pre-SMA coupling. Individuals who showed the largest decreases in Put to pre-SMA coupling showed the largest percent reductions in RT performance.

This relationship also held when we tested for the interdependence between practice-related reductions in multitasking costs and modulations of Put to pre-SMA coupling. The percent reduction in multitask (M) costs, ΔM_Cost, was calculated as:

ΔM_Cost = 100 x ((RT_M,post − RT_S,post) / (RT_M,pre − RT_S,pre)),

with RT_S as defined in Eq. (2). Similarly to single-task performance, those who showed decreased modulations of Put to pre-SMA coupling also showed the lowest multitasking costs at post, relative to pre-practice (r_s(93) = .26, p = .01, see Figure 5). Thus, improvements in the speed of single-task and multitask performance are both related to decreased rates of information sharing from Put to pre-SMA.

Figure 5: Correlation between percent reduction in multitask costs (M), relative to pre-practice, and practice-related modulations of Putamen to pre-SMA coupling. Improvements in the speed of single-task and multitask performance are related to decreased rates of information sharing from Put to pre-SMA.
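A hedged sketch of these behavioural measures and the rank correlation, using invented data: the variable names and numbers are ours, and Spearman's rho is computed here as a Pearson correlation on ranks (assuming no ties, for simplicity), not via the study's analysis software.

```python
def ranks(xs):
    # Rank values 0..n-1 (assumes no ties, for simplicity)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank-transformed data
    return pearson(ranks(x), ranks(y))

# Toy per-participant mean RTs in seconds (illustrative, not study data)
rt_s_pre, rt_s_post = [0.70, 0.65, 0.80], [0.50, 0.60, 0.55]
rt_m_pre, rt_m_post = [1.10, 1.00, 1.30], [0.75, 0.90, 0.85]

# Percent of the pre-practice level remaining at post
# (lower score = larger reduction)
delta_rt_s = [100 * post / pre for pre, post in zip(rt_s_pre, rt_s_post)]
delta_m_cost = [100 * (mp - sp) / (m - s)
                for m, s, mp, sp in zip(rt_m_pre, rt_s_pre,
                                        rt_m_post, rt_s_post)]

# Toy practice-related B-parameter modulations (Put -> pre-SMA)
b_put_presma = [-0.10, 0.05, -0.02]
rho = spearman(delta_m_cost, b_put_presma)
```

With the study's N, a positive rho of this kind is what Figures 4 and 5 depict: smaller (more negative) Put to pre-SMA modulations accompanying larger percent reductions in RT and cost.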
Discussion

We sought to understand how multitasking demands modulate the underlying network dynamics, and how practice changes this network to reduce multitasking costs. We asked whether multitasking demands modulate connectivity within a frontal-parietal cortical network, as has been previously assumed (Dux et al. 2006; Marti, King, and Dehaene 2015; Marois and Ivanoff 2005; Sigman and Dehaene 2008; Erickson et al. 2007; Hesselmann, Flandin, and Dehaene 2011; Tombu et al. 2011; Jiang 2004; Watanabe and Funahashi 2014), or whether multitasking modulates the striatal-cortical connections recently implicated as critical in single sensorimotor decision-making tasks (Caballero, Humphries, and Gurney 2018; Yartsev et al. 2018; Badre and Nee 2018). Specifically, having previously identified that practice-related improvements correlate with activity changes in the pre-SMA, the IPS and the putamen (Garner and Dux 2015), we applied DCM to ask how multitasking modulates connectivity between these regions. We show that, under multitask relative to single-task conditions, the network is driven via inputs to the putamen, and that multitasking specifically modulates striatal-cortical connectivity, namely putamen to IPS and, potentially, putamen to pre-SMA. We did not observe evidence that connections between pre-SMA and IPS were modulated by multitasking demands. Therefore, multitasking costs appear to reflect capacity limitations of a different network than has been implied by previous studies focusing on cortical (frontal-parietal) brain regions only. Rather, our results accord with models of single-task decision-making that implicate a distributed network that also includes the striatum, in addition to cortical areas. Our results build upon this work by specifically showing that multitasking performance is supported by increased rates of information sharing from striatal to cortical areas.
These results strongly imply that limitations in striatal-cortical information sharing underpin multitasking limitations. We then asked how practice modulated the dynamics of the proposed multitasking network. Comparable to the finding above, we observed that practice influenced striatal-cortical connectivity. For single tasks, the control group showed increased couplings from the putamen to IPS, and from the putamen to the pre-SMA. In contrast, the practice group showed increased coupling from the IPS to the putamen. For both groups, the practice factor modulated putamen to pre-SMA connectivity on multitasking trials, but more so for the control group. A subsequent correlation analysis showed that those who had decreased coupling between these regions showed the largest performance improvements. Again, these results show that multitasking limitations are better characterised as stemming from constraints on information transfer from striatal to cortical regions, rather than from limitations within a frontal-parietal network. We now consider the implications of our key findings in turn.

Network dynamics limits to cognitive performance in multitasking - implications

We found that during multitasking, the underlying network is driven by inputs to the putamen, and that while information is propagated between cortical and subcortical areas, multitasking specifically modulates coupling from the putamen to IPS and, more tentatively, from putamen to pre-SMA. The IPS is assumed to contribute to the representation of stimulus-response mappings (Pho et al. 2018; Goard et al. 2016; Bunge et al. 2002), and the pre-SMA is assumed to arbitrate between competing representations of action plans (Nachev et al. 2007). Thus both regions potentially constitute key nodes in the cortical representation of a stimulus-response conjunction.
We propose that multitasking limitations stem from constraints on the rate at which the striatum can, on the basis of incoming sensory information, sufficiently excite the appropriate cortical representations of stimulus-response mappings to reach the threshold for action. This leads to the intriguing possibility that previous observations that cognitive control operations are underpinned by a frontal-parietal network (Duncan 2013; Watanabe and Funahashi 2014; Dux et al. 2009; Cole et al. 2013) may actually have been observing the cortical response to striatally mediated excitatory signals. In fact, our findings are in line with a recent application of meta-analytic connectivity modelling showing that when frontal-parietal regions associated with cognitive control operations are used as seed regions, the left and right putamen are likely to show significant co-activations across a range of sensorimotor and perceptual tasks (Camilleri et al. 2018). Taken together, these data suggest that the striatum, or at least the putamen, should be included in the set of brain regions that contribute to cognitive control, at least during sensorimotor decision-making. It was perhaps surprising that the network did not receive driving inputs via IPS, given its association with the dorsal attention system (Corbetta and Shulman 2002; Vossel, Geng, and Fink 2014; Anderson et al. 2010). Furthermore, although pre-SMA showed endogenous coupling with IPS during multitasking, we did not find evidence for reciprocal information transfer between these regions. Additionally, neither multitasking nor practice was shown to modulate coupling between pre-SMA and IPS. This suggests that although information transfer occurs from pre-SMA to IPS during multitasking, coupling between these nodes is not a bottleneck of information processing that gives rise to multitasking costs, and thus may also not be critical for sensorimotor translation.
We think our current findings are in keeping with recent demonstrations that while monkey parietal activity correlates with sensorimotor decision-making, decision-making is not impaired by inactivation of the same neurons (Katz et al. 2016). In contrast, ablation of the rat striatum does impair sensorimotor decision-making (Yartsev et al. 2018). Pre-SMA has recently been proposed to contribute to the integration of sequential elements into a higher-order representation (Cona and Semenza 2017), presumably representing the joint probability of the current and upcoming elements. We speculate that endogenous pre-SMA to IPS coupling reflects a biasing of IPS towards likely upcoming choice representations, presumably reinforced by corresponding sensory information transmitted from the putamen. An interesting question that follows from this interpretation is what this coupling would be for, if not to perform the current sensorimotor translation. Since recent models of parietal activity indicate timescales of self-excitation that extend beyond the duration of the current sensorimotor translation (Park et al. 2014), one possibility is that the IPS aggregates information over time to guide behaviour in novel but similar contexts. Another possibility, implicated by our findings regarding the influence of practice (see below), is that the IPS aggregates information to support striatal activation of motor plans in the context of well-learned behaviours.

Implications of practice-induced plasticity in remediating multitasking costs

An advantage of the current work is that we can both model the dynamics of the network that underpins multitasking limitations, and also identify which couplings are modulated by practice, for both single-tasks and for multitasks.
By comparing this to modulations observed in the control group, we can make inroads into identifying which couplings not only correspond to multitasking limitations, but also those that may be critical in determining the extent of their presence and remediation due to practice. We observed that for single-task trials, the practice group showed practice-related increases in IPS to putamen coupling, whereas the control group, who were repeating the task after having practiced a regimen not expected to improve multitasking (Garner, Tombu, and Dux 2014; Verghese et al. 2017; Garner, Lynch, and Dux 2016), showed increases in putamen to IPS coupling, and putamen to pre-SMA coupling. We interpret the control group as showing modulations that occur as a consequence of being at an earlier stage of practice (i.e. repeating the task for a second time) than the practice group (who were repeating the task for the fifth time). Viewed this way, the current results reflect that initially, practice of single sensorimotor tasks causes an increase in striatal excitation of the cortex, whereas in the longer term, practice causes cortical areas that encode the stimulus-response mapping (specifically IPS) to exert a stronger excitatory influence on the striatum (the putamen more specifically). This is in keeping with recent theories of automatic behaviours positing that the striatum acts to reinforce cortical representations early in learning, and that automatised behaviour involves a shift from striatal to cortical control (Hélie, Ell, and Ashby 2015; Ashby, Turner, and Horvitz 2010). However, our results speak against the suggestion from these theories that automatised behaviour is mediated fully by frontal-parietal (i.e. corticocortical) connections (Hélie, Ell, and Ashby 2015), as we observed no practice-related modulations of corticocortical connectivity.
For multitask trials, we observe that being at an earlier stage of practice causes increases in the rate of information transfer from putamen to pre-SMA, and that later stages are associated with a smaller increase of information transfer between these regions. Furthermore, those with the lowest rates of information transfer showed the greatest reductions in both multitask performance costs and single-task response times. We believe these results reflect a trajectory of practice-induced changes in putamen to pre-SMA coupling: practice reduces multitasking costs in concert with an initial increase, and then a decrease, in the rate of information transmission from putamen to pre-SMA. Rodent studies consistently demonstrate that when a task is novel, firing in the dorsolateral striatum corresponds to the full duration of a trial, whereas as the behaviour becomes habitual, firing patterns transition to coincide with the beginning and end of chunked action sequences (Jin and Costa 2010; Smith and Graybiel 2013; Thorn et al. 2010; Barnes et al. 2005; Jog et al. 1999). Given this, we speculate that our current results comparably reflect that those individuals with the smallest rate of striatal firing (i.e. those that are transitioning towards bracketing activity) show the greatest benefits for multitasking performance. As participants from the practice group showed increases of IPS to putamen coupling on single-tasks, it may be that the shift towards striatal bracketing of task performance is mediated by increased excitation from cortical stimulus-response representations that have been cached over long-term learning. Critically, our results clearly show that multitasking costs can be alleviated by changing the rate of information transfer from the striatum to the pre-SMA.
Interestingly, the current pattern of results, where the practice-induced plasticity on multitask trials is the same for both the practice and the control groups, and practice-induced plasticity on single-task trials differs between groups, suggests that practice does not remediate multitasking costs by improving something beyond the component tasks, such as task-coordination or attention allocation (Strobach and Schubert 2017). Rather, practice changes the circuitry underlying single-task processing, which benefits performance under multitask conditions by reducing information sharing along pathways assumed to be engaged in the excitation of action plans (Forstmann et al. 2008; Haber 2016; Badre and Nee 2018).

Further considerations

It is worthwhile to consider why previous fMRI investigations into the neural sources of multitasking limitations did not implicate a role for the striatum. As far as we can observe, our sample size, and thus our statistical power to detect smaller effects, is substantially larger than previous efforts (our N = 100; previous work N range: 9-35 (Stelzel et al. 2006; Borst et al. 2010; Tombu et al. 2011; Marois et al. 2006; Jiang 2004; Szameitat et al. 2002; Sigman and Dehaene 2008; Hesselmann, Flandin, and Dehaene 2011; Jiang, Saxe, and Kanwisher 2004; Szameitat et al. 2006; Nijboer et al. 2014; Erickson et al. 2007; Dux et al. 2006, 2009)). One fMRI study has reported increased striatal activity when there is a higher probability of short temporal overlap between tasks (Yildiz and Beste 2015). Moreover, meta-analytic efforts into the connectivity of the frontal-parietal network during cognitive control tasks implicate the putamen (Camilleri et al. 2018). Lesions of the striatum, and not the cerebellum, have been shown to correspond to impaired multitasking behaviours (Thoma et al.
2008), and intracranial EEG has revealed that fluctuations in oscillatory ventral striatal activity predict performance on the attentional blink task (Slagter et al. 2017); a paradigm which is assumed to share overlapping limitations with those revealed by sensorimotor multitasks (Garner, Tombu, and Dux 2014; Marti, King, and Dehaene 2015; Tombu et al. 2011; Zylberberg et al. 2010; Jolicoeur 1998; Arnell and Duncan 2002). Therefore, our findings converge with more recent efforts that do indeed implicate a role for the striatum in cognitive control. We extend these findings to demonstrate how the striatum and cortex interact to both produce and overcome multitasking limitations. Of course, we have only examined network dynamics in a few areas of a wider system that correlates with multitasking (Garner and Dux 2015), and we are unable to know whether we observe an interaction in these specific regions because the interaction exists nowhere else, or because the interactions are more readily observable between these regions. For example, it could be that cognitive control is mediated by multiple cortical-striatal loops (Badre and Nee 2018; Haber 2003, 2016). Future work should examine the striatal-cortical interactions that underpin other, dissociable forms of cognitive control, such as response-inhibition (Bender et al. 2016), to determine whether the pattern observed here is specific to multitasking, or is generalisable to other cognitive control contexts. Additionally, for the current study, we utilised simple sensorimotor tasks. The networks underpinning the translation of more complex sensorimotor mappings may well invoke more anterior regions than we observed here (Dux et al. 2006; Badre and Nee 2018; Crittenden and Duncan 2014; Woolgar et al. 2011). It remains to be determined whether the current observations generalise to those scenarios.
Conclusions

We probed the network dynamics underlying multitasking limitations, and asked whether they stem from information processing limitations in the previously hypothesised frontal-parietal network (Watanabe and Funahashi 2014, 2018; Garner and Dux 2015; Marti, King, and Dehaene 2015), or whether they are better characterised as stemming from limits in a striatal-cortical network, as suggested by models probing the neural mechanisms underlying single sensorimotor task performance (Caballero, Humphries, and Gurney 2018; Bornstein and Daw 2011; Joel, Niv, and Ruppin 2002). Using an implementation of DCM, we found evidence that multitasking demands drive increased rates of information sharing between the putamen and cortical sites, suggesting that performance decrements are due to a limit in the rate at which the putamen can excite appropriate cortical stimulus-response representations. We also observed that these limits are attenuated when practice moderates the rate of information transfer between the striatum and cortical sites assumed to encode representations of action plans. We also find evidence that increased rates of information transfer from cortical nodes to the striatum, gained from practice over multiple days, may be a key mechanism for moderating the degree of information transfer required for the striatum to sufficiently excite cortical action representations under conditions of high cognitive load. These results provide clear empirical evidence that cognitive control operations are striatally mediated, and that limitations in cognitive control operations are moderated by information exchange between the putamen and pre-SMA.

Participants

The MRI scans of the participants (N = 100) previously analysed in Garner and Dux (2015) were included in the present analysis, apart from the data of 2 participants, for whom some of the scans were corrupted due to experimenter error.
The University of Queensland Human Research Ethics Committee approved the study as being within the guidelines of the National Statement on Ethical Conduct in Human Research, and all participants gave informed, written consent.

Experimental Protocols

Participants attended six experimental sessions: a familiarization session, two MRI sessions and three behavioural practice sessions. Familiarization sessions were conducted on the Friday prior to the week of participation, where participants learned the stimulus-response mappings and completed two short runs of the task. The MRI sessions were conducted to obtain pre-practice (Monday session) and post-practice (Friday session) measures. These sessions were held at the same time of day for each participant. Between the two MRI sessions, participants completed three behavioural practice sessions, where they either practiced the multitasking paradigm (practice group) or a visual-search task (control group). Participants typically completed one practice session per day, although on occasion two practice sessions were held on the same day to accommodate participants' schedules (when this occurred, the two sessions were administered with a minimum of an hour's break between them). Participants also completed an online battery of questionnaires that formed part of a different study.

Behavioural Tasks

All tasks were programmed using Matlab R2010a (Mathworks, Natick, MA) and the Psychophysics Toolbox v3.0.9 extension (23). The familiarization and behavioural training sessions were conducted with a 21-inch Sony Trinitron CRT monitor and a 2.5 GHz Macintosh Mini computer.

Multitasking Paradigm

For each trial of the multitasking paradigm, participants performed either one (single-task condition) or two (multitask condition) sensorimotor tasks. Both involved a 2-alternative discrimination (2-AD), mapping two stimuli to two responses.
For one task, participants were presented with one of two white shapes, distinguishable in terms of their smooth or spiky texture, presented on a black screen and subtending approximately 6° of visual angle. The shapes were created using digital sculpting software (Sculptris Alpha 6) and Photoshop CS6. Participants were required to make the appropriate manual button press to the presented shape, using either the first or index finger of either the left or right hand (task/hand assignment was counterbalanced across participants). For the other task, participants responded to one of two complex tones using the first or index finger of the hand that was not assigned to the shape task. The sounds were selected to be easily discriminable from one another. Across both the single-task and multitask trial types, stimuli were presented for 200 ms, and on multitask trials, were presented simultaneously.

Familiarisation Session

During the familiarization session, participants completed two runs of the experimental task. Task runs consisted of 18 trials, divided equally between the three trial types (shape single-task, sound single-task, and multitask trials). The order of trial type presentation was pseudo-randomised. The first run had a short inter-trial-interval (ITI), and the trial structure was as follows: an alerting fixation dot, subtending 0.5° of visual angle, was presented for 400 ms, followed by the stimulus/stimuli, presented for 200 ms. Subsequently a smaller fixation dot, subtending 0.25° of visual angle, was presented for 1800 ms, during which participants were required to respond. Participants were instructed to respond as accurately and quickly as possible to all tasks. For the familiarization session only, performance feedback was then presented until the participant hit the spacebar in order to continue the task.
For example, if the participant had completed the shape task correctly, they were presented with the message 'You got the shape task right'. If they performed the task incorrectly, the message 'Oh no! You got the shape task wrong' was displayed. On multitask trials, feedback was presented for both tasks. If participants failed to achieve at least 5/6 trials correct for each trial type, they repeated the run until this level of accuracy was attained. The second run familiarized participants with the timing of the paradigm to be used during the MRI sessions: a slow event-related design with a long ITI. The alerting fixation was presented for 2000 ms, followed by the 200 ms stimulus presentation, 1800 ms response period and feedback. Subsequently an ITI, during which the smaller fixation dot remained on screen, was presented for 12000 ms.

MRI Sessions

Participants completed six long-ITI runs in the scanner, with 18 trials per run (6 of each trial type, pseudo-randomly ordered for each run), for a total of 108 trials for the session. Trial presentation was identical to the long-ITI run presented at the familiarization session, except that feedback was not presented at the end of each trial.

Practice Sessions

All participants were informed that they were participating in a study examining how practice improves attention, with the intention that both the practice and control groups would expect their practice regimen to improve performance. The first practice session began with an overview of the goals of the practice regimen; participants were informed that they were required to decrease their response time (RT), while maintaining a high level of accuracy. The second and third sessions began with visual feedback in the form of a line graph, plotting RT performance from the previous practice sessions. For each session, participants completed 56 blocks of 18 trials, for a total of 1008 trials per session, resulting in 3024 practice trials overall.
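For reference, the two trial structures described above can be summarised as event lists (durations are taken from the text; the event labels are ours):

```python
# Trial structures described in the text (durations in ms; labels are ours).
SHORT_ITI_TRIAL = [
    ("alerting_fixation", 400),      # 0.5 deg fixation dot
    ("stimulus", 200),               # shape, sound, or both (multitask)
    ("response_window", 1800),       # 0.25 deg fixation dot on screen
]
LONG_ITI_TRIAL = [
    ("alerting_fixation", 2000),
    ("stimulus", 200),
    ("response_window", 1800),
    ("inter_trial_interval", 12000),
]

def trial_duration_ms(events):
    """Total trial time, excluding the self-paced feedback screen."""
    return sum(duration for _, duration in events)

print(trial_duration_ms(SHORT_ITI_TRIAL))  # 2400
print(trial_duration_ms(LONG_ITI_TRIAL))   # 16000
```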
To ensure that participants retained familiarity with the timings of the task as presented in the scanner, between 2 and 4 of the blocks in each session used long-ITI timings. The practice group performed the multitasking paradigm, as described above (see Familiarisation Session), except that performance feedback was not displayed after each trial. Over the course of practice, participants from this group performed 1008 trials of each trial type (shape single-task, sound single-task, multitask). Participants in the control group went through procedures identical to the practice group, except that they completed a visual search task instead of the multitasking paradigm. Participants searched for a 'T' target amongst 7, 11, or 15 rotated 'L's (rotated to either 90° or 270°). Participants indicated whether the target was oriented to 90° or 270°, using the first two fingers of their left or right hand (depending upon handedness). Over the course of the three practice sessions, participants completed 1008 trials for each set size. For both groups, performance feedback showed mean RT (collapsed across the two single-tasks for the practice group, and over the three set-sizes for the control group) and accuracy for the previous 8 blocks, total points scored, and the RT target for the subsequent 8 blocks. If participants met their RT target for over 90% of trials, and achieved greater than 90% accuracy, a new RT target was calculated by taking the 75th percentile of response times recorded over the previous 8 blocks, and 2 points were awarded. If participants did not beat their RT target for over 90% of trials, but did maintain greater than 90% accuracy, 1 point was awarded.

MRI Data Acquisition

Images were acquired using a 3T Siemens Trio MRI scanner (Erlangen, Germany) housed at the Centre for interslice gap = .5 mm), providing whole brain coverage. We synchronized the stimulus presentation with the acquisition of functional volumes.
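A minimal sketch of the adaptive feedback rule just described (function names and the percentile convention are our assumptions; the study's task code is not reproduced here):

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile (one common convention; the study does not
    specify which percentile method was used)."""
    s = sorted(values)
    k = max(0, math.ceil(pct / 100 * len(s)) - 1)
    return s[k]

def update_feedback(rts, prop_target_met, accuracy, rt_target, points):
    """Staircase from the text: if >90% of trials beat the RT target at >90%
    accuracy, tighten the target to the 75th percentile of RTs over the last
    8 blocks and award 2 points; if only accuracy is maintained, award 1."""
    if prop_target_met > 0.90 and accuracy > 0.90:
        return percentile(rts, 75), points + 2
    if accuracy > 0.90:
        return rt_target, points + 1
    return rt_target, points
```

For example, with recent RTs of [0.5, 0.6, 0.7, 0.8] s and both criteria met, the new target becomes the 75th-percentile RT and 2 points are added to the running score.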
MRI Data Analysis

fMRI data were preprocessed using the SPM12 software package (Wellcome Trust Centre for Neuroimaging, London, UK; http://www.fil.ion.ucl.ac.uk/spm). Scans from each subject were corrected for slice timing differences using the middle scan as a reference, realigned using the first scan as a reference, co-registered to the T1 image, spatially normalised into MNI standard space, and smoothed with a Gaussian kernel of 8 mm full-width at half maximum.

Dynamic Causal Modelling

To assess the causal direction of information flow between brain regions, we applied Dynamic Causal Modelling (DCM), which maps experimental inputs to the observed fMRI output via hypothesised modulations to neuronal states that are characterised using a biophysically informed generative model (Friston, Harrison, and Penny 2003). Parameter estimates are expressed as rate constants (i.e. the rate of change of neuronal activity in one area that is associated with activity in another), and are fit using Bayesian parameter estimation.

DCM Implementation

Implementation of DCM requires definition of the endogenous connections within the network (A parameters), the modulatory influence of experimental factors (B parameters), and the influence of exogenous/driving inputs into the system (e.g. sensory inputs; C parameters) (Friston, Harrison, and Penny 2003). We implemented separate DCMs to investigate i) the modulatory influence of multitasking on the pre-practice data, and ii) the modulatory influence of practice on the pre- to post-practice data. To make inferences regarding the modulatory influence of multitasking, we defined our endogenous network as comprising reciprocal connectivity between all three of our ROIs, on the basis of anatomical and functional evidence for connections between all of them (Cavada and Goldman-Rakic 1989; Luppino et al. 1993; Haber 2016; Alexander, DeLong, and Strick 1986; Wise et al. 1997).
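In DCM for fMRI, the neuronal states evolve according to the bilinear equation dz/dt = (A + Σ_j u_j B^(j)) z + Cu (Friston, Harrison, and Penny 2003). The following minimal sketch integrates this equation for a hypothetical three-node network; the coupling values are invented for illustration and are not estimates from our data:

```python
import numpy as np

# Illustrative 3-node bilinear DCM: nodes 0 = IPS, 1 = pre-SMA, 2 = putamen.
# All parameter values below are made up for demonstration purposes only.
A = np.array([[-0.5,  0.1,  0.1],
              [ 0.1, -0.5,  0.1],
              [ 0.1,  0.1, -0.5]])  # endogenous coupling (Hz)
B = np.zeros((3, 3))
B[0, 2] = 0.2                       # multitasking boosts putamen -> IPS
C = np.zeros(3)
C[2] = 1.0                          # driving input enters the putamen

def simulate(A, B, C, u, dt=0.01, steps=500):
    """Euler-integrate dz/dt = (A + u*B) z + C*u for a constant input u."""
    z = np.zeros(3)
    trace = []
    for _ in range(steps):
        z = z + dt * ((A + u * B) @ z + C * u)
        trace.append(z.copy())
    return np.array(trace)

single = simulate(A, B, C, u=0.0)  # no input: activity stays at baseline
multi = simulate(A, B, C, u=1.0)   # input plus modulation drives the network
```

With u = 1, activity rises across all nodes, led by the directly driven putamen; the B term selectively strengthens putamen-to-IPS coupling, which is the kind of condition-specific modulation the B parameters capture.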
To address our theoretically motivated question regarding the locus of the modulatory influence of multitasking, we first implemented all 63 possible combinations of the modulatory influence of multitasking (i.e. allowing each combination of connections to be modulated by the multitasking factor) and then grouped the model space into 3 families: those that allowed any combination of corticocortical modulations, but not striatal-cortical (corticocortical family, 3 models in total, M1-3), those that allowed the reverse pattern (striatal-cortical family, 15 models in total, M4-18), and those that allowed modulations to both types of connections (both family, 45 models in total, M19-63). As both the IPS and the putamen receive sensory inputs (Vossel, Geng, and Fink 2014; Anderson et al. 2010; Grefkes and Fink 2005; Alloway et al. 2017; Saint-Cyr, Ungerleider, and Desimone 1990; Guo et al. 2018; Reig and Silberberg 2014), we implemented the full set of models [M1-63] with inputs to either the IPS, or to the putamen. Thus we fit a total of 126 (2 x 63) models to the pre-practice data. To make inferences regarding the modulatory influence of practice on both single and multi-task conditions, we carried out the following for both the single-task and the multitask data (see below for details on data extraction): based on the endogenous connectivity and locus of driving input identified by the preceding analysis, we then fit the 15 possible modulatory influences of the practice factor (i.e. pre- to post-practice).

Extraction of fMRI Signals for DCM Analysis

The brain regions of interest (ROIs) were selected by the following steps: first, we identified regions that showed increased activity for both single tasks at the pre-training session; second, we sought which of these showed increased activity for multitask trials relative to single-task trials.
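The partition of the 63 modulatory patterns into the three families described above can be verified by enumeration (the connection labels are ours; with two corticocortical and four striatal-cortical directed connections, the family sizes of 3, 15 and 45 follow directly):

```python
from itertools import chain, combinations

# Directed connections among the three ROIs (labelling is ours).
cortical = {("IPS", "preSMA"), ("preSMA", "IPS")}
striatal = {("Put", "IPS"), ("IPS", "Put"), ("Put", "preSMA"), ("preSMA", "Put")}
conns = sorted(cortical | striatal)

def nonempty_subsets(items):
    """All non-empty subsets of a collection."""
    items = list(items)
    return chain.from_iterable(
        combinations(items, r) for r in range(1, len(items) + 1)
    )

models = [set(m) for m in nonempty_subsets(conns)]   # 2**6 - 1 = 63 models
cc_only = [m for m in models if m <= cortical]       # corticocortical family
sc_only = [m for m in models if m <= striatal]       # striatal-cortical family
both = [m for m in models if not (m <= cortical or m <= striatal)]

print(len(models), len(cc_only), len(sc_only), len(both))  # 63 3 15 45
```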
Lastly, we asked which of these regions also showed a practice (pre vs post) by group interaction (Garner and Dux 2015). The left and right intraparietal sulcus (IPS), left and right putamen, and the supplementary motor area (SMA) were implicated by this interaction. In the interest of reducing the complexity of the model space, and in the absence of evidence to indicate lateralized differences, we included only regions in the left hemisphere and the SMA in the current analysis. For each region, we restricted the initial search radius by anatomically defined ROI masks, and extracted the first eigenvariate of all voxels within a sphere of 4 mm radius centered over the participant-specific peak for the initial contrast (increased activity for both single tasks, as in the previous study), adjusted for the effects of interest (p < .05, uncorrected). Note: to analyse the modulatory influence of practice on single-task data, we regressed out activity attributable to the multitask condition at this step. To analyse the modulatory influence of practice on multitask data, we comparably regressed out the single-task data at this step. We created the anatomical masks in standard MNI space using FSL. For the IPS we used the Juelich Histological Atlas, and for the putamen and the SMA we used the Harvard-Oxford cortical and subcortical atlases. Each time-series was concatenated over the 6 pre-training runs for the first analysis concerning the multitasking network, and over the 6 runs from both the pre- and post-practice sessions (total runs = 12) for the analysis of the influence of practice, and adjusted for confounds (movement and session regressors) in both cases.

Bayesian Model Comparison and Inference over Parameters

As our hypotheses concerned the modulatory influence of our experimental factors on model characteristics, rather than any specific model per se, we implemented random-effects Bayesian model comparison between model families (Penny et al.
2010), with both family inference and Bayesian model averaging (BMA), as implemented in SPM12. We opted for a random-effects approach that uses a hierarchical Bayesian model to estimate the parameters of a Dirichlet distribution over all models, to protect against the distortive influence of outliers (Stephan et al. 2009). For each family comparison we report i) the expectation of the posterior probability (i.e. the expected likelihood of obtaining the model family k, given the data, p(f_k|Y)), and ii) the exceedance probability of family k being more likely than the alternative family j, given the data, p(f_k > f_j|Y) (see Penny et al. 2010). Upon establishment of the winning family, we sought to identify, post hoc, which specific parameters were likely, given the data, and when relevant, whether there was evidence for group differences. To achieve this, we calculated the posterior probability (Pp) that the posterior density over a given parameter had deviated from zero (or, in the case of group differences, that the difference between posterior estimates had deviated from zero), using the SPM spm_Ncdf.m function. To correct for multiple comparisons, we reported Pp's as having deviated from zero when the likelihood exceeded the Sidak-corrected threshold (1 - α)^(1/m), where m is the number of null hypotheses being tested. To make inference regarding the
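The Sidak-corrected threshold used above can be computed directly; the value of m in the example is illustrative, as the text does not fix it here:

```python
def sidak_threshold(alpha, m):
    """Posterior-probability cutoff giving familywise error rate alpha over
    m tests: each Pp must exceed (1 - alpha)**(1/m)."""
    return (1 - alpha) ** (1 / m)

# e.g. alpha = .05 over m = 6 connection parameters (m chosen for illustration)
print(round(sidak_threshold(0.05, 6), 4))  # 0.9915
```

With a single test (m = 1) the cutoff reduces to the familiar 1 - α = 0.95, and it tightens towards 1 as m grows.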
Current global perspectives on silicosis—Convergence of old and newly emergent hazards Abstract Silicosis not a disease of the past. It is an irreversible, fibrotic lung disease specifically caused by exposure to respirable crystalline silica (RCS) dust. Over 20,000 incident cases of silicosis were identified in 2017 and millions of workers continue to be exposed to RCS. Identified case numbers are however a substantial underestimation due to deficiencies in reporting systems and occupational respiratory health surveillance programmes in many countries. Insecure workers, immigrants and workers in small businesses are at particular risk of more intense RCS exposure. Much of the focus of research and prevention activities has been on the mining sector. Hazardous RCS exposure however occurs in a wide range of occupational setting which receive less attention, in particular the construction industry. Recent outbreaks of silicosis associated with the fabrication of domestic kitchen benchtops from high‐silica content artificial stone have been particularly notable because of the young age of affected workers, short duration of RCS exposure and often rapid disease progression. Developments in nanotechnology and hydraulic fracking provide further examples of how rapid changes in technology and industrial processes require governments to maintain constant vigilance to identify and control potential sources of RCS exposure. Despite countries around the world dealing with similar issues related to RCS exposure, there is an absence of sustained global public health response including lack of consensus of an occupational exposure limit that would provide protection to workers. Although there are complex challenges, global elimination of silicosis must remain the goal. 
Developments in nanotechnology and hydraulic fracking provide further examples of how rapid changes in technology and industrial processes require governments to maintain constant vigilance to identify and control potential sources of RCS exposure. Despite countries around the world dealing with similar issues related to RCS exposure, there is an absence of sustained global public health response including lack of consensus of an occupational exposure limit that would provide protection to workers. Although there are complex challenges, global elimination of silicosis must remain the goal. INTRODUCTION Silicosis is an irreversible, fibrotic lung disease explicitly caused by the inhalation of respirable crystalline silicon dioxide (RCS). It continues to be among the most lethal of occupational diseases and is a major public health challenge internationally. Although the cause of silicosis is undisputed, millions of workers worldwide continue to be exposed to hazardous levels of RCS. This review provides global and regional perspectives of the epidemiology of silicosis, sources of exposure and barriers that have hampered global elimination. Research and prevention strategies have historically focused on the mining sector. In recent years, there have been significant outbreaks of silicosis related to the use of high-silica content artificial (engineered) stone material to produce domestic benchtops. [1][2][3] These outbreaks illustrate the potential for silicosis to rapidly emerge in new occupational settings. This narrative review also provides additional insights from countries that have experienced notable artificial stone silicosis outbreaks including Spain, Australia and Israel. GLOBAL PICTURE At the outset, the public health impact and complexities in preventing silicosis at a local and global level make it the most glocal of occupational diseases. On one hand, the prevalence of silicosis all over the world makes it a global disease. 
In 2017, the Global Burden of Disease (GBD) study identified 23,695 incident cases of silicosis (age-standardized incidence rate [ASIR] = 0.30 per 100,000), which represents 39% of the 60,055 incident cases of pneumoconiosis ( Figure 1). 4 Silicosis' very name, coined in 1871, only reached medical consensus through the International Labour Organization (ILO) conference in Johannesburg (South Africa) in 1930, 5 which led to an ILO convention in 1934. In 1958, an ILO agreement defined the chest radiograph features of the disease, and in 1995 an ILO/World Health Organization (WHO) Global Programme for the Elimination of Silicosis was established and subsequently reaffirmed. 6 The implementation of global silicosis policies has however generally been disappointing and more limited than what had been envisaged. 7 On the other hand, international law has shaped silicosis as a local disease. Its circular definition in the 1934 ILO convention defined it as a disease occurring in 'industries or processes recognised by national law or regulations as involving exposure to the risk of silicosis'. By its explicit institutional legal definition, silicosis epitomizes the medicolegal character of 'occupational disease', which can vary across countries. Second, by crystalline silica being the main mineral component in the earth crust, silicosis affects all sectors-not only the traditional industrial ones such as construction and building, but also ancient craftsmanship (stonecutting), modern technologies (dental prostheses), farming or fashionable productions (kitchen benchtop fabricated from artificial stone) and clothes (stone-washed jeans). 4 The global ubiquity of silica has however never been translated into a universal public health issue. Only in specific contexts have local physicians been aware of the hazard linked to RCS exposure. 
The mining industry, on which medical research, prevention and compensation through social welfare have historically focused, provides an obvious exception. However, even in this sector, the visibility of the disease has never been complete or consistent. For instance, reluctant to acknowledge silicosis in the mining sector, the United Kingdom focused on 'Coal Workers' Pneumoconiosis' after World War 2, and the United States built legislation around 'Black Lung' in 1969. Currently, the situation is much worse in coal-producing regions where public debate on pneumoconiosis is actively suppressed (e.g. China, Russia). In the 20th century, trade unions were a primary force advocating for recognition and prevention of silicosis, while more recently new actors have emerged, including non-governmental organizations. The use of new information and communication technologies, which drive 'popular epidemiology', enables reporting of individual cases, particularly in China. This rapidly changing technology will continue to broaden the role that individual whistle-blowers (including physicians, radiologists, unionists) play in advocating for the prevention of silicosis, in very diverse national arenas: political (e.g. parliamentary commissions), administrative (labour ministries, social insurance bureaucracies), judicial and the media. More than the national divide between countries, workers' status in the job market has always been key to determining their exposures. Skilled workers (including experienced miners), employed on a stable basis, are subjected to moderate but lifelong silica dust exposures. These workers tend to benefit, even if imperfectly, from the implementation of national prevention and compensation schemes. However, workers with insecure jobs, often immigrants, working in small businesses in the 'informal sector', are often subjected to more intense exposures, as a result of limited regulatory protections.
The carcinogenic effect of crystalline silica was recognized by the International Agency for Research on Cancer (IARC) in 1987, and re-evaluated and confirmed in 1997 and 2012. 8 Recently, there has been further interest in other health effects of silica exposure. The 'sarcoid-like' and autoimmune pathologies affecting 9/11 World Trade Center rescuers 9-14 and artificial stone workers 3,15 in particular have unexpectedly led to this overdue renewed focus. Furthermore, a recent meta-analysis of eight studies of silicosis and tuberculosis (TB) yielded a pooled relative risk of 4.01 (95% CI: 2.88, 5.58), providing robust evidence for a strongly elevated risk of TB with radiological silicosis, with a low disease severity threshold. 16 This is the first systematic review of the epidemiological evidence for an association identified at least a century ago. An occupational exposure limit (OEL) is the maximum airborne concentration of a toxic substance to which a worker can be exposed over a period of time (typically 8 h) without suffering harmful consequences. The limit is determined from the available experimental studies, and toxicological and epidemiological data. Governments can adopt enforceable exposure limits as a tool to aid the protection of workers. Currently, there is no international agreement on a protective and enforceable RCS OEL. OELs for RCS vary significantly between countries, ranging from 0.025 mg/m³ to as high as 0.35 mg/m³ over an 8-h work shift. [17][18][19] Most low- and middle-income countries have no legislated exposure limit. A pooled analysis of 10 large silica-exposed cohorts noted that for a worker exposed from age 20 to 65 at an RCS level of 0.1 mg/m³, the excess lifetime risk (through age 75) of lung cancer was 1.1%-1.7%, above the background risk of 3%-6%.
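The 8-h OELs discussed above are full-shift time-weighted averages (TWAs). A minimal sketch of how a shift TWA is computed from consecutive personal samples; the concentrations, task durations and comparison limits below are hypothetical illustrations, not measured data:

```python
# Hedged sketch: computing an 8-h time-weighted average (TWA) RCS exposure
# from consecutive personal samples. All sample values are hypothetical.

def twa_8h(samples):
    """samples: list of (concentration in mg/m3, duration in hours).
    Returns the 8-h TWA, i.e. sum(C_i * t_i) / 8, the usual full-shift convention."""
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical shift: 3 h cutting at 0.12 mg/m3, 3 h grinding at 0.06, 2 h unexposed.
shift = [(0.12, 3), (0.06, 3), (0.0, 2)]
exposure = twa_8h(shift)            # (0.36 + 0.18 + 0.0) / 8 = 0.0675 mg/m3
print(round(exposure, 4))           # 0.0675
print(exposure > 0.025)             # True: exceeds the ACGIH-recommended 0.025 mg/m3
print(exposure > 0.1)               # False: below a 0.1 mg/m3 limit
```

The example shows why the spread of national OELs matters in practice: the same hypothetical shift exceeds the 0.025 mg/m³ recommendation fourfold while remaining compliant with a 0.1 mg/m³ limit.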
20 A quantitative risk assessment of RCS exposure at a level of 0.05 mg/m³ over a 45-year working period indicated that 19 of every 1000 people are at risk of lung cancer mortality, 54 of lung disease other than cancer and 75 of radiographic silicosis. 21 Since 2009, the American Conference of Governmental Industrial Hygienists (ACGIH) has recommended an RCS OEL of 0.025 mg/m³. 19 There are sampling and analytical challenges at that low level; however, it is a protective level that few countries have yet adopted.

FIGURE 1 Age-standardized incidence rates (ASIR) for silicosis in 2017 reported by the Global Burden of Disease Study. Source: Department of Occupational and Environmental Health, Tongji Medical College, Huazhong University of Science and Technology (reproduced with permission).

ASIA

The epidemiology of silicosis in Asia is described by the GBD study. 4 In 2017, the regional numbers of incident silicosis cases were East Asia 15,980, Southeast Asia 656, Central Asia 18 and South Asia 2823. Globally, the region of East Asia had the highest overall ASIR, of 0.78 per 100,000. The regional incident case numbers in Asia are all higher than those reported in 1990. 4 At a national level, the highest increase in average annual percentage change in ASIR was noted in Singapore, and globally the highest numbers of incident cases were in China (9066) and India (1464). 4 In Asia, there is a wide range of industries associated with exposure to RCS, including quarrying, mining, mineral processing, foundry work, brick and tile making, refractory processes and construction (including work with stone, concrete, brick and some insulation boards). 17 A recent review of chest x-rays of 529 workers in sandstone mines of Rajasthan, India, alarmingly noted that 52% had features of silicosis, including 7.5% with progressive massive fibrosis. 22 Twelve percent of those with silicosis also had TB.
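The regional comparisons above rest on age-standardized incidence rates, which remove differences in population age structure by direct standardization. A minimal sketch with hypothetical age-specific rates and weights; the GBD study uses its own standard population, so none of these numbers are its actual inputs:

```python
# Hedged sketch of direct age standardization, the method behind ASIR figures.
# All rates and weights here are hypothetical illustrations.

def age_standardized_rate(age_specific_rates, std_weights):
    """Weighted sum of age-specific rates (per 100,000), using
    standard-population weights that sum to 1."""
    assert abs(sum(std_weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(r * w for r, w in zip(age_specific_rates, std_weights))

# Hypothetical crude rates per 100,000 in four age bands (e.g. 0-19, 20-39,
# 40-59, 60+), with a hypothetical standard population.
rates = [0.0, 0.2, 1.0, 2.0]
weights = [0.35, 0.30, 0.20, 0.15]
print(round(age_standardized_rate(rates, weights), 3))  # 0.56
```

Because silicosis incidence rises with age and cumulative exposure, standardization is what makes the ASIR figures cited for different regions and years comparable.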
Although prevention efforts have been undertaken for many decades, silicosis is still a public health issue in Asia. Rapid economic and industrial development and the large demand for coal energy and metal materials have resulted in more people being exposed to RCS. From recent reports, more than 23 million workers in China and more than 10 million in India are exposed to RCS. Importantly, more cases of silicosis are emerging in new industries or new technological fields, including jewellery and glass production, and the use of nanomaterials. 23,24 Several strategies have been included in the 'Healthy China 2030 Action Plan' to address occupational health issues. The pneumoconiosis prevention and control plan requires that at least 95% of dust-exposed workers undergo health surveillance. 25 Due to rapid technological changes, more focus is required on identifying and responding to new RCS exposure industries. Additional resources are also required for workers' health education, provision of respiratory protective equipment and increasing awareness of occupational health issues.

AFRICA

In the GBD estimates for Africa, silicosis comprised 32% of all pneumoconiosis, and new cases increased by 124% from 1990 to 2017. 4 While the global ASIR decreased by 0.4% per year, Western sub-Saharan Africa (SSA) and North Africa showed an increase. The top five countries contributing new cases were Egypt, South Africa, Ethiopia, the Democratic Republic of Congo (DRC) and Algeria. It is estimated that 5% of global pneumoconiosis deaths are from Africa, 20% of which were due to silicosis. 4 The global pneumoconiosis mortality rate was 0.7 per 100,000, with the highest rate (1.3/100,000 persons) found in Southern SSA, which also had the third highest (28.9/100,000 persons) disability-adjusted life year (DALY) rate.
For South Africa, the epicentre of gold mining in Southern SSA, the silicosis prevalence of 6% remains static, 26,27 although the incidence is declining, probably due to the contraction of the mining industry (Figure 2). 28 Earlier studies reported a similar prevalence in older in-service miners (20%) and ex-miners (25%), 29 although higher prevalence figures for silicosis (42.5%) and silico-TB (25.7%) have been reported in Lesotho ex-miners. 30 In Egypt, reported silicosis prevalence was between 18.5% and 45.8%, 31 while Zambian copper belt miners had lower levels of silicosis (5%-8.8%). 32 Copper and cobalt miners in the DRC are reported to have a 1.1% cumulative silicosis incidence over 25 years. 33 Among Zimbabwean artisanal and small-scale gold miners, 11.2% have silicosis and 4.0% TB. 34 A recent meta-analysis demonstrated a three-fold increased risk of TB among silicotics in this region. 16 The primary source of exposure to silica is formal gold mining, although platinum, copper, cobalt and phosphate mining also contribute. 26,31,33 Artisanal gold mining in the informal sector is another source. 34 Non-mining sectors include construction, manufacturing and agriculture, including well-digging. 35 High exposures have also been reported in ceramics, foundries, refractories (and brick making) and construction work. 28 South Africa is the only African country with a national programme for silicosis elimination. Barriers to silicosis elimination in formal mining include under-reporting of routinely collected data and underdiagnosis of silicosis, especially in ex-miners. While silica levels are reported to be declining in South Africa, the accuracy of these reports remains unclear. 28 In non-mining sectors, systematic baseline information on silica exposure and silicosis is deficient. 36 Although statutory exposure limits (0.05-0.10 mg/m³) exist, new cases persist due to poor control measures and inadequate enforcement.
28 Future strategies should focus on improved reporting and links with other data sources such as compensation claims. For non-mining sectors, better baseline exposure and silicosis incidence data are required. Linking of sentinel cases, especially of accelerated silicosis, to workplaces, and surveys of long-service workers in high-risk jobs, should be pursued. 28 Supportive actions for sustainable dust reduction should be accompanied by improved monitoring using improved RCS exposure standards. For artisanal and small-scale miners, targeted programmes that include education, indigenous solutions and medical screening services are vital. 34,37

EUROPE

The GBD study presents an encouraging decreasing trend in silicosis incidence from 1990 to 2017 (ASIR: 0.33 vs. 0.20 in Central Europe, 0.09 vs. 0.07 in Eastern Europe, 0.12 vs. 0.04 in Western Europe). 4 Despite this decline, the heterogeneity of incidence patterns between countries and the existence of new sources of exposure to crystalline silica have raised concerns and led national health agencies to update medical and epidemiological knowledge about the RCS risk over the last decade. 38-40 The global history of silicosis illustrates how European welfare systems have contributed to shaping a restricted image of the RCS health hazards. Approaches to silicosis in Western Europe have mainly focused on mining, making silicosis even more invisible elsewhere, despite well-known hazards in other activities (e.g. foundries, denture production). 7 The decline of coal mining and other extractive activities, albeit with notable exceptions, may suggest the disappearance of RCS hazards in Europe. There is a high probability that silicosis is being overlooked, given the absence of a comprehensive and sensitive health surveillance system to prevent and detect silicosis (and other possible diseases caused by RCS) in exposed sectors, particularly in construction.
41,42 Since the 2000s, the popularity of so-called 'artificial' (engineered) stone has resulted in many workers developing silicosis in several European countries. 1,[43][44][45][46] Workers diagnosed have tended to be young and healthy males, and the characteristics of artificial stone silicosis indicate that it often rapidly evolves into progressive massive fibrosis. 47 The emergence of artificial stone silicosis has highlighted challenges for occupational preventive measures, epidemiological surveillance and welfare systems, in which the under-recognition of the occupational origin of chronic diseases is a long-lasting public issue. 48 Currently, there is advocacy for a complete prohibition of high-silica content materials. 49 Current guidelines from the French National Authority for Health in relation to occupational RCS dust exposure may under-emphasize the hazards of RCS. 50 First, they recommend chest radiography (despite its lower sensitivity to detect early lung lesions) rather than chest HRCT for surveillance of high-risk groups. 50,51 Furthermore, they de-emphasize 'other' silica diseases by suggesting that these are too rare to constitute a public health priority. 52 In 2017, a Directive of the European Parliament classified 'work involving exposure to RCS dust' as carcinogenic, yet it missed an important opportunity to revise exposure standards, by defining a permissible exposure value for RCS dust of 0.1 mg/m³ (as an average over 8 h) in occupational settings. 53 This level is not considered protective against silicosis according to several studies. 38,54 Interestingly, this directive was issued 30 years after the IARC had established the carcinogenicity of crystalline silica for the first time. 8 The current context is characterized by pervasive industrial interests that advocate less constraining exposure standards and hamper compensation of diseased workers.
55,56

SOUTH AMERICA

In South America, the mineral extraction, manufacturing and construction industries are important as they generate taxes and employment, and provide raw materials. However, when these activities are conducted without proper technology and control measures, they lead to degradation of the physical environment and increase the risk to workers' health as a result of elevated levels of RCS. In Brazil, about 500,000 workers are employed in mining, 2,300,000 in manufacturing and 3,800,000 in construction. According to Fundacentro (Jorge Duprat Figueiredo Foundation for Occupational Safety and Medicine), there are numerous occupational activities associated with an increased risk of silicosis. These include founding of iron, steel or other metals using sand moulds; extractive industries (mining, quarrying and processing of mineral-bearing stones); rock drilling in construction (tunnels, dams and roads); and sandblasting (shipping and metallurgical industries). 57 Brazil is also one of the world's largest producers of gemstones. A 2017 study of workers in the Minas Gerais region identified silicosis in 48.3% of semi-precious stone craftsmen and found RCS exposure levels up to 29 times greater than the Brazilian 8-h OEL of 0.1 mg/m³. 58 A study of silicosis-associated mortality in Brazil between 1980 and 2017 indicated an increasing slope until 2006, with a decline thereafter. Mortality trends varied according to age group, with a sharper decline observed in individuals aged 20-49 years from 2011 onwards, while the decline in individuals aged 50-69 years occurred from 2005 onwards. However, individuals aged 70 years or older displayed increasing mortality rates throughout the entire period. The decrease in deaths mostly occurred in municipalities that regulated economic activities. 57 Silicosis is the most common diffuse interstitial lung disease associated with occupational dust inhalation in Brazil and also the most important fibrogenic pneumoconiosis.
Silicosis has a high prevalence especially in those over 60 years, followed by the age group between 40 and 59 years. It most commonly affects men (95.4%). 59 In this study, the risk factors for silicosis included inadequate ventilation in underground galleries combined with dry drilling and duration of RCS exposure, while silicosis was inversely associated with education. In Latin America, especially in countries with significant mineral extraction, silicosis has become a serious public health problem. Silicosis represents 30.3% of newly diagnosed cases of all pneumoconiosis. 4 In Argentina, according to the database of the Occupational Risk Superintendence (April 2015-March 2017), of 1502 cases of respiratory diseases, 34 workers were registered with silicosis, representing 2.3% of the total. 60 In 2005, the Chilean Institute of Public Health found that the main industries with exposure to RCS were mining and construction, followed by manufacturing. It is estimated that 5.4% of formal and informal workers in Chile are likely to be exposed to RCS. 61 Working in enclosed and poorly ventilated spaces is particularly hazardous. The use of crushers and other processes that produce high dust levels increases the risk. Mining and other RCS exposure activities often take place in cities at altitudes above 3000 m. In Peru, Bolivia, Chile and other countries in the region, it is important to consider altitude and its adverse clinical impact on exposed workers. In 2007, miners in Peru working at altitudes above 2500 m represented 84.5% of the total miner population. In 2008, 26% (n = 840) of workers studied had silicosis. 62 In Brazil, the development of the National Program to Eliminate Silicosis began in 2002, but significant numbers of new cases continue to be reported through surveillance systems. Formal workers are covered by the Brazilian Social Security system, unlike informal workers.
As a result of a legal decision by the Ministry of Labour and the Ministry of Health, workers exposed to particulate dust must undergo annual chest radiograph examination and spirometry every 2 years. The National Program for the Elimination of Silicosis aims to eliminate silicosis in Brazil by 2030.

NORTH AMERICA

Silicosis remains an important occupational lung disease in North America. There are data on exposures and cases in the United States and Canada, but little surveillance has been reported from Mexico. There are an estimated 2.3 million workers in the United States exposed to RCS, including 1.85 million construction workers and 320,000 general industry and maritime workers. 63 The reporting system for occupational injuries and illnesses in the United States fails to capture many cases, leading to a poor understanding of silicosis incidence and prevalence. 64 The only existing surveillance system for silicosis is based on data from two states, which demonstrate that manufacturing and construction are associated with the greatest number of silicosis cases. Estimates extrapolating data from these states using capture-recapture analysis determined that there were likely 3600-7300 cases of silicosis per year between 1987 and 1996 in the United States. 65 Using a broad case definition of silicosis, health insurance claims data derived from a population of adults aged 65 years or greater revealed a 16-year prevalence of silicosis of 20.1-39.5 per 100,000 beneficiaries. 66 Mortality data revealed 2163 decedents with silicosis listed as the underlying or contributing cause of death between 1999 and 2014, 66 although it is likely that these data represent under-reporting. In Canada, there are no national data on the incidence or prevalence of silicosis. In the province of Alberta, 67 where silicosis is a notifiable disease, health insurance data revealed 861 cases with at least one reported diagnosis of 'silicosis' during the 10-year period from 2000.
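The capture-recapture extrapolation mentioned above can be illustrated with the two-source Chapman variant of the Lincoln-Petersen estimator, which infers total case numbers from the overlap between independent surveillance lists. The list sizes below are hypothetical, not the actual US surveillance counts:

```python
# Hedged sketch of two-source capture-recapture estimation (Chapman's
# nearly unbiased variant of Lincoln-Petersen). Counts are hypothetical.

def chapman_estimate(n1, n2, m):
    """n1, n2: cases found by each source; m: cases found by both.
    Returns the estimated total number of cases, N_hat = (n1+1)(n2+1)/(m+1) - 1."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical: hospital records identify 300 cases, death certificates 120,
# with 40 cases appearing in both lists.
print(round(chapman_estimate(300, 120, 40)))  # 887
```

The key assumption is that the two sources ascertain cases independently; when surveillance lists share referral pathways, as is common for occupational disease, the overlap is inflated and the true total is underestimated.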
These results were based on raw data and not a secondary review of primary imaging and clinical information. Data from 2000 through 2009 showed that only 29 workers' compensation claims were accepted for silicosis in Alberta. Data from Quebec's compensation system revealed 351 compensated cases of silicosis between 1988 and 1998. 68 Of note, workers who participated in regular surveillance had milder disease at the time of compensation. The sources of exposure in North America do not vary greatly between countries. Excessive exposure to RCS has been well documented in the US construction industry, with median exposures to RCS ranging from 0.75 to 3.2 mg/m³ among painters, labourers, bricklayers and operating engineers. The probability of overexposure to RCS in the industry was estimated to be 64.5%-100%. 69 Significant exposures are also found in general industry and maritime occupations. 70 In Alberta, the industries with the highest potential for overexposure were sand and mineral processing; commercial building construction; aggregate mining and crushing; and abrasive blasting and demolition. 67 Hydraulic fracturing is prevalent in both the United States and Canada, but not in Mexico, despite significant gas reserves in the Burgos Basin. 71 In an exposure monitoring study with full-shift personal breathing zone samples collected from 11 hydraulic fracture sites in five US states, more than 50% of samples exceeded the permissible exposure limit (PEL), with some RCS concentrations 10-20 times higher than the PEL, mainly in jobs in close proximity to sand-moving machinery. 72 In the United States, mining has long been associated with excessive exposures to RCS. More recently, there has been an increase in severe and rapidly progressive pneumoconiosis in Central Appalachian coal miners. 73 Rapidly progressive pneumoconiosis has been linked to silica exposure based on lung pathology showing features of accelerated silicosis along with classic silicotic nodules.
Mining coal from thinner seams is associated with more rock cutting above and below the coal seam, likely leading to increased RCS exposure and more virulent disease. [74][75][76] The fabrication of artificial stone has been a source of many new cases of silicosis throughout the world, but only a handful of cases have been reported thus far in North America. The populations involved tend to be vulnerable and less likely to be engaged in compensation claims and surveillance. 77 Throughout North America, as in the rest of the world, the barriers to the elimination of silicosis have been the lack of enforcement of dust exposure limits coupled with inadequate medical surveillance and compensation programmes. The United States recently began a programme aimed at inspecting target industries to ensure compliance with regulatory standards. 78 This is intended to improve enforcement of updated silica standards, including a PEL of 0.05 mg/m³ time-weighted average, which went into effect in 2016-2018. Canada has had an OEL for RCS of 0.025 mg/m³ since 2009; previously, the OEL was 0.1 mg/m³ for quartz and 0.05 mg/m³ for cristobalite.

AUSTRALASIA

The GBD study indicated the silicosis ASIR to be 0.06 per 100,000 in Australia in 2017, and 0.07 in New Zealand. 4 New Zealand was, however, noted to have the second highest average annual percentage increase in ASIR between 1990 and 2017. 4 Although the number of workers diagnosed with silicosis had been relatively low for several decades in the Australasian region, since 2017 there has been an alarming increase in cases occurring in the stone benchtop industry. 39,79 Between the 1970s and mid-2010s, the industries most frequently associated with silicosis were foundries, brickworks/furnace construction, mining/quarries and excavation/tunnelling. 80,81 A 2003 review of 1467 compensated silicosis cases in the state of New South Wales (NSW) indicated a significant decline in incidence.
81 While 63% of the cases were compensated before the end of the 1960s, only 9% were compensated between 1979 and 2000. 81 Retrospective analysis of national mortality data between 1979 and 2002 noted that crude mortality rates for silicosis showed a sustained decline, from 1.8 per million in 1982-1984 to 0.5 per million in 1997-1999. 80 An accurate understanding of the epidemiology of silicosis in Australia has, however, been limited, due to reliance on workers' compensation and mortality statistics. 82 Despite the small number of identified cases of silicosis, a cross-sectional survey of the Australian working population in 2013 noted a significant prevalence of RCS exposure, with 6.6% deemed to be exposed, including 3.7% at a high level. 83 Miners and construction workers were most likely to be highly exposed when performing tasks with concrete and cement or working near crushers. 83 A study of construction industry workers in New Zealand noted that 56% of samples exceeded 0.025 mg/m³. 84 In recent years, there has been a major increase in the number of workers diagnosed with silicosis. 3,[85][86][87] In NSW, the annual number of certified silicosis cases increased from nine in 2015-2016 to 107 in 2019-2020. 88 This surge has primarily been related to the stone benchtop industry and the handling of artificial stone. 39 Artificial stone was introduced to Australia in the early 2000s and has rapidly grown in popularity, to the point that it now accounts for almost half of the Australian benchtop market. 89 The stone benchtop industry in Australia is characterized by small and micro businesses, with 75% operating with five or fewer employees. 39 Processing of artificial stone without water dust suppression (dry cutting) has been noted to have been a widespread practice. 3,87 Following recognition of the initial cases of artificial stone silicosis, Australian governments have offered enhanced screening for workers.
As of September 2021, 236 (22.4%) of 1053 stonemasons assessed in Queensland were diagnosed with silicosis, including 32 with progressive massive fibrosis. 90 Similar results have been noted in Victoria, with 108 workers confirmed to have silicosis during the first year of screening. 91 All Victorian workers diagnosed were male, with a mean age of 42 years, and 62% had been born in a country other than Australia. 3 Twenty-six percent had worked in the stone benchtop industry for less than 10 years, consistent with the accelerated form of silicosis. 3 These enhanced assessments have demonstrated poor chest radiograph and spirometry sensitivity for screening silica-exposed stone benchtop workers. In Victoria, initial results indicated that 23 of 65 (35%) workers with simple silicosis had 'normal' chest radiographs (ILO category 0) but had consistent chest computed tomography features. 3 Interestingly, mean forced expiratory volume in 1 s and forced vital capacity percentage predicted values were noted to be over 80% for both simple and complicated silicosis. 3 Similar to international experience, follow-up of patients with artificial stone silicosis has suggested rapid progression of disease, and some have required lung transplantation. 85,87,92 These patients with progressive silicosis have highlighted the inadequacies of the treatments available for this disease. 93 In response to the emergence of silicosis, an investigation by an Australian Government Taskforce noted that there have been inadequate dust control measures, ineffective health monitoring and insufficient enforcement of existing occupational health and safety laws in the stone benchtop industry. 39 The Taskforce recommended restriction of artificial stone fabrication to businesses licensed by government.
The Taskforce also recommended a total ban on the importation of artificial stone products by July 2024 should there be no measurable improvement in regulatory compliance or if preventive measures are deemed inadequate to protect workers. 39

ISRAEL

Since the early 2010s, Israel has been one of the main countries to have experienced a dramatic outbreak of silicosis associated with the use of artificial stone material. The experience in Israel provides an example of a failure to identify and control a new source of RCS exposure, and of how that can rapidly lead to an outbreak of silicosis. In Israel, kitchen and bathroom benchtops are mainly manufactured from artificial stone, with approximately 3500 workers currently involved in cutting and processing activities. In 2012, Kramer et al. first reported 25 workers with artificial stone-associated silicosis. 2 These patients had a shared history of exposure to the same commercial brand of decorative artificial stone and performed a similar work task of dry cutting the stone in the production of domestic benchtops. This 2012 report was a major warning due to the worldwide use of this material; however, to what extent this warning has been heeded is unclear. 2 The Israeli Institute for Occupational Safety and Hygiene requires that the maximum exposure limit to silica dust is 0.1 mg/m³ for respirable dust (≤7 μm) and 0.3 mg/m³ for floating dust. Importantly, these standards refer to current exposure and ignore cumulative exposure. Investigation of induced sputum from 116 individuals exposed mainly to artificial stone dust from small workshops nationwide found that over one-third (36.8%) of exposed workers had no previous diagnosis of silicosis, and 63.2% of these had confirmed silicosis.
94 In 2015, the US National Institute for Occupational Safety and Health (NIOSH) issued a Hazard Alert for the stone benchtop manufacturing and installation industry, which highlighted the need for monitoring RCS exposure levels and the use of engineering controls such as water dust suppression, automated cutting tools and local exhaust ventilation. 95 In Israel, and elsewhere, in recent years, 96 computer numeric-controlled stone cutting machinery, some utilizing high-pressure water for cutting, has become more commonly used to reduce aerosolized dust particles. The effectiveness of this process in reducing dust levels, however, remains untested. Survival post lung transplantation for artificial stone silicosis in Israel has not been demonstrated to be reduced compared with similar patients undergoing transplantation for idiopathic pulmonary fibrosis. 97 Extremely high levels of silica content were noted in workers' explanted lungs. 97,98

CONCLUSION

Despite silicosis being one of the oldest described lung diseases, the occurrence of over 20,000 new cases per year indicates that the disease remains very much present. The high incidence of silicosis in regions of Asia, Africa and South America is particularly concerning, and the recent emergence of silicosis in the benchtop production industry has clearly demonstrated that even high-income countries are not immune from this preventable occupational disease. Progress towards the elimination of silicosis seems to have stagnated over the last 20 years. Ongoing failures by governments, industries and employers to tackle the health risks of silica dust with sufficient and sustained determination have resulted in millions of workers continuing to be exposed to dusty conditions. Undoubtedly, there have been, and continue to be, major obstacles to achieving the elimination of silicosis.
This is particularly the case in developing countries where there are other major health issues to be tackled, a situation that has been significantly exacerbated by the COVID-19 pandemic. 99 The abundance of silicon dioxide in the earth's crust and its presence in an extremely wide range of industrial settings make silicosis a global health issue that requires a collaborative global response. Unquestionably, silicosis can be eliminated through the prevention of occupational dust exposure. The low impetus to control occupational dust stems from the lack of immediate adverse health effects from exposure and the perceived difficulty of implementing preventive practices, especially in small businesses. At the most fundamental level, there needs to be increased awareness and surveillance of the risks associated with silica dust exposures and more effective methods of control. To contribute towards global silicosis elimination strategies, there is an urgent need for countries worldwide to adopt more protective RCS OELs, specifically of 0.025 mg/m³. However, any limit is only protective if it is enforced in all industrial sectors where RCS exposure may occur. Master's degree in Occupational and Environmental Health. His research interests have included the areas of work-related asthma, laryngeal dysfunction and artificial stone-associated silicosis. He was a member of the National Dust Diseases Taskforce which published recommendations for the prevention, early identification, control and management of occupational dust diseases in Australia. Catherine Cavalin is a permanent CNRS fellow researcher in sociology at IRISSO (Interdisciplinary Research Institute in the Social Sciences, Paris-Dauphine University, PSL). She works on the diversity of health statuses and social health inequalities, which includes gender, labour and exposure to toxicants at work, as well as interpersonal violence.
She particularly investigates the categories on which statistics are based, the nosological categories that frame medical knowledge, the borders between occupational and environmental health, and associated public health policies. She was one of the contributors to the report published in 2019 by the French health agency (ANSES) on silica hazards. Professor Dr. Weihong Chen is chief and professor at the Department of Occupational Health and Environmental Health in the School of Public Health, University of Science & Technology in Wuhan, China. Major research interests include epidemiological studies of the adverse health effects and preventive strategies for particulate matter, including ambient PM2.5, industrial crystalline silica and coal mine dust; the pathogenic mechanisms by which ambient pollutants cause cardiopulmonary diseases; and the evaluation of the protective effects of personal respiratory protection equipment. Dr. Robert A. Cohen is Professor of Medicine at Northwestern University and is Clinical Professor of Environmental and Occupational Health Sciences at the University of Illinois Chicago, School of Public Health. His research interests include respiratory disease in mineral dust-exposed workers. He has served as a consultant to the Respiratory Health Division of the US National Institute of Occupational Safety and Health and the Mine Safety and Health Administration. He has worked internationally in the area of medical surveillance for coal mine dust- and silica-exposed workers on projects in Ukraine, Colombia, Argentina and Australia. Dr. Elizabeth Fireman is head of the Occupational/Environmental Lung Diseases Research Laboratory at the Tel Aviv Medical Center, Israel, and head of the Israeli National Laboratory Service for Interstitial Lung Diseases. She is Associate Professor of Occupational Environmental Health, Sackler School of Public Health, Tel Aviv University, Israel.
She identified the first Israeli case of chronic beryllium disease (CBD) masked as sarcoidosis, followed by nationwide identification of all exposed workers with CBD misdiagnosed as having sarcoidosis. She is actively involved in research and laboratory workup of workers who sustained artificial stone-induced silicosis from 2010 to the present. Dr. Leonard H. T. Go is a research assistant professor at the University of Illinois Chicago (UIC) School of Public Health and a respiratory physician at Northwestern University Feinberg School of Medicine. He is the Project Director of the Black Lung Clinic programme at UIC, and his research interests centre on mineral dust-related lung diseases. Dr. Antonio León-Jiménez is head of the Pulmonology, Allergy and Thoracic Surgery Department at the Puerta del Mar University Hospital in Cádiz, Spain. He leads the artificial stone (AS) silicosis outpatient clinic and is a member of the research group of the Andalusian Comprehensive AS Silicosis Plan. His research has focused since 2016 on AS silicosis, and he is the principal investigator of the clinical trial 'Efficiency of Pirfenidone for the Reduction of Pulmonary Metabolic, Inflammatory and Fibrogenic Activity in Patients with Silicosis due to Artificial Stone and Progressive Massive Fibrosis'. Professor Alfredo Menéndez-Navarro is Professor of the History of Science at the University of Granada, Spain. His main research field is the history of occupational health, with a particular focus on the medical and social framing of occupational diseases in contemporary Spain. He investigates the emergence of medical and social concerns about asbestos-related diseases and silica hazards, as well as the processes of under-recognition of these risks. He is a member, and has served as Secretary, of the ICOH Scientific Committee on the History of Prevention of Occupational and Environmental Diseases. Dr.
Marcos Ribeiro is an Associate Professor in the Pulmonology Department and is responsible for the Environmental and Occupational Health Sciences Section at Londrina State University, Paraná, Brazil. His research interests include respiratory disease in mineral dust-exposed workers. Professor Paul-André Rosental is a Professor of modern history at Sciences Po in Paris. He has directed an ERC
Effects of sea animal colonization on the coupling between dynamics and activity of soil ammonia-oxidizing bacteria and archaea in maritime Antarctica. Qing Wang, Renbin Zhu,* Yanling Zheng, Tao Bao, Lijun Hou.* 1 Anhui Province Key Laboratory of Polar Environment and Global Change, School of Earth and Space Sciences, University of Science and Technology of China, Hefei 230026, P. R. China; 2 State Key Laboratory of Estuarine and Coastal Research, East China Normal University, Shanghai 200062, P. R. China. *Corresponding authors. Email: zhurb@ustc.edu.cn or ljhou@sklec.ecnu.edu.cn; Tel: 0086-551-3606010; Fax: 0086-551-360758. Introduction Nitrification, the oxidation of ammonia to nitrate through nitrite, plays a pivotal role in the global biogeochemical nitrogen cycle (Nunes-Alves, 2016). As the first and rate-limiting step of nitrification, ammonia oxidation (the aerobic oxidation of ammonia to nitrite) is performed by phylogenetically and physiologically distinct groups of ammonia-oxidizing archaea (AOA) and ammonia-oxidizing bacteria (AOB) (Belser and Schmidt, 1978; Könneke et al., 2005). AOA and AOB have been investigated using the amoA gene as a functional marker in a wide variety of environments, including soils (Di et al., 2009; Gubry-Rangin et al., 2017; Leininger et al., 2006; Ouyang et al., 2016; Shen et al., 2012), sediments (Li et al., 2015; Zheng et al., 2013), estuaries (Dang et al., 2008; Mosier et al., 2008; Santoro et al., 2011), the oxic and suboxic marine water column (Baker et al., 2012; Bouskill et al., 2012), plateau permafrost (Zhang et al., 2009; Zhao et al., 2017), and subarctic and arctic soils (Alves et al., 2013; Daebeler et al., 2017). Results indicated that the relative abundance and functional importance of AOA vs.
AOB vary greatly in natural ecosystems. Environmental drivers, including substrate concentration, oxygen availability, pH, and salinity, might be responsible for the different AOA and AOB abundances and distributions (Alves et al., 2013; Bouskill et al., 2012; Le Roux et al., 2008; Wang et al., 2015). The abundance, diversity, and activity of ammonia oxidizers have been explored in tundra soils of the Antarctic Peninsula (Jung et al., 2011; Yergeau et al., 2007) and the Antarctic Dry Valleys (Ayton et al., 2010; Magalhães et al., 2014; Richter et al., 2014) and in Antarctic coastal waters (Kalanetra et al., 2009; Tolar et al., 2016). However, there is still a large gap in our understanding of the factors that control AOA vs. AOB prominence, and the relationships between nitrification rates and ammonia-oxidizer dynamics need to be explored in Antarctica. In maritime Antarctica, a large number of sea animals, such as penguins or seals, settle on coastal ice-free tundra patches. Tundra vegetation including mosses, lichens, and algae, penguin colonies, and their interactions form a special ornithogenic tundra ecosystem (Tatur et al., 1997). The soil biogeochemistry of the ornithogenic tundra ecosystem has become a research hotspot under penguin-activity disturbance (Otero et al., 2018; Riddick et al., 2012; Simas et al., 2007; Zhu et al., 2013, 2014). Previous studies indicated that sea animals significantly affect the tundra N and P cycles (Lindeboom et al., 1984; Simas et al., 2007; Zhu et al., 2011), and the total N and P excreted by seabird breeders and chicks are 470 Gg N yr−1 and 79 Gg P yr−1 in Antarctica and the Southern Ocean, accounting for 80 % of the N and P from total global seabird excreta (Otero et al., 2018). Uric acid is the dominant N compound in penguin guano, and during its mineralization, different N forms, such as NH3, NH4+, and NO3−, can be produced via ammonification, nitrification, and deposition, following the changes in soil pH and the C : N
ratio (Blackall et al., 2007; Otero et al., 2018; Riddick et al., 2012). The alteration of soil biogeochemistry under sea-animal-activity disturbance might have an impact on the abundance and diversity of the AOA and AOB involved in the nitrogen cycle. Increased bacterial abundance, diversity, and activity have been detected in penguin or seal colony soils (Ma et al., 2013; Zhu et al., 2015). Penguin or seal colonies have been confirmed as strong sources of the greenhouse gas N2O (Zhu et al., 2008, 2013), a by-product of microbial ammonia oxidation (Santoro et al., 2011). However, the effects of sea animal colonization on AOA and AOB community structures have not been thoroughly investigated in the maritime Antarctic tundra. In the present study, we investigated the abundance, potential activity, and diversity of soil AOA and AOB in five tundra patches, including a penguin colony, a seal colony, the adjacent animal-lacking tundra, tundra marsh, and background tundra, where soil biogeochemical properties were subjected to the differentiating effects of sea animal activities. Our objectives were (a) to examine the abundance, diversity, and community structure of soil AOA and AOB using the amoA gene as a functional marker; (b) to investigate potential links between amoA gene abundance, AOA and AOB community structures, potential activity, and environmental variables; and (c) to assess the relative contribution of these two distinct ammonia-oxidizing groups to nitrification. Study area The study area is located on the Fildes Peninsula and Ardley Island in the southwest of King George Island (Fig.
1), having oceanic climate characteristics. The mean annual air temperature is about −2.5 °C, with a range of daily mean temperatures from −26.6 to 11.7 °C, and the mean annual precipitation is about 630 mm, mainly in the form of snow. The Fildes Peninsula (about 30 km² in area) is host to important sea animal colonies. Based on annual statistical data, a total of over 10 700 sea animals colonize this peninsula in the austral summer. On the western coast there are established seal colonies including elephant seal (Mirounga leonina), Weddell seal (Leptonychotes weddellii), fur seal (Arctocephalus gazella), and leopard seal (Hydrurga leptonyx) (Sun et al., 2004). Ardley Island, about 2.0 km long and 1.5 km wide, is connected to the Fildes Peninsula via a sand dam. This island belongs to an important ecological reserve for penguin populations in western Antarctica. A great majority of breeding penguins, including Adélie penguins (Pygoscelis adeliae), gentoo penguins (Pygoscelis papua), and chinstrap penguins (Pygoscelis antarcticus), colonize the east of this island in the austral summer. Seal excrements or penguin droppings rich in nitrogen and phosphorus are transported into local tundra soils by ice- and snow-melting water during the breeding period (Sun et al., 2000, 2004). Mosses and lichens dominate the local vegetation. However, vegetation is almost absent in penguin or seal colonies because of over-manuring and animal trampling. A more detailed description of the study area can be found in Zhu et al. (2013). Tundra soil collection In the summer of 2014/2015, soil samples were collected from the following tundra patches, as illustrated in Fig. 1. i.
Penguin colony and penguin-lacking tundra sites: the tundra on Ardley Island was categorized into three areas from east to west according to the distance to the penguin nesting sites (i.e., the intensity of penguin activity): the eastern active penguin colony with nesting sites, PS (i.e., the high penguin-activity area), where penguins have the highest density and a high-frequency presence during the breeding period; the adjacent penguin-lacking tundra areas, PLs (i.e., low penguin-activity areas), in the middle of Ardley Island, where penguins occasionally wander and have a typically low density; and the western tundra marsh, MS, moderately far from penguin nesting sites (i.e., a slight penguin-activity area), where penguins rarely frequent the sites. In total, 14 soil samples were collected from Ardley Island to study the effects of penguin colonization on the abundance, activity, and community structures of soil AOA and AOB. Specifically, samples PS1-PS5 were collected sequentially from the center of the colony in the PS. Samples PL1-PL4 and MS1-MS5 were randomly collected in the PL and MS. ii. The seal colony and its adjacent tundra sites, SSs: these sites are on the western coast of the Fildes Peninsula. According to the distance to seal wallows (i.e., the intensity of seal activity), samples SS1-SS5 were collected in sequence to investigate the effects of seal colonization. Site SS1 was closest to the seal colony (i.e., a high seal-activity site), whereas SS5 was the farthest from the seal colony (i.e., a low seal-activity site). iii. Background tundra sites, BSs: three soil samples were collected from an upland tundra at about 40 m a.s.l. with no sea animals around. The tundra surface is covered with mosses or lichens with a 10-15 cm organic clay layer (Zhu et al., 2013).
At each sampling site, soil was collected aseptically using a clean scoop from the top 5-10 cm at the four corners of a 1 m² subarea and combined into one sample. Appropriate precautions were taken to avoid cross-site or human-made contamination. Immediately after collection, each sample was divided into two portions: one was stored in sterile plastic containers at −80 °C for the analysis of the microbial community structures, and the other portion was stored at close to the in situ temperature to determine the geochemical characteristics and potential ammonia oxidation rates. All of the analyses were conducted within 1 month. General analysis of soil characteristics Soil pH was determined by mixing the soil and 1 M KCl solution (1 : 3 ratio). Soil moisture was measured by oven drying at 105 °C to a constant weight. Total carbon (TC), total nitrogen (TN), and total sulfur (TS) contents in the soils were determined with a CNS (carbon, nitrogen, sulfur) analyzer (vario MACRO, Elementar, Germany). The samples were digested in Teflon tubes using HNO3-HCl-HF-HClO4 digestion at 190 °C (Gao et al., 2018; Zhu et al., 2011). Measurement of soil potential ammonia oxidation rate The potential ammonia oxidation rate (PAOR) in tundra soil was determined using the chlorate inhibition method (Kurola et al., 2005; Xia, 2007). Sodium chlorate was used to inhibit NO2− from being oxidized into NO3−. Briefly, 5 g of fresh tundra soil was incubated in 20 mL of 1 mM phosphate-buffered saline with 1 mM (NH4)2SO4 and NaClO3 in the dark at 15 °C. After moderate shaking for 24 h, 5 mL of 2 M KCl was used to extract the nitrite. The optical density of the supernatant after centrifugation was determined spectrophotometrically at 540 nm. The standard curve obtained from NaNO2 standards (0-2.5 µmol L−1) was used to calculate the PAOR in the tundra soils.
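The assay arithmetic behind the chlorate-inhibition method can be sketched as follows: fit a linear standard curve from the NaNO2 standards, convert a sample's OD540 to a nitrite concentration, and express the nitrite accumulated over the incubation as µg N per kg soil per hour. This is a minimal illustration; the function names, OD readings, and the assumed extract volume are ours, not from the paper.

```python
import numpy as np

def nitrite_from_od(od540, std_od, std_conc):
    """Convert OD540 readings to nitrite concentration (umol/L)
    via a linear standard curve fitted to NaNO2 standards."""
    slope, intercept = np.polyfit(std_od, std_conc, 1)
    return slope * np.asarray(od540) + intercept

def paor(no2_umol_per_l, extract_vol_l, soil_g, hours, n_mw=14.0):
    """Potential ammonia oxidation rate in ug N kg-1 h-1."""
    umol_n = no2_umol_per_l * extract_vol_l   # umol NO2- accumulated in extract
    ug_n = umol_n * n_mw                      # 14 ug N per umol N
    return ug_n / (soil_g / 1000.0) / hours   # normalize by soil mass and time

# Hypothetical data: standards 0-2.5 umol/L vs their measured OD540
std_conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
std_od = np.array([0.00, 0.11, 0.22, 0.33, 0.44, 0.55])

sample_no2 = nitrite_from_od(0.30, std_od, std_conc)           # umol/L
rate = paor(sample_no2, extract_vol_l=0.025, soil_g=5.0, hours=24.0)
```

With 5 g soil and a 24 h incubation, any measured OD maps directly to a rate; the 0.025 L extract volume above is an assumption (20 mL buffer plus 5 mL KCl extractant).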
Sequencing and phylogenetic analysis The amplification products were sent to Sangon Company (Shanghai, China) for purification, cloning, and sequencing (Y. L. Zheng et al., 2014). The sequences were edited using DNAstar (DNASTAR, Madison, WI, USA) and then aligned by MUSCLE (Edgar, 2004) using the UPGMB (unweighted pair group method with arithmetic mean) clustering method with the ClustalX program. Sequences with 97 % identity were grouped into one OTU (operational taxonomic unit) using the mothur program (version 1.23.0; Schloss et al., 2009) with the furthest-neighbor approach (Y. L. Zheng et al., 2014). The closest reference sequences were identified at NCBI (http://www.ncbi.nlm.nih.gov/BLAST/, last access: 5 August 2018) using the BLASTn tool (Madden, 2002), and phylogenetic trees were constructed by the neighbor-joining method using the Molecular Evolutionary Genetics Analysis (MEGA) software (version 5.03, https://www.megasoftware.net/, last access: 5 August 2018). The sequences reported in this study have been deposited in GenBank under accession numbers MH318029 to MH318568 and MH301331 to MH302505. Quantitative real-time PCR The AOB and AOA amoA gene copy numbers for tundra soils were determined in triplicate using quantitative real-time PCR (qPCR) on an ABI 7500 Sequence Detection System (Applied Biosystems). The specific details were given by Y. L. Zheng et al. (2014). A strong inverse linear relationship between the threshold cycle and the log value of gene copy numbers confirmed the consistency of the qPCR assay (R² = 0.997 for AOA; R² = 0.999 for AOB). The amplification efficiencies for AOA and AOB were 99.8 % and 90.4 %, respectively. Melting curve analysis showed only one observable peak at the melting temperature (Tm) (84.9 °C for AOA and 89.6 °C for AOB) (Fig. S1 in the Supplement). Negative controls were included to exclude any possible carryover or contamination in all experiments.
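The quoted R² and amplification-efficiency values come from the qPCR standard curve: a regression of threshold cycle (Ct) against log10 gene copies, with efficiency E = 10^(−1/slope) − 1 (a slope of about −3.32 corresponds to 100 %). A minimal sketch, using a hypothetical dilution series rather than the study's data:

```python
import numpy as np

def qpcr_efficiency(log_copies, ct):
    """Fit Ct vs log10(copies); return (slope, R^2, efficiency E).
    E = 10**(-1/slope) - 1; slope of -3.32 corresponds to E = 100 %."""
    slope, intercept = np.polyfit(log_copies, ct, 1)
    pred = slope * np.asarray(log_copies) + intercept
    ss_res = np.sum((np.asarray(ct) - pred) ** 2)
    ss_tot = np.sum((np.asarray(ct) - np.mean(ct)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    eff = 10.0 ** (-1.0 / slope) - 1.0
    return slope, r2, eff

# Hypothetical 10-fold dilution series: 10^3 to 10^8 copies per reaction
logs = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
cts = np.array([33.2, 29.9, 26.6, 23.2, 19.9, 16.6])
slope, r2, eff = qpcr_efficiency(logs, cts)   # eff close to 1.0 (about 100 %)
```

The same regression yields both quality checks reported in the text: the R² of the standard curve and the per-assay efficiency.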
Statistical analysis The Shannon-Wiener index, Simpson index, and the richness estimator Chao1 were calculated with the mothur program (version 1.23.0; Schloss et al., 2009). The coverage was the percentage of the number of observed OTUs divided by the Chao1 estimate (Table S1 in the Supplement). The Kruskal-Wallis test and Wilcoxon signed rank test were conducted for the comparison of amoA gene abundance and PAOR among the five tundra patches using SPSS Statistics 17 (IBM Corp., Armonk, NY, USA). Correlations between ammonia-oxidizer gene abundance, PAOR, and environmental variables were obtained by Spearman correlation analysis. The relationships between the ammonia-oxidizer community structure and environmental variables were explored using canonical correspondence analysis (CCA) in the software Canoco for Windows (version 4.5; Microcomputer Power, Ithaca, NY, USA) because the maximum gradient length of both AOA and β-AOB was longer than four SD (AOA: 4.406; AOB: 18.326). All environmental parameter values were transformed into ln(x + 1) before statistical analyses. The OTU richness (defined at 3 % distance) served as the species input, and several simulations of manual forward selection were performed with 499 Monte Carlo permutations to build the optimal models. The scaling in the final CCA biplots was focused on inter-sample relations.
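The diversity statistics named above have simple closed forms: Shannon-Wiener H' = −Σ p_i ln p_i, Simpson diversity 1 − Σ p_i², the Chao1 richness estimate S_obs + F1(F1 − 1)/(2(F2 + 1)) from singleton (F1) and doubleton (F2) counts, and coverage as observed OTUs over Chao1. A minimal sketch with a hypothetical clone library (the counts are ours, not from Table S1):

```python
import math

def diversity(otu_counts):
    """Shannon-Wiener H', Simpson (1 - D), Chao1, and coverage
    from per-OTU clone counts of one library."""
    n = sum(otu_counts)
    props = [c / n for c in otu_counts if c > 0]
    shannon = -sum(p * math.log(p) for p in props)     # H' (natural log)
    simpson = 1.0 - sum(p * p for p in props)          # 1 - D
    s_obs = len(props)                                 # observed OTUs
    f1 = sum(1 for c in otu_counts if c == 1)          # singletons
    f2 = sum(1 for c in otu_counts if c == 2)          # doubletons
    chao1 = s_obs + (f1 * (f1 - 1)) / (2.0 * (f2 + 1)) # bias-corrected Chao1
    coverage = s_obs / chao1                           # fraction of estimated richness seen
    return shannon, simpson, chao1, coverage

# Hypothetical clone library: 6 OTUs with these clone counts
h, d, c1, cov = diversity([30, 12, 5, 2, 1, 1])
```

With two singletons and one doubleton, Chao1 here is 6 + (2·1)/(2·2) = 6.5, so coverage is 6/6.5, matching the "observed OTUs divided by Chao1" definition in the text.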
Soil chemistry and sea animal activities Almost all the tundra soils were slightly acidic, and the mean pH ranged from 5.3 to 6.6 at each tundra patch (Table 1). In PS and SS, soil properties including TC, TN, TS, TP, NH4+-N, and NO3−-N levels showed high heterogeneity due to the deposition of penguin or seal excreta. In the seal colony tundra soils, the highest TC, TN, TP, TS, and NH4+-N levels occurred at the sites (SS1-2) close to the seal wallows. In the tundra soils on Ardley Island, the highest TP, TS, and NH4+-N levels occurred in the soils close to the eastern penguin nesting sites (PS1-5). PS and SS had generally lower C : N ratios than PL, MS, and BS. Soil mean TN, TS, and NH4+-N levels were higher in PS, SS, PL, and MS than in BS. Soil NH4+-N contents were 1-2 orders of magnitude higher in PS and SS than in PL, MS, and BS, with means of 176.9 and 137.6 mg NH4+-N kg−1, respectively. The highest NO3−-N contents occurred in SS. Phosphorus levels were significantly greater (p < 0.05) in PS (10.6-32.9 mg g−1) than in the other types of tundra soils (mean < 6.0 mg g−1). Overall, penguin or seal activities altered the local soil biogeochemical properties through the deposition of their excreta, leading to generally low C : N ratios in tundra soils. Gene abundances under sea animal colonization AOB amoA gene abundances were significantly higher (by approximately 2-4 orders of magnitude) than AOA amoA gene abundances (Wilcoxon test, n = 22, P = 0.002) in the penguin and seal colony and the adjacent tundra soils, PS, SS, and PL. However, amoA gene abundances were similar in the MS and BS soils (Fig. 2a). Overall, the abundances of AOB and AOA amoA genes were significantly negatively correlated (r = −0.93, P = 0.002) across all the tundra patches (Fig.
S2). The AOA amoA gene abundances showed a heterogeneous distribution among the different tundra patches, and they were 2 orders of magnitude lower in PS and SS relative to those in BS and MS. The maximum AOA amoA gene abundance appeared in BS, followed by MS and PL, whereas the PS and SS soils had the lowest AOA amoA gene abundances. The log values of soil AOA amoA gene abundances showed a significant positive correlation (r = 0.52, P < 0.001) with C : N ratios (Fig. 3a), but their abundances showed a significant negative correlation with NH4+-N contents (r = −0.52, P = 0.013) (Table 2). Potential ammonia oxidation rates under sea animal colonization PAORs ranged from 8.9 to 138.8 µg N kg−1 h−1 in all the soil samples (Table 1). The PAOR was slightly higher in SS (mean 76.1 µg N kg−1 h−1) than in PS (mean 64.7 µg N kg−1 h−1) but significantly higher than in PL, MS, and BS (mean 12.0-21.8 µg N kg−1 h−1). Overall, the PAOR was significantly higher in animal colony soils (mean 70.4 µg N kg−1 h−1 for SS and PS) than in non-animal colony soils (mean 15.7 µg N kg−1 h−1 for PL, MS, and BS; Kruskal-Wallis test, χ² = 11.6, P = 0.02) (Fig. 2c). The greatest PAOR occurred at site PS1, nearest the penguin nests (88.8 ± 2.7 µg N kg−1 h−1), and at SS1, close to the seal wallows (138.8 ± 0.8 µg N kg−1 h−1). The PAOR followed the distribution changes of the AOB amoA gene abundances but showed the opposite trend to the AOA amoA gene abundances. A significant positive correlation (r² = 0.77, P < 0.001) was observed between the PAOR and the AOB amoA gene abundance when the data from all the tundra patches were combined, whereas no correlation occurred between the PAOR and the AOA amoA gene abundance (Fig.
4). The higher abundance of AOB compared to AOA in PS, SS, and PL and their correlation with the PAOR suggested that AOB populations might contribute more to the PAOR than the AOA populations in penguin or seal colonies. In addition, the PAOR significantly negatively correlated with soil C : N ratios (r = −0.73, P < 0.001) (Fig. 3d) but significantly positively correlated with TS contents (r = 0.47, P < 0.05) and TP contents (r = 0.43, P < 0.05) (Table 2). Community structure of AOA and AOB under sea animal colonization The PCR products were insufficient to construct the clone libraries for the AOA amoA gene from SS and PS because of the low AOA abundance in the soils, as was the case with the AOB amoA gene from MS and BS. Overall, 10 AOA and 14 AOB amoA gene clone libraries were successfully constructed. The 543 AOA sequences and 1175 AOB quality sequences were generated from the respective sites. Within each individual site, 1-6 AOA OTUs and 6-15 AOB OTUs were identified, as defined by < 3 % divergence in nucleotides. The AOA and AOB OTU numbers for each library are presented in Table S1. These numbers might be higher if more clones were sequenced, based on the rarefaction curves (Figs. S3 and S4). AOB amoA gene diversity was generally higher compared to that of AOA, based on the Shannon-Wiener and Simpson indices. Specifically, AOA amoA gene diversity was higher in PL and MS than in BS, whereas AOB amoA gene diversity was higher in SS and PS compared with that in the adjacent animal-lacking tundra soils (Table S1). The 543 AOA amoA gene sequences had 76 %-100 % sequence similarity to each other and 95 %-100 % identity with the corresponding top-hit amoA sequences deposited in GenBank. Phylogenetic analysis showed that the AOA amoA sequences were grouped into 16 unique OTUs, representing 100 % of all the AOA amoA OTUs identified, and these sequences were affiliated with two Nitrososphaera clusters (Fig.
5a): cluster I contained 11 OTUs and 264 clones, and 57.9 % of the AOA amoA sequences were from PL, 41.3 % from SS, and only 0.8 % from MS. In cluster II, there were five unique OTUs and 279 clones, and 58.8 % of them were from BS, 38.3 % from MS, and only 2.9 % from PL. Almost all the AOA phylotypes retrieved from PL and SS were related to Nitrososphaera cluster I, whereas the AOA phylotypes retrieved from MS and BS were distributed in cluster II (Fig. S5a). Seal or penguin activities led to the predominant existence of AOA phylotypes related to cluster I but very low relative abundances of AOA phylotypes related to cluster II, which were almost completely excluded in SS and PL. Almost all AOA phylotypes in BS and MS were related to Nitrososphaera cluster II, whereas the relative abundances of AOA phylotypes related to cluster I were very low or undetectable. The 1175 AOB amoA gene sequences shared 87 %-100 % sequence identity with each other and 93 %-100 % identity with the closest matched GenBank sequences. Phylogenetic analysis showed that the AOB amoA sequences could be grouped into 38 unique OTUs, representing 58.5 % of all the AOB amoA OTUs identified, and they were grouped into four Nitrosospira clusters according to the evolutionary distance of the phylogenetic tree (Fig. 5b): cluster I contained 11 OTUs and 226 clones, and 67.7 % of the AOB amoA sequences were from PS, 23.5 % from SS, 8.4 % from PL, and only 0.4 % from MS. Clusters II and III contained 17 unique OTUs and 521 clones. The sources of the OTUs in cluster II were similar to those of cluster I, with 69.8 % from PS, 29.9 % from SS, and 0.3 % from PL. For cluster III, 79.2 % of the sequences were from PL, 19.8 % from SS, and 1.0 % from MS.
Cluster IV contained nine unique OTUs and 370 clones from PL (50.0 %), SS (36.8 %), and MS (13.2 %). All the AOB phylotypes retrieved from PS were related to the dominant Nitrosospira clusters I and II, whereas AOB phylotypes related to clusters III and IV were completely excluded because of penguin colonization (Fig. S5b). The AOB phylotypes retrieved from SS were distributed in clusters I, II, III, and IV (16 %-38 % for each cluster). Almost all the AOB phylotypes retrieved from PL and MS were related to Nitrosospira clusters III and IV. Relationships of the ammonia-oxidizer community structure with environmental variables The relationships of the AOA and AOB communities with environmental variables were analyzed using CCA. The environmental variables explained 62.1 % of the total variance in the AOA amoA genotype compositions and 71.5 % of the cumulative variance of the genotype-environment relationships in the first two CCA dimensions (Fig. 6a). Overall, the AOA community structures significantly correlated with C : N (F = 2.59, P = 0.022) and TC (F = 2.07, P = 0.048) in tundra soils (Table 3), and the combination of the two factors explained 39.6 % of the variation. High soil C : N ratios and TC concentrations increased the AOA richness in MS and BS. Although other environmental parameters, including TP, pH, NH4+-N, and NO3−-N, were not statistically significant (P > 0.05), these variables additionally explained 47.3 % of the variation. As illustrated in Fig.
6b, the first two dimensions explained 26.6 % of the total variance in the AOB compositions and 54.3 % of the cumulative variance of the AOB genotype-environment relationships. The composition and distribution of AOB communities correlated significantly with C : N ratios (F = 1.844, P = 0.002) and NH4+-N (F = 1.823, P = 0.002), and the two factors combined yielded 21.9 % of the total CCA explanatory power. The others, including TP, NO3−-N, and pH, accounted for 27.1 % of the variance. Penguin or seal activities significantly increased the AOB richness in SS and PS through higher NH4+-N and P input from sea animal excrement, whereas AOB richness was closely related to the soil C : N ratio in PL and MS. Effects of sea animal colonization on AOA and AOB abundances In this study, soil AOA amoA gene abundances were 2 orders of magnitude lower in PS and SS relative to BS and MS; however, AOB amoA gene abundances were approximately 2-3 orders of magnitude higher in PS and SS than in MS and BS, indicating that sea animal activities increased the AOB population size but decreased AOA abundances in tundra soils (Figs.
2 and 3). Overall, the AOA amoA gene abundances obtained here were similar to the abundance range reported in the soils of the Antarctic Dry Valleys and arctic tundra soils; however, the AOB amoA gene abundances were 2-3 orders of magnitude higher in PS and SS than in the Antarctic Dry Valleys (Alves et al., 2013; Magalhães et al., 2014). In contrast to previous studies indicating that AOA were more abundant than AOB in some terrestrial or marine ecosystems (Beman et al., 2008; Lam et al., 2007; Wuchter et al., 2006; Yao et al., 2011) and in soils from the Antarctic Peninsula (Jung et al., 2011), our qPCR estimates showed that the AOB amoA copy numbers were much greater than those of AOA amoA in PS, SS, and PL because of sea animal activities. However, their abundances were very similar to each other in BS and MS. The ratios of AOB to AOA abundance were strongly affected by sea animal activities, which were indicated by soil C : N ratios (Fig. 2c). A shift in the relative abundance of AOA and AOB was recorded previously for the Antarctic Dry Valleys, with a greater abundance of AOB compared with that of AOA for Battleship Promontory (Magalhães et al., 2014). The results for PS, SS, and PL are also in agreement with those detected in subglacial soils (Boyd et al., 2011).
The ratios of AOB to AOA showed significant correlations with C : N, NH4+-N, and TP when all the data were combined across the five tundra patches (Table 2). This suggested that C : N, NH4+-N, and TP are key factors in determining a predominance of AOB over AOA. In Antarctica, the productivity of terrestrial ecosystems is strongly limited because of the extremely low nitrogen levels (Park et al., 2007). However, the physiochemical properties of tundra soils were strongly influenced by the deposition of penguin or seal excreta under the effects of local microbes (Tatur et al., 1997). Sea animals provide considerable external N inputs to their colony soils and adjacent tundra soils through the direct input of their excreta and atmospheric deposition via ammonia volatilization (Lindeboom, 1984; Sun et al., 2002; Blackall et al., 2007; Zhu et al., 2011; Riddick et al., 2012). In addition to ammonium, phosphorus can typically be found in penguin guano (Sun et al., 2000). Generally low C : N ratios and significantly elevated NH4+-N and TP concentrations occurred in PS and PL due to penguin or seal activities (Table 1). These conditions allow a high abundance of AOB amoA genes, which explains the strong correlations between AOB abundances and C : N, NH4+-N, and TP in the sea animal colony soils (Table 2). This agreed with the high bacterial abundance previously documented in penguin or seal colony soils and ornithogenic sediments (Ma et al., 2013; Zhu et al., 2015).
The AOA abundance showed a significant negative correlation with NH4+-N levels in the tundra patches (Table 2), consistent with the adaptation of AOA to oligotrophic environments (Martens-Habbena et al., 2009; Stieglmeier et al., 2014). High NH4+-N concentrations might partially inhibit AOA populations (Hatzenpichler et al., 2008). This result is similar to that reported for some agricultural soils with increased fertilization and grassland soils with increased grazing (Fan et al., 2011; Prosser and Nicol, 2012; Pan et al., 2018), supporting the conclusion that AOA and AOB generally inhabit different niches in soil, distinguished by the NH4+ concentration and availability (Verhamme et al., 2011; Wessén et al., 2011). Effects of sea animal colonization on soil potential ammonia oxidation rates The PAOR ranged from 9 to 139 µg N kg−1 h−1, lower than the nitrification rates measured in most agricultural soils (83-1875 µg N kg−1 h−1) (Fan et al., 2011; Ouyang et al., 2016; Daebeler et al., 2017). One reason might be the selection of a 15 °C incubation temperature, which was lower than the incubation temperatures used in other studies. Generally, the gross nitrification rate and amoA abundance increase significantly when the incubation temperature is higher than 15 °C (Daebeler et al., 2017; Zhao et al., 2014). Our measurements indicated that there were significant differences (P = 0.02) in the PAOR across the different tundra patches, and the PAORs in SS and PS were about 10 times higher than those in BS and MS. A significant correlation was obtained between the PAOR and C : N, TP, and TS (Table 2). Overall, ammonia oxidation activity was modulated by soil biogeochemical processes under the disturbance of penguin or seal activities: generally low C : N ratios and sufficient input of the nutrients TP, TS, and NH4+-N from sea animal excrements. The higher AOB abundances (Fig.
2b) and the significant negative correlation of AOA abundance with NH4+-N levels (Table 2) indicated that AOB might play a more important role in nitrification in tundra soils. In agreement with these results, AOB dominated nitrification in areas where nitrogen input was readily achieved, whereas the relative contribution of AOA to nitrification was higher in areas where the ammonium concentration remained low (Fan et al., 2011; Sterngren et al., 2015). Moreover, the cell-specific activity of AOB was 10 times higher than that of AOA due to the bigger cell size of AOB (Hatzenpichler, 2012; Prosser and Nicol, 2012). Therefore, AOB might play a more important role in nitrification in SS, PS, and PL with the input of NH4+-N from penguin or seal excrements.

In addition, AOA might play a role that cannot be ignored in MS and BS, much like the prevalence of AOA among ammonia oxidizers in arctic soils (Alves et al., 2013; Daebeler et al., 2017). AOB groups were mostly undetectable in the analysis of MS and BS. Although unknown γ-AOB groups might not have been detected, the primer set used here covers the β-AOB groups typically found in soils (Alves et al., 2013). The BS and MS were moderately far away from penguin or seal colonies without the input of nutrients from sea animal excrements, and their substrates can be provided only through the mineralization of organic matter from local tundra plants. The simple organic substrates and barren soil environment might favor AOA (Stopnišek et al., 2010; Habteselassie et al., 2013). Therefore, AOA showed relatively high abundance in MS and BS compared with PS and SS.

Effects of sea animal colonization on genotypic diversity of soil AOA and AOB

In this study, distinct AOA communities appear to inhabit different types of tundra patches, depending on sea animal activities (Fig.
5a). It was difficult to amplify the AOA amoA gene from SS and PS, whereas a high diversity of AOA amoA genes was observed in PL, MS, and BS. Phylogenetic analysis indicated that the AOA amoA sequences in cluster I were from PL and tundra soils close to seal wallows, while the sequences in cluster II were from BS and MS (Fig. S5). AOA in most extreme environments have lower levels of microbial diversity than in benign ecosystems because of the requirement for specific physiological adaptations that allow organisms to exploit the combination of physical and biochemical stressors (Cowan et al., 2015). Detected OTUs in cluster I had their closest matches mainly in the hyper-arid soils of the Antarctic Dry Valleys (Magalhães et al., 2014), wetland soils (Y.K. Zheng et al., 2014), alpine meadow soils (Zhao et al., 2017), and some agricultural soils (Glaser et al., 2010). Cluster II was more prevalent in BS and MS, probably because of its stronger adaptation to barren soil environments. In cluster II, the sequences were affiliated with sequences recovered from cold environments, including soils of the Tibetan Plateau (Xie et al., 2014) and Icelandic grassland soils (Daebeler et al., 2012). The compositions of soil AOA populations are likely not explained by individual physicochemical properties, and their community structures significantly correlated with tundra soil C : N and TC, which was consistent with previous studies (Glaser et al., 2010; Wessén et al., 2011). AOB amoA gene diversity was higher than that of AOA, similar to results in the Antarctic Dry Valley soils (Magalhães et al., 2014). A high diversity of AOB amoA genes occurred in SS, PS, and PL compared to BS, indicating that penguin or seal activities had important effects on AOB genotypic diversity. Phylogenetic analysis indicated that the sequences in clusters I and II were mainly from PS and SS (Fig.
5b), and the detected OTUs in cluster I had their closest matches in mixed community culture systems, a meadow-to-forest transect in the Oregon Cascade Mountains (Mintie et al., 2003), Dutch agricultural soils (M.C. Silva et al., 2012a), and reservoir sediments (A.F. Silva et al., 2012b). For clusters III and IV, the sequences were predominantly from PL and SS, and they were affiliated with sequences recovered from high-altitude wetland (Yang et al., 2014). Previous studies have shown that multiple environmental factors affected the AOB communities (Dang et al., 2008; Mosier and Francis, 2008). In this study, the C : N ratios and NH4+-N concentrations seemed to be the most important factors influencing the AOB community structure, which was in accordance with results from different environments (Bouskill et al., 2012; Jung et al., 2011; Li et al., 2015). Moreover, TP also affected the AOB amoA community compositions (Zheng et al., 2013). Therefore, the AOB community compositions were impacted by biogeochemical factors related to sea animal activities, such as low C : N ratios and a sufficient supply of the nutrients NH4+-N and TP from sea animal excreta.
Conclusions

The findings of this study concerning the abundance, potential activity, and diversity of tundra soil AOA and AOB provide insights into the microbial mechanisms driving nitrification in maritime Antarctica. We confirmed the presence of AOA and AOB amoA genes in five different tundra patches and demonstrated that the spatial distribution heterogeneities of the tundra soil AOA and AOB communities were driven by penguin or seal activities. The soil AOB amoA copy numbers were generally higher than the AOA amoA copy numbers, following the higher PAOR in penguin or seal colonies and their adjacent tundra, compared with that in the background tundra and marsh tundra. Penguin or seal activities resulted in a significant shift in soil AOA and AOB community compositions. AOB amoA gene diversity was higher in SS and PS than in PL and MS, and the majority of the AOB sequences were closely related to Nitrosospira-like sequences. The AOA amoA gene had higher diversity in PL and MS than in BS, and it was associated with Nitrososphaera sequences recovered from barren soils. Soil AOB and AOA abundances, and their community compositions, were related to soil biogeochemical processes under the sea-animal-activity disturbance, such as soil C : N alteration and a sufficient supply of the nutrients NH4+-N, N, and P from animal excreta. This study significantly enhances the understanding of ammonia-oxidizing microbial communities in the tundra environment of maritime Antarctica.

Author contributions. RBZ, LJH, and QW conceived of the study together. QW conducted the laboratory analyses and statistical analyses with support from LJH and YLZ. TB retrieved samples during field work. The paper was written by QW with support from RBZ.

Competing interests. The authors declare that they have no conflict of interest.
Figure 1. Study area and soil sampling sites. Panel (a): the red dot indicates the location of the investigation area in maritime Antarctica. Panel (b): location of the sampling sites on the Fildes Peninsula. The sampled tundra patches included the active seal colony tundra soils SS (SS1-5) on the western coast of the Fildes Peninsula and the background tundra soils in the upland areas (BS1-3). Panel (c): location of the sampling sites on Ardley Island. The sampled tundra patches included the western tundra marsh soils (MS1-5), the eastern active penguin colony tundra soils PS (PS1-5), and the adjacent penguin-lacking tundra soils PL (PL1-4). Note: the map was drawn using CorelDRAW X7 software (http://www.corel.com/cn/, last access: 20 September 2019).

Figure 2. Comparisons of soil AOA and AOB amoA gene copy numbers (a), log ratio of AOB : AOA abundances (b), and potential ammonia oxidation rates (PAORs) (c) between five tundra patches. The error bars indicate standard deviations of the means.

Figure 3. Effects of soil C : N alteration on AOA and AOB abundances and potential ammonia oxidation rates (PAOR) at five tundra patches.

Figure 4. Correlation between potential ammonia oxidation rates (PAORs) and AOA and AOB amoA gene copy numbers in tundra soils of maritime Antarctica.

Figure 5. Neighbor-joining phylogenetic tree of AOA amoA (a) and AOB amoA (b). The phylogeny is based on nucleotide sequences. Bootstrap values ≥ 50 % (of 1000 iterations) are shown near the nodes. GenBank accession numbers are shown for sequences from other studies. OTUs were defined at 97 % similarity. Numbers in parentheses following each OTU indicate the number of sequences recovered from each sampling site.
Figure 6. Canonical correspondence analysis (CCA) ordination plots for the relationship between the AOA and AOB community structures and environmental variables. The circles with different colors represent the various sampling sites. The size of the circles corresponds to the OTU richness in individual samples. The black triangles represent amoA phylotypes. Environmental variables are represented by red arrows. The percentage of species-environment relation variance explained by the two principal canonical axes is shown close to the axes.

Table 2. Spearman correlations (n = 22) among ammonia-oxidizer populations, the ratios of AOA : AOB abundances, potential ammonia oxidation rates (PAOR), and environmental variables in the soils of maritime Antarctic tundra.

Table 3. Individual and combined contributions of soil biogeochemical properties to the AOA and AOB community structures in tundra patches.
ModWaveMLP: MLP-Based Mode Decomposition and Wavelet Denoising Model to Defeat Complex Structures in Traffic Forecasting

Traffic prediction is the core issue of Intelligent Transportation Systems. Recently, researchers have tended to use complex structures, such as transformer-based architectures, for tasks such as traffic prediction. Notably, traffic data is simpler to process than text and images, which raises questions about the necessity of these structures. Additionally, when handling traffic data, researchers tend to manually design the model structure based on the data features, which makes the structure of traffic prediction models redundant and limits their generalizability. To address the above, we introduce 'ModWaveMLP', a multilayer perceptron (MLP) based model designed according to the concepts of mode decomposition and wavelet noise reduction information learning. The model is based on a simple MLP structure, achieves the separation and prediction of different traffic modes, and does not depend on additionally introduced features such as the topology of the traffic network. In experiments on the real-world datasets METR-LA and PEMS-BAY, our model achieves SOTA results, outperforming GNN- and transformer-based models as well as models that introduce additional feature data, while showing better generalizability; we further demonstrate the effectiveness of the various parts of the model through ablation experiments. This offers new insights to subsequent researchers involved in traffic model design. The code is available at: https://github.com/Kqingzheng/ModWaveMLP.

Introduction

Time series prediction, a cornerstone of time series analysis, entails forecasting future values using historical sequential data patterns and trends. In the specific context of traffic prediction (James 2022), which includes forecasting traffic flow, speed, and demand, its applications span route planning, vehicle scheduling, and congestion management (Lee et al. 2021; Fang et al.
2021; Li et al. 2023a).

Neural networks have been extensively used in time series forecasting due to their powerful function-fitting capabilities (Zhou et al. 2022; Liu et al. 2021). Early on, convolutional neural networks (CNNs) captured spatial dependencies in grid-based traffic data, while recurrent neural networks (RNNs) learned temporal dynamics (Wu et al. 2018; Zhang et al. 2018). Presently, graph neural networks (GNNs) dominate this field, excelling in modeling complex spatiotemporal correlations (Jiang and Luo 2022; Jin et al. 2022). The transformer (Vaswani et al. 2017), a prevailing sequential data architecture, also showcases remarkable performance in traffic prediction tasks (Jiang et al. 2023a). Recently, influenced by the 'Large Model' trend, researchers favor large models for intricate tasks like traffic prediction.

On the one hand, unlike high-dimensional image (Han et al. 2022) and natural language (Vaswani et al. 2017) data, traffic time-series data is simpler: sequential numerical information recorded at distinct time points (Tran et al. 2020). Nonetheless, its periodic features and spatial dependencies lead to complex temporal extraction designs. Recent approaches employ intricate models such as graph neural network and transformer variants (Yan, Ma, and Pu 2021; Chen et al. 2022) to capture these traffic features. However, this complexity entails larger parameter counts and greater hardware demands for training and inference, prompting the query: are such elaborate models essential for this data? Doubts about transformers' effectiveness in time series prediction have surfaced. Works like MTS-Mixer (Li et al. 2023b) and the MLP-based traffic prediction model proposed by Oreshkin (Oreshkin et al. 2021; Shao et al. 2022a) challenge transformer- and graph neural network-based models. These studies demonstrate the prowess of simpler MLP models in time series prediction (Zeng et al.
2023). Given MLP's simplicity, training efficiency, and inference efficiency, it might be a promising direction for traffic prediction, offering strong performance without the complexity of transformers and graph neural networks.

On the other hand, in order to improve the performance of traffic prediction, new structures based on complex architectures have been designed around unique data characteristics, which further brings about model complexity and redundancy, such as new traffic features based on urban zones and delayed diffusion influenced by road factors (Jiang et al. 2023a). However, these features might not universally apply, and the original data might lack the corresponding information for these features (Zhang, Zheng, and Qi 2017). Therefore, manually selected features may lead to poor generalization of the model, or even to partial feature selection that increases model complexity with no improvement in performance. In addition, previous researchers have ignored noise in traffic data: traffic data is sensor-collected and potentially tainted by noise (Tang et al. 2019; Chen et al. 2021). Extracting modes and reducing noise is therefore also crucial. How to design a unified structure to process and learn from all this information is a key issue.
In this study, we term these features collectively as "traffic modes": a fusion of single or multiple features. We employ residual separation to decompose these modes within the model, replacing manual feature design. We propose an MLP-based structure designed according to the mathematical idea of the "differential". The model does not rely on complex structures such as GNNs and transformers to extract traffic modes, but uses the idea of residual separation to design an MLP structure that decomposes and captures the different traffic modes and noise in the traffic sequence. The basic MLP module of the model has two branches: one for prediction, and the other for subtracting the traffic mode information extracted by the current module; modules are stacked to decompose different traffic modes and extract the predicted values of each. The model adds a wavelet-decomposition noise reduction information module, which gives the model the ability to decompose traffic noise and resist it by learning the wavelet decomposition information. In summary, the main contributions of this paper are summarized as follows:

• We propose an MLP-based traffic prediction model called ModWaveMLP, which utilizes the residual separation idea to decompose and capture different traffic modes and noise in a traffic sequence for traffic prediction.
• We design a mode decomposition learning module. Through the stacking of fully connected decomposition structures, this module can effectively decompose and learn the original traffic information and the wavelet decomposition information.
• Our model excels on real-world data, achieves state-of-the-art results, effectively isolates various traffic modes, and demonstrates strong noise resistance. Compared to GNN- and transformer-based models, our approach excels in traffic prediction, offering fresh insights for researchers exploring traffic forecasting in the era of 'Large Models'.
Problem Statement

Problem Formalization: Traffic prediction aims to predict the traffic data of a traffic system based on historical observations. Formally, given the traffic data tensor X observed on a traffic system, our goal is to learn a mapping function f from the observations of the previous T steps to the traffic data of the future T' steps.

Methods

ModWaveMLP's framework is depicted in Fig. 1. First, we use a multiplicative gate model to decode the input temporal data into effects that need to be removed, and later restored, through a fully connected module. In contrast to static modeling of daily and weekly cycles, we dynamically model hidden temporal cycles by splicing node embeddings and temporal information separately:

T_day(t) = Concat(T_day(t), T_dynamic(t))  (1)

As shown on the left in Fig. 1, the input T is first mapped through a fully connected layer and activated by the ReLU function:

FC(·) = RELU(Linear(·))  (3)

The processed results are output through two fully connected branches. The output of the backcast branch, T_backcast, is the information used to remove the time effect, and the output of the forecast branch, T_forecast, is the time information to be restored; the outputs of the backcast/forecast branches are divided/multiplied with the information of the other modules for the removal and restoration of the time-period mode information.

Information Aggregation Module

We design an Information Aggregation Module, which models the node history information in the traffic network to provide enough information for the MDL block to perform mode decomposition. We model the dynamic history information by multiplying the embedding information X_embedding (N×E) learned by the nodes with its own transpose.
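As a hedged NumPy sketch of the multiplicative time gate (toy shapes; the specific gate form, the positivity trick, and reusing one gate for both branches are assumptions, since the text only specifies a ReLU-activated FC layer feeding backcast/forecast branches that divide and multiply):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, F = 4, 12, 8                         # toy sizes: nodes, history steps, time features
x = rng.uniform(1.0, 10.0, size=(N, T))    # raw traffic history
t_feat = rng.normal(size=(N, F))           # spliced time + node-embedding features, cf. Eq. (1)

def fc(h, w, b):
    return np.maximum(h @ w + b, 0.0)      # Eq. (3): FC(.) = RELU(Linear(.))

w1, b1 = rng.normal(size=(F, 16)), np.zeros(16)
h = fc(t_feat, w1, b1)

w_gate = rng.normal(size=(16, T))
g_back = 1.0 + np.abs(h @ w_gate)          # backcast gate, kept >= 1 so division is safe
g_fore = g_back                            # sketch: the same gate restores the removed effect

x_detrended = x / g_back                   # backcast branch: remove the time-period mode
restored = x_detrended * g_fore            # forecast branch: restore it after prediction
```

Dividing out and later re-multiplying the same positive gate is lossless, which is the property the encode/decode pair relies on.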
The dynamic graph node information tensor is multiplied with the dimension-expanded traffic history information X_unsqueeze (N×N×T) over T time steps to obtain the historical similarity information between nodes, where X_node_history (N×N×T) contains all the node history information learned by the current node. After a ReLU removes useless associations between nodes, nodes with similar historical information can learn the same traffic mode. We then reshape this information to X_node_history (N×NT), i.e., all the historical information of the current node together with the other nodes. Finally, we concatenate this information X_node_history (N×NT) with the original information X (N×T) as well as the node embedding information X_embedding (N×E) to get the final node aggregation information.

Mode Decomposition Learning Block

Traffic data comprises diverse modes influenced by factors such as node interactions and city zoning. These modes often overlap, which would usually require distinct feature extraction modules. To maintain generality and eliminate redundancy, we employ top-to-bottom mode decomposition and prediction, avoiding the need for multiple specialized modules. We design an MDL Block, the basic module in the model, through which all the input information is decomposed and output. The MDL Block contains M fully connected decomposition structures for information separation and integration. Each fully connected decomposition structure is very simple: one fully connected structure with L hidden layers and two fully connected prediction branches for information decomposition and information output. In order to gradually learn the information in the input data Y, the fully connected decomposition structure first feeds the input information Y_0 into the hidden layers l ∈ [1, L). The output of each hidden layer is transformed by the ReLU activation function, and the final hidden layer H_L is output to the information decomposition layer and the
information output layer, where the information Ŷ_{m−1}, m ∈ [1, M), of the information decomposition layer is the information learned by the module's decomposition. We subtract this information from the input Y_{m−1} of the structure to get the remaining information Y_m after module learning, which can be further input to the next fully connected decomposition structure for learning of the remaining information. At the same time, the output information of each structure is summed to get the final output information. By stacking M fully connected decomposition structures into the MDL module, we accomplish the separation and learning of the different modes in the traffic data. If the MDL block is located in the information encoding module, the final output is multiplied with the time-gated forecast branch to restore the previously removed time effects. Besides step-by-step information decomposition, we enrich the information: each decomposed Ŷ_{m−1} is added to the original Y_{m−1} using a shortcut, progressively enhancing the separated traffic modes. Stacking modules enriches each piece of information while predicting traffic modes with this progressively enriched information as output.

Wavelet Decomposition Learning Module

We design this module to remove noise from the raw data and provide more information for the MDL block. The wavelet transform can analyze traffic data at different scales, can detect features of traffic data (e.g., abrupt changes, spikes, and periodic cycles), and can be used to filter noise from traffic data to improve data quality. The wavelet decomposition is given by W(a, b) = |a|^(−1/2) ∫ f(t) ψ((t − b)/a) dt, where t is the time point, f(t) is the original data, a is the scale, and b is the translation.
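The residual separation inside the MDL block can be sketched in NumPy with two hand-written "blocks" standing in for the learned fully connected branches (the mean extractor and the identity block here are hypothetical illustrations, not the paper's learned mappings):

```python
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 64)
y = 3.0 + np.sin(t)                     # toy series: a constant mode plus a periodic mode

def mean_block(r):
    return np.full_like(r, r.mean())    # stand-in for one learned decomposition branch

def identity_block(r):
    return r.copy()                     # last block absorbs whatever information remains

def mdl_stack(y, blocks):
    residual, out = y.copy(), np.zeros_like(y)
    for block in blocks:
        part = block(residual)          # information this structure decomposes out
        out = out + part                # summed output of all structures
        residual = residual - part      # only the remaining modes reach the next structure
    return out, residual

out, residual = mdl_stack(y, [mean_block, identity_block])
```

Each structure sees only what its predecessors failed to explain, which is how stacking separates overlapping traffic modes.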
Noise is implied in the high-frequency components (A_i) after wavelet decomposition, and the noise reduction process, i.e., thresholding the high-frequency vectors (A′_i), ultimately reconstructs them (D_i & A′_i) to be close to the original traffic data. We choose to use different wavelet basis functions for denoising and aggregate the data after the different noise reduction methods, which enables the prediction model to learn the information produced by multiple methods of noise reduction. Based on previous wavelet transform studies, since different wavelet basis functions are suitable for prediction at different moments, we choose a combination of n wavelet basis functions, set the number of decomposition layers, and select the soft-threshold reconstruction method. The original 2D data f(t) (N×T) are decomposed and reconstructed several times using wavelets, and the multi-reconstructed 3D data f(t) (N×nT), where n is the number of wavelet basis functions, are integrated as model input. More details on the 3D traffic tensor construction process and the selection of the wavelet basis functions can be found in Appendix Sections A & D.

Experiments

Datasets

The ModWaveMLP model is assessed using two traffic datasets: METR-LA and PEMS-BAY (Li et al. 2018). These datasets consist of traffic speed readings collected from loop detectors, aggregated in 5-minute intervals. METR-LA contains 34,272 time steps from 207 sensors deployed in Los Angeles County over 4 months. PEMS-BAY comprises 52,116 time steps from 325 sensors in the Bay Area over 6 months. Following prior research (Li et al. 2018), the datasets are divided into 70% for training, 10% for validation, and 20% for testing.
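The per-basis soft-threshold denoising described in the Wavelet Decomposition Learning Module can be sketched with the simplest Daubechies basis, db1 (Haar), written out by hand in NumPy (one level and one basis only; the full setup uses 4 levels and db1 through db4, and the threshold value here is arbitrary):

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-frequency approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-frequency detail (noise lives here)
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(d, thr):
    # Soft thresholding shrinks detail coefficients toward zero.
    return np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 128))
noisy = clean + rng.normal(scale=0.3, size=clean.size)

a, d = haar_dwt(noisy)
denoised = haar_idwt(a, soft_threshold(d, 0.3))
```

With a zero threshold the transform reconstructs the input exactly; shrinking the detail band is what removes high-frequency noise while keeping the approximation band intact.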
Baselines

We have selected a diverse range of baselines. For time series analysis, we included HA, VAR, SVR, and FC-LSTM (Sutskever, Vinyals, and Le 2014). HA, VAR, and SVR utilize statistical and machine learning approaches for prediction. In the realm of graph neural networks, we chose DCRNN, STGCN, Graph WaveNet, MTGNN, and MegaCRN. DCRNN (Li et al. 2018) employs diffusion convolution, while STGCN, Graph WaveNet, MTGNN, and MegaCRN (Yu, Yin, and Zhu 2018; Wu et al. 2019, 2020; Jiang et al. 2023b) integrate spatio-temporal graph convolution. D²STGNN (Shao et al. 2022b) builds upon this foundation. In the context of attention- or transformer-based techniques, we considered GMAN, ASTGCN, PDFormer, and ST-GRAT. GMAN and ASTGCN (Zheng et al. 2020; Guo et al. 2019) leverage spatio-temporal attention, and PDFormer and ST-GRAT (Jiang et al. 2023a; Park et al. 2020) extend this concept with spatio-temporal transformer structures. We also examined FC-GAGA and STID (Oreshkin et al. 2021; Shao et al. 2022a), which employ MLP-like strategies through fully connected and embedding layers. The hardware environment for the experiments and baselines is detailed in Appendix Section B.

ModWaveMLP Architecture Details and Training Setup

ModWaveMLP is stacked with 4 layers, an embedding dimensionality of 96, and a hidden-layer width of 128 for all fully connected layers. Wavelet decomposition comprises 4 layers with the 'db1', 'db2', 'db3', and 'db4' Daubechies wavelet functions and 'soft' thresholds (shown in Appendix Section D). A 3-layer fully connected decomposition structure with 2 mode decomposition learning blocks is employed.
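The hyperparameters stated above can be collected into a single configuration sketch (the dict keys are hypothetical names; the values are the ones given in this section):

```python
# Hypothetical config dict mirroring the stated ModWaveMLP setup.
config = {
    "layers": 4,                  # stacked ModWaveMLP layers
    "embed_dim": 96,              # node-embedding dimensionality
    "hidden_width": 128,          # width of every fully connected layer
    "wavelet_levels": 4,          # wavelet decomposition layers
    "wavelet_bases": ["db1", "db2", "db3", "db4"],
    "threshold_mode": "soft",
    "fc_decomposition_depth": 3,  # layers per fully connected decomposition structure
    "mdl_blocks": 2,              # mode decomposition learning blocks
}
```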
Performance Study

Comparison results with baselines on the METR-LA and PEMS-BAY datasets are summarized in Tab. 1. Bold results indicate superiority. Key observations: (1) Deep learning models outperform traditional methods like VAR, capturing hidden nonlinear information and spatial dependencies in traffic data. (2) ModWaveMLP achieves the best performance on all metrics, outperforming many GNN- and transformer-based models. In the initial 15-minute and 30-minute predictions on PEMS-BAY, ModWaveMLP improves the MAE, RMSE, and MAPE by 20.97%, 27.31%, 25.58%, 9.35%, and 18.47% […] due to gradient issues. Further, we remove the normalization and dropout operations, and the final performance is still lower than ModWaveMLP.

Robustness Study

In this section, we introduce Gaussian noise into the training data to simulate data noise interference. Noise is added with varying means (-4 to 4) and variances (1 to 9), resulting in 81 noise groups. This aims to test the proposed model's robustness. We compare our model with FC-GAGA, another MLP-based model, in the same noisy environment. Figs. 2c and 2f showcase RMSE changes under interference, with ModWaveMLP's maximum RMSE at 7.0 and a deviation range of about 1.5, while FC-GAGA exhibits a maximum deviation of 12 and an overall deviation of 6. ModWaveMLP's error distribution is relatively smooth, adapting well to noise-induced disturbances. As the noise mean and variance increase, ModWaveMLP maintains a smoother error distribution, while FC-GAGA's error surges under local noise disturbance, emphasizing ModWaveMLP's robustness and its ability to handle diverse scenarios and noise attacks. The reliability of ModWaveMLP starts declining with higher noise variations but remains insensitive to variance changes. Model accuracy primarily responds to mean value alterations, indicating ModWaveMLP's resistance to local abrupt data changes due to its wavelet-based information learning.
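The 81-setting noise grid can be reproduced with a short NumPy sketch (the data and the identity "model" are hypothetical stand-ins; only the mean/variance grid comes from the text):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
x = rng.uniform(20.0, 70.0, size=(207, 288))   # toy speed readings (nodes x time steps)

means = list(range(-4, 5))                     # noise means -4..4
variances = list(range(1, 10))                 # noise variances 1..9 -> 9 x 9 = 81 groups
rmse = np.zeros((len(means), len(variances)))

for (i, mu), (j, var) in product(enumerate(means), enumerate(variances)):
    noisy = x + rng.normal(mu, np.sqrt(var), size=x.shape)
    pred = noisy                               # stand-in model: echoes the corrupted input
    rmse[i, j] = np.sqrt(np.mean((pred - x) ** 2))
```

For this echo model, the RMSE at cell (mu, var) concentrates around sqrt(mu^2 + var), so the resulting surface shows directly how mean shifts dominate variance shifts.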
Case Study

In this section, we analyze the correlation between traffic modes and dynamic nodes across ModWaveMLP's layers. We validate mode decomposition learning by visualizing the model output and correlated nodes. We take node 122 in the METR-LA dataset as an example; it is situated on the key Hollywood Freeway (Fig. 4a). Fig. 3a shows ModWaveMLP's prediction curve alongside the real values for node 122. Clear daily cycles are seen in traffic data during weekdays, with 5 cycles of peaks and valleys indicating traffic speed changes. Influenced by the fact that people rest on weekends, the traffic data on weekends show different traffic patterns than on weekdays. Due to the time-gating mechanism module designed in ModWaveMLP, our model predictions fit this trend well. The final ModWaveMLP prediction is the average of the layer predictions. Fig. 3b displays layer contributions for a 4-layer stack (scaled by 1/4). Unlike typical stacks, all ModWaveMLP layers contribute. Layer 2 establishes a baseline, and layer 1 enhances local changes. Layer 3's input is smoother, learning inter-node delays. Layer 4 refines baselines with recent data, countering abrupt changes such as those on 2012.6.15 and 2012.6.16. Ablation experiments confirm wavelet noise reduction's impact on short-term prediction correction. Fig. 4b-e depicts the node weight distribution learned by the embedding in the Node Information Aggregation Module across layers 1 to 4 for node 122. Each layer captures distinct node relationships, reflecting diverse information aggregation. Fig. 4f-i details this in 3D plots for each layer. In Fig. 4b, layer 1 nodes encircle node 122 and Cahuenga Hill, suggesting data integration from nearby nodes. Fig. 4c shows layer 2 nodes along the Hollywood Freeway, crucial for stable baseline prediction. Layers 3 to 4 (Fig.
4d, e) cluster nodes tightly around node 122, especially with east-west nodes from the Ventura Freeway. This indicates iterative baseline updates focusing on closer nodes, refining neighboring information to correct noisy predictions. This study confirms our mode decomposition learning block's mode separation capability on the original data. Layers collaborate to enhance and refine predictions. Ablation experiments stress component importance, particularly the wavelet decomposition and the time gate module, which significantly affect model performance when absent.

Related Work

Recently, deep learning has gained traction in addressing traffic prediction challenges. Initially, temporal prediction was the focus, centering on time-related aspects (Sutskever, Vinyals, and Le 2014). Convolutional neural networks (CNNs) were then applied to grid-based traffic data to capture spatial dependencies (Zhang, Zheng, and Qi 2017; Lin et al. 2020). Graph neural networks (GNNs) gained prominence for traffic prediction, leveraging their ability to model graph data (James 2022; Yu, Yin, and Zhu 2018; Song et al. 2020; Wu et al. 2020; Shao et al. 2022b; Li et al. 2022; Shang, Chen, and Bi 2021; Choi et al. 2022; Jiang et al. 2023b). The attention mechanism gained popularity for dynamic dependency modeling (Guo et al. 2019; Zheng et al. 2020). The success of the transformer in different tasks such as text and images has motivated researchers to design new structures based on newly mined traffic features (Jiang et al. 2023a; Park et al. 2020; Ye et al. 2022).

Unlike transformers' position coding for text generation, temporal prediction models usually learn periodicity without this coding (Vaswani et al. 2017; Sutskever, Vinyals, and Le 2014). Researchers found that simple linear layers could outperform transformers in long-horizon prediction and uncovered the problem of channel independence in multivariate time series (Zeng et al. 2023; Li et al.
2023b). Further, researchers replaced multi-attention structures with MLPs and introduced temporal and spatial embedding structures as alternatives to GNNs in traffic prediction (Das et al. 2023; Shao et al. 2022a; Oreshkin et al. 2021).

Conclusion

We propose a model based on the MLP structure, designed according to the ideas of mode decomposition and wavelet noise reduction learning. Compared with previous researchers, who manually design the corresponding model structure based on the characteristics of the traffic data, our model decomposes the modes in the traffic data through repeated mode decomposition. Through the learning of wavelet noise reduction information, the model can remove the effect of noise on the traffic data. Comparison experiments on real-world datasets and baselines demonstrate the effectiveness of our model, which performs better than GNN- and transformer-based ones. Amidst the growing complexity of traffic prediction models, our study highlights the effectiveness of a straightforward MLP-based approach. While large models offer novel insights, they also introduce challenges such as test data leakage. Our work demonstrates that accurate traffic prediction can be achieved without complex structures, pointing to new future directions in large model design and traffic prediction research.
Figure 1: The overall architecture of the proposed ModWaveMLP.

Fully connected layers have a weight decay of 1e-5. The model uses the Adam optimizer, starting with a learning rate of 0.001 for 80 epochs. The learning rate anneals by a factor of 2 every 8 epochs starting from epoch 49. Each epoch has 800 batches of size 4, considering a history of 12 points and predicting 12 points (60 minutes) ahead. Training batches randomly select 4 time points from the training set, collecting histories for all nodes at each time point: N = 207 nodes for METR-LA and N = 325 nodes for PEMS-BAY, respectively. The training loss is the Mean Absolute Error (MAE) averaged over nodes and forecasts within the horizon H = 12: MAE = (1/(N*H)) Σ_{i=1..N} Σ_{j=1..H} |y_{i,T+j} − ŷ_{i,T+j}|. We use three metrics in the experiments: (1) Mean Absolute Error (MAE), (2) Mean Absolute Percentage Error (MAPE), and (3) Root Mean Squared Error (RMSE). Missing values are excluded when calculating these metrics.

Figure 2: Variation of individual metrics under noise interference. The x-axis is the noise variance, the y-axis is the noise mean, and z is the individual evaluation metric, shown in Figs. 2a, b, and c. Figs. 2e, f, and g display MAE, RMSE, and MAPE changes under mixed noise conditions for both models. The results in Figs. 2a and d indicate a maximum MAE change of 0.6 for ModWaveMLP, compared to FC-GAGA's 1.2.

Figure 4: Location of node 122 on the map, and maps of the highest correlation of node 122 in each layer of embedding for layers 1, 2, 3, and 4.
The black star is the forecasted node; ModWaveMLP weight visualization of the 4 layers of embedding.

Table 1: Traffic forecasting on the METR-LA and PEMS-BAY datasets (average over last time step of horizon, input window length 12). ModWaveMLP † indicates that the number of layers decomposed by the wavelet decomposition learning module is 5. The best results are bolded; suboptimal results are underlined (excluding ModWaveMLP variant results). Numbers marked with * indicate that the improvement is statistically significant compared with the best baseline (t-test with p-value < 0.05).

Table 2: Ablation study on METR-LA.

The Thirty-Eighth AAAI Conference on Artificial Intelligence
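A minimal sketch of the masked evaluation described above (missing values excluded before computing MAE, MAPE, and RMSE); the 0.0 sentinel and the sample readings are illustrative assumptions, not data from the paper:

```python
import math

def masked_metrics(y_true, y_pred, missing=0.0):
    """MAE, MAPE (%), and RMSE over entries whose ground truth is not the
    missing-value sentinel (assumed 0.0 here, as in common METR-LA setups)."""
    pairs = [(t, p) for t, p in zip(y_true, y_pred) if t != missing]
    n = len(pairs)
    mae = sum(abs(t - p) for t, p in pairs) / n
    mape = 100.0 * sum(abs(t - p) / abs(t) for t, p in pairs) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in pairs) / n)
    return mae, mape, rmse

# Hypothetical speed readings (mph); 0.0 marks a missing sensor value.
y_true = [60.0, 55.0, 0.0, 42.0]
y_pred = [58.0, 57.0, 30.0, 40.0]
mae, mape, rmse = masked_metrics(y_true, y_pred)  # the 0.0 entry is skipped
```

Masking before averaging matters: including the sentinel entry above would both inflate the absolute errors and make MAPE undefined (division by zero).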
Assessment of Rhizobium anhuiense Bacteria as a Potential Biocatalyst for Microbial Biofuel Cell Design

The development of microbial fuel cells based on electro-catalytic processes is among the novel topics recently emerging in the sustainable development of energy systems. Microbial fuel cells have emerged as unique biocatalytic systems, which transform the chemical energy accumulated in renewable organic fuels and at the same time reduce pollution from hazardous organic compounds. However, not all microorganisms involved in metabolic/catalytic processes generate sufficient redox potential. In this research, we have assessed the applicability of the microorganism Rhizobium anhuiense as a catalyst suitable for the design of microbial fuel cells. To improve the charge transfer, several redox mediators were tested, namely menadione, riboflavin, and 9,10-phenanthrenequinone (PQ). The best performance was determined for a Rhizobium anhuiense-based bio-anode mediated by menadione, with a 0.385 mV open circuit potential and a 5.5 µW/cm² maximal power density at 0.35 mV, which generated a 50 µA/cm² anode current at the same potential.

Introduction

Biofuel harvesting is emerging as an alternative to conventional batteries in energetics and holds great potential for the design of self-powered autonomous low-power electronic devices such as sensors and biosensors. The high demand for 'green-renewable' energy has led to a surge of research into various biofuel cell (BFC) concepts [1,2]. BFC-based technologies have the potential to transform the chemical energy of organic waste directly into 'bioelectricity', avoiding direct fuel combustion [3]. Microbial fuel cells (MFC) are bio-electrochemical devices that are driven by the metabolic activities of microorganisms [4,5]. These microorganisms can be used in the development of environmentally friendly energy generation systems. There is also great potential to integrate BFCs within biosensing devices [6].
However, the voltage generated by individual MFCs is mostly too low to power electronic devices [7], and for this reason they are connected in series [8]. Some MFCs can be integrated into advanced power management systems [9,10]. In order to increase the power of BFCs, flow-through systems can be designed [11]. The anodic compartment of most BFCs is powered by organics [12], where the versatile catalytic properties of microorganisms can be well exploited [13]. The applicability of microorganisms in MFC design is reported in numerous references [5,14]. The 'electroactivity' of microorganisms can be improved by c-type cytochromes (cyt c) [15], type IV pili/nanowires [16], chemically incorporated conducting polymers [17,18], and redox mediators [19,20]. Most MFCs are based on (i) mediated or (ii) direct charge transfer [21]. Shewanella putrefaciens [22], Geobacter sulfurreducens, Geobacter metallireducens [23], Aeromonas hydrophila [24], and Rhodoferax ferrireducens [25] are frequently used in direct charge-transfer-based MFCs. In mediated electron-transfer-based MFCs, redox mediators such as methylene blue [26], menadione [27,28], and riboflavin [29] are frequently used to increase the rate of fuel conversion into electricity [4,30]. Diffusion rates of organic fuel molecules can significantly limit MFC power [31]. Therefore, it is sometimes reasonable to apply various mixing approaches [11]. The negatively charged phosphate and carboxyl groups in the cell wall of Gram-negative bacteria outweigh the positively charged amino groups. These conditions determine the rather low surface zeta potential of such microbes [32], which is important for the interaction of cells with electrodes and consequently for charge transfer. Rhizobia are among the best-studied soil microbes associated with plants [33]. Rhizobia bacteria exhibit an oligotrophic lifestyle, belonging to either α-proteobacteria or β-proteobacteria.
The interaction is reciprocal: plants selectively enrich beneficial bacteria attaching to their roots, and the Rhizobia bacteria infect and colonize the plant epidermis, form bacteroids, and then start to fix atmospheric nitrogen; this process is essential for the development of plants. In most agroecosystems, Rhizobia are among the most common soil bacteria, especially if legumes are incorporated into the rotation [33]. Rhizobia-related bacteria are Gram-negative nitrogen-fixing bacteria found throughout Eurasia in an endosymbiotic relationship with legume plant nodules. Rhizobia-related species could serve as good candidates for MFC design because they are a non-pathogenic, facultatively anaerobic group of microbes. Moreover, Rhizobium species have a wide capacity to decompose organic substrates and root exudates, including C4 dicarboxylates (malate, fumarate, and succinate) and some other short-chain fatty acids. In their native habitat, most of the extracted electrons are usually channeled to nitrogenase to fix atmospheric nitrogen to ammonia; the ammonia is used by legume hosts, and a small amount is released into the soil. Dicarboxylate decomposition and nitrogen fixation are tightly regulated by metabolic processes performed within the hosting plant. It is well known that Rhizobia species are well equipped with highly efficient metabolic systems capable of extracting electrons from organic compounds [34]. It should be noted that in laboratory conditions, in the absence of a specific host and the corresponding environmental conditions, the regulatory signals inducing nitrogen fixation will be absent, and all captured electrons will increase the reduction potential of the Rhizobia-based system.
All the aforementioned traits indicate that microbes which interact with plant roots in the rhizosphere can be used to develop new designs of microbial fuel cells, and Rhizobium bacteria are among the most attractive candidates for this purpose [35]. In this study, we assessed Rhizobium anhuiense as a potential candidate for microbial fuel cell design. To improve the charge transfer, we assessed a Rhizobium anhuiense-based bioanode with several redox mediators (menadione, riboflavin, and 9,10-phenanthrenequinone (PQ)) and determined the open circuit potential, maximal power density, and other electrochemical and electrical characteristics of the designed MFC bioanodes.

Cultivation and Preparation of Rhizobium anhuiense Bacteria

Gram-negative Rhizobium anhuiense bacteria were obtained from the microbial strain collection of the Lithuanian Research Center for Agriculture and Forestry (Vėžaičiai, Lithuania). Bacteria were cultivated in Norris medium [36], pH 7.0, intended for nitrogen-fixing bacteria strains. Prior to use in experiments, the Rhizobium anhuiense bacteria were reinoculated on an inclined Norris medium supplemented with agar and left to grow for 48 h at 28 °C. Afterwards, 5 µL of an autoclaved Norris medium solution was used to fill the test tube with inoculums and carefully suspend it. The mixture was vortexed for several minutes to ensure that the majority of the bacteria were lifted from the solid medium. Then, the bacterial suspension was transferred and diluted in a fresh autoclaved Norris medium to obtain a density of colony-forming units (CFU) equal to 1 × 10⁷ CFU mL⁻¹ [37]. The bacteria count was established by measuring the optical density of the suspension at 600 nm (OD600), which was in the range of 0.15-0.2, corresponding to ~2 × 10⁷ CFU mL⁻¹. The bacterial suspension was left to grow for 24 h at 30 °C with constant shaking at 200 RPM to reach the stationary phase, where the OD600 was about 1.0.
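The OD600-to-CFU bookkeeping above can be sketched as follows; the linear calibration constant is an assumption inferred from the stated correspondence (OD600 of 0.2 ~ 2 × 10⁷ CFU mL⁻¹), not a published value:

```python
def cfu_from_od(od600, cfu_per_od=1e8):
    """Estimate CFU/mL from OD600 assuming a linear calibration.
    cfu_per_od=1e8 is an assumed constant: OD600 0.2 -> ~2e7 CFU/mL,
    matching the range reported in the text."""
    return od600 * cfu_per_od

def dilution_factor(current_cfu, target_cfu=1e7):
    """Total fold-dilution needed to bring the suspension to the target density."""
    return current_cfu / target_cfu

cfu = cfu_from_od(0.2)         # ~2e7 CFU/mL at the upper end of the measured range
factor = dilution_factor(cfu)  # a 2-fold dilution reaches the 1e7 CFU/mL working density
```

Note that OD-to-CFU calibrations are strain- and instrument-specific, so in practice the constant would be fitted from plate counts rather than assumed.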
The optical density (OD) of the bacterial culture at a 600 nm wavelength was measured with a spectrophotometer (Ocean Optics USB4000, Largo, FL, USA). Before the experiments, the cells were washed from the broth culture three times with 0.1 M PBS, pH 7.0, and centrifuged at 4500 rpm for 5 min at 25 °C. For all investigations, washed cells were resuspended in the test solution to prepare bacterial suspensions containing ~2 × 10⁷ CFU mL⁻¹.

Assessment of Zeta Potential

The prepared samples were diluted 10-fold in a medium of appropriate ionic strength (0.01 mM, 0.05 mM, 0.1 mM, 0.3 mM, 10 mM, 50 mM, 100 mM, and 300 mM), and a pH of 5.0, 6.0, 7.0, or 8.0 was adjusted immediately prior to zeta potential measurement. The resulting suspension was used to fill clear, disposable 'zeta cells' (ATA Scientific, Caringbah, Australia) immediately prior to the measurements. The ionic strength dependence was measured at pH 7.0, while pH investigations were conducted in a 0.1 mM PBS solution. The electrophoretic mobility of bacterial cells was measured with a zeta potential analyzer at 80 V (Zetasizer Nano series Nano-ZS; Malvern Instruments Ltd., Malvern, UK) and converted to zeta potentials [38]. Measurements were performed at 25 °C in standard Norris medium, pH 7.0. Each sample was measured five times on two separate days to determine the reproducibility of the results. Between each measurement, the electrodes were rinsed with copious amounts of Milli-Q™ water, followed by the test bacterial suspension.

Electrochemical Assessment of the Microbial Biofuel Cell

A single-chamber MFC was assembled in which a two-electrode electrochemical system was utilized, where a graphite electrode was used as the working electrode and a large-area platinum wire of 0.5 cm² served as the counter electrode. Bacterial samples were immobilized on the graphite electrode by letting them dry lightly for two minutes.
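Zetasizer-class instruments convert the measured electrophoretic mobility to a zeta potential through an electrokinetic model; a sketch assuming the Helmholtz-Smoluchowski relation (a common choice at the higher ionic strengths used here; the mobility value below is hypothetical, not a measurement from the paper):

```python
def zeta_smoluchowski(mobility, viscosity=8.9e-4, rel_perm=78.5,
                      eps0=8.854e-12):
    """Helmholtz-Smoluchowski model: zeta = mu * eta / (eps_r * eps_0).
    mobility in m^2/(V*s), viscosity in Pa*s (water at 25 C assumed);
    returns zeta in volts. Valid for thin double layers, i.e. when the
    ionic strength is high enough that the Debye length is small
    compared with the cell size."""
    return mobility * viscosity / (rel_perm * eps0)

# Hypothetical mobility for a bacterial cell suspension at 25 C
zeta_mV = zeta_smoluchowski(-2.0e-8) * 1e3  # roughly -25.6 mV
```

At the very low ionic strengths reported in the study, a thick-double-layer correction (e.g. Hückel or Henry models) would normally be applied instead.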
A polycarbonate membrane with 1 µm pores was used to separate the specimen from the surrounding environment and to ensure attachment to the anode. The chamber was filled with 0.1 M PBS, pH 7.0, and the open circuit potential (OCP) was measured. Afterwards, various external resistors (10 MΩ, 1 MΩ, 390 kΩ, 220 kΩ, 180 kΩ, 130 kΩ, 100 kΩ, 68 kΩ, 56 kΩ, 33 kΩ, 10 kΩ, and 1 kΩ) were connected in parallel to the electrical circuit to imitate an external load and to assess the power density of the bacterial samples. The potential changes were recorded with a digital benchtop multimeter (UNI-T UT8802E) from TEM Electronic Components (Łódź, Poland). The power density was calculated by dividing the electric power, which was calculated using Ohm's law, by the exposed working electrode surface of 0.07 cm².

Results and Discussion

The growth of Rhizobium anhuiense bacteria in the presence of 50 µM of menadione was evaluated by determining the optical density after different time periods (Figure 1). An exponential function was applied for the assessment of cell growth (Equation (1)). The increase in Rhizobium anhuiense bacteria number was calculated according to Equation (2):

Y = y0 (in the absence of menadione) / y0 (in the presence of menadione). (2)

The time interval during which the cell number doubled was calculated according to Equation (3). Data presented in Figure 1 illustrate that the increase in Rhizobium anhuiense bacteria number (Y), calculated using Equation (2), was 40% faster when menadione was absent from the cell growth medium, but the doubling time, calculated using Equation (3), was 20% shorter for Rhizobium anhuiense bacteria growing in the presence of 50 µM of menadione. Zeta potential measurements were performed to evaluate the electrical potential of the cell surface, which depends on environmental conditions [33].
The surface of the majority of Gram-negative bacteria at neutral pH is mostly negatively charged. This effect is predetermined by negatively charged phosphate and carboxylate groups containing lipopolysaccharides [34] and balanced by oppositely charged counterions present in the surrounding media, leading to the formation of the electrical double layer.
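The load-sweep power-density estimate described in the methods (electric power from Ohm's law, normalized by the 0.07 cm² electrode area) can be sketched as follows; the resistor/voltage pairs are hypothetical illustrations, not measured data from the paper:

```python
def power_density_uW_cm2(v_load, r_ext, area_cm2=0.07):
    """Power density in uW/cm^2 from the voltage measured across an
    external resistor: P = V^2 / R (Ohm's law), divided by the exposed
    working-electrode area."""
    return (v_load ** 2 / r_ext) / area_cm2 * 1e6

# Hypothetical (external resistor in ohms, measured voltage in volts) pairs
sweep = [(1_000_000, 0.38), (100_000, 0.30), (10_000, 0.06), (1_000, 0.007)]
densities = [power_density_uW_cm2(v, r) for r, v in sweep]
peak = max(densities)  # the polarization curve peaks at an intermediate load
```

As in the measurements reported below, the power density passes through a maximum at an intermediate load: at very high resistance little current flows, and at very low resistance the cell voltage collapses.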
Menadione (MQ) is a redox-cycling compound with the empirical formula C6H4(CO)2C2H(CH3); it is an analogue of 1,4-naphthoquinone with an additional methyl group and is also known as vitamin K3. Menadione is a pro-oxidant generating superoxide anion radicals. Rhizobia exposed to menadione respond by inactivating the free anion radicals generated by this exposure [36]. In this research, the low concentration of menadione acted as a stressor; it did not kill the bacteria but strengthened their resistance and increased their charge-transfer efficiency after the adaptation period. At ion concentrations below 1 mM, the zeta potential was independent of ionic strength (Figure 2A), and the bacteria displayed minor variation in their zeta potential in the presence of menadione; this effect did not depend on the duration of incubation. When the ion concentration exceeded 1 mM, the zeta potential of all samples increased along with the increasing ionic strength. No meaningful difference in zeta potential was observed between the presence or absence of the various mediators or across incubation durations. The influence of pH on the zeta potential of Rhizobium anhuiense was also assessed. Considering that the natural living environment of the bacteria is ~pH 7.0, and that metabolic processes and the secretion of various metabolites often shift the pH to the acidic side, further investigations were performed in the pH range between 5.0 and 8.0.
The research revealed that these pH alterations did not significantly influence the zeta potential of the bacteria. The zeta potential remained negative over a wide pH range (Figure 2B). This reveals that the variation of pH does not influence electrostatic interactions between the bacteria and the anode. The influence of the cultivation medium on the zeta potential of the bacteria was also evaluated (Figure 2C). A Norris medium, pH 7.0, also positively influenced the zeta potential: the average zeta potential value of the bacteria increased to −25 mV, compared with an average value of −60 mV in PBS-based solutions with low ionic strength. This phenomenon is most likely due to the presence of different dissolved ions in PBS and Norris medium. Bacterial cells with inherent negative charges adhere, and subsequently immobilize, more readily to a positively charged electrode due to electrostatic attraction. In this research, we observed a highly negative zeta potential in solutions with low ionic strength. This observation suggests that Rhizobium anhuiense possesses the inherent ability to adhere strongly to the surface of the anode. The operation of the single-chamber MFC based on Rhizobium anhuiense is presented in the scheme in Figure 3. The open circuit potential (OCP) was determined at various loads on the external circuit, and the power density was calculated from these measurement results. Power density and polarization curves were gathered for the BFC based on Rhizobium anhuiense bacteria treated with several redox mediators in different environments (Figures 4 and 5). Menadione serves as an organic hydrophobic redox mediator enhancing charge transfer [28,29]. The efficiency of redox mediators is heavily influenced by their oxidation and reduction potentials.
Redox mediators with a higher redox potential will capture electrons from electron donors more efficiently; however, electron transfer from redox mediators characterized by a very high redox potential to the charge-transfer chain is not very efficient [39]. The electrical potential and power density of the designed MFC are shown in Figure 5. Riboflavin is known as an endogenous redox mediator that facilitates electron transfer. Functional devices powered by fuel cells need to operate at or below the power density maximum to function at high efficiency. The electrons transferred during this process are directly associated with the chemical reactions catalyzed by enzymes involved in the metabolic processes of the microorganisms. Bacteria-based fuel cells are characterized by a nonlinear power density. This nonlinearity can be exploited for efficient energy savings while adapting the performance for a certain set of characteristics. A larger overall power density at a higher potential is a goal during the development of such systems, because the maximum value of power density enables the generation of the greatest amount of electric current.
Application of a natural redox mediator, menadione, in the MFC design enables an increase in both the voltage and the power of the designed MFC, which also increases its applicability. In contrast, no positive effect was observed when riboflavin was applied instead of menadione. This effect can be related to the different solubilities of these two natural redox mediators: menadione is soluble in a hydrophobic environment and enters the phospholipid membrane, whereas riboflavin is water-soluble and can be dissolved within the cell as well as within the extracellular environment. However, riboflavin is not well suited to shuttling electrons through the cell membrane. Therefore, we propose that menadione can shuttle charge through the phospholipid membrane towards the electrodes more efficiently than riboflavin (Figure 5). It is known that Rhizobium anhuiense can function in both anaerobic and aerobic conditions.
Therefore, the ability of Rhizobium anhuiense to generate power was additionally tested in anaerobic conditions by purging and saturating the system with nitrogen gas. However, under anaerobic conditions, a significant decrease in all power-generation-related characteristics was determined (Figures 1 and 5). A comparison of different biofuel cells is presented in Table 1. It should be noted that the current density generated by the reported MFC, like that of most MFCs, is not particularly high; thus, to increase the current density, 3D electrode materials [40] and/or conducting-polymer-based structures [1] can be used, allowing the magnitude of the current to be increased by several orders of magnitude [38].

Conclusions and Future Developments

The development of microbial fuel cells based on electro-catalytic processes is among the novel topics that have recently emerged in the sustainable development of energy. However, not all microorganisms involved in metabolic/catalytic processes generate sufficient redox potential and/or are able to transfer electrons to electrodes. Therefore, in this research, we assessed the applicability of microorganisms such as Rhizobium anhuiense for microbial fuel cell design. In order to improve the charge transfer, we tested several redox mediators, namely menadione, riboflavin, and 9,10-phenanthrenequinone. The best performance was determined for a Rhizobium anhuiense-based bioanode mediated by menadione, with a 0.385 mV open circuit potential and a 5.5 µW/cm² maximal power density at 0.35 mV, which generated a 50 µA/cm² anode current at the same potential. These results indicate that the efficiency of the extracellular electron transfer is still not very high; therefore, the high internal resistance of some parts of the device (e.g., the cell membrane) remains one of the most significant challenges in the design of these MFCs.
To increase the density of the generated current, 3D electrode materials and/or conducting-polymer-based structures can be applied. It should also be noted that the efficiency of extracellular electron transfer and the internal resistance of some parts of the electrochemical system remain two of the most significant challenges in the design of MFCs. Therefore, improving the charge-transfer capabilities of the microorganisms would be another good target for boosting MFC performance, and it can be combined with the aforementioned 3D electrode materials and/or conducting-polymer-based structures to increase the generated current density.
Optogenetic strategies for high-efficiency all-optical interrogation using blue-light-sensitive opsins

All-optical methods for imaging and manipulating brain networks with high spatial resolution are fundamental to study how neuronal ensembles drive behavior. Stimulation of neuronal ensembles using two-photon holographic techniques requires high-sensitivity actuators to avoid photodamage and heating. Moreover, two-photon-excitable opsins should be insensitive to light at wavelengths used for imaging. To achieve this goal, we developed a novel soma-targeted variant of the large-conductance blue-light-sensitive opsin CoChR (stCoChR). In the mouse cortex in vivo, we combined holographic two-photon stimulation of stCoChR with an amplified laser tuned to the opsin absorption peak and two-photon imaging of the red-shifted indicator jRCaMP1a. Compared to previously characterized blue-light-sensitive soma-targeted opsins in vivo, stCoChR allowed neuronal stimulation with more than 10-fold lower average power and no spectral crosstalk. The combination of stCoChR, tuned amplified laser stimulation, and red-shifted functional indicators promises to be a powerful tool for large-scale interrogation of neural networks in the intact brain.

Introduction

During information processing, storage, and retrieval, the electrical activity of most brain circuits is organized in complex spatial and temporal patterns (Ni et al., 2018; Salinas and Sejnowski, 2001; Shahidi et al., 2019; Shamir and Sompolinsky, 2006). The coding rules intrinsic to these activity patterns are believed to underlie fundamental operating principles of brain networks (Averbeck et al., 2006; Panzeri et al., 2017; Yuste, 2015). Recent techniques for all-optical interrogation of neural circuits allow for the first time to causally investigate these coding rules with unprecedented spatial resolution in living mice (Carrillo-Reid et al., 2019; Chettih and Harvey, 2019; Gill et al., 2020; Marshel et al., 2019).
In particular, two-photon calcium imaging combined with two-photon optogenetics makes it possible to simultaneously monitor and manipulate dozens of neurons with near single-cell resolution (Bovetti and Fellin, 2015; Chen et al., 2019; Forli et al., 2018; Mardinly et al., 2018; Packer et al., 2015; Rickgauer et al., 2014; Yang et al., 2018). However, significant challenges still remain to be addressed to accurately interrogate neural circuits in vivo. First, since the average power of illumination used to stimulate neurons is proportional to the probability of introducing thermal damage to brain tissue and unwanted modulation of neuronal physiology (Owen et al., 2019; Picot et al., 2018), maximizing the efficiency of optogenetic stimulation by increasing opsin excitation and photocurrent is of utmost importance. Thus, there is a need for the development of novel high-efficiency, large-conductance opsins and efficient illumination strategies for their two-photon activation. Second, the crosstalk between imaging and stimulation, that is, the unwanted activation of opsin-expressing neurons by imaging light, needs to be minimized (Forli et al., 2018; Packer et al., 2015). This effect is due to the overlap between the two-photon absorption spectra of the opsin and of the calcium sensor (Venkatachalam and Cohen, 2014), and it is typically pronounced for red-shifted opsins, such as C1V1 (Yizhar et al., 2011) and ReaChR (Lin et al., 2013), which are used in combination with blue-light-sensitive indicators (e.g. GCaMP6 [Chen et al., 2013]). This process may lead to non-negligible neuronal activation during two-photon imaging of the blue-light-sensitive indicator (Packer et al., 2015).
The non-selective excitation of neurons in the field-of-view (FOV) due to this type of crosstalk could promote the formation of unwanted cell assemblies (Carrillo-Reid et al., 2016) and bias the activity in the monitored area, complicating the interpretation of biological results (Emiliani et al., 2015). The use of blue-shifted opsins (e.g. ChR2 [Nagel et al., 2003] and GtACR2 [Govorunova et al., 2015]) combined with red-shifted calcium sensors (e.g. jRCaMP1a [Dana et al., 2016]) proved to be a valid approach, which largely eliminates the effect of the imaging beam on opsin excitation while allowing simultaneous imaging and bidirectional manipulation of neural activity in vivo (Forli et al., 2018). However, the limited photocurrent of ChR2 (Nagel et al., 2003) prevented the efficient stimulation of multiple neurons with low average power delivery to the brain tissue. To address this limitation, we describe here a new high-efficiency blue-light-sensitive opsin (stCoChR), derived from the large-conductance opsin CoChR (Klapoetke et al., 2014), which was targeted to the soma using the Kv2.1 targeting sequence (Lim et al., 2000). We combined stCoChR with holographic two-photon stimulation, low repetition rate laser excitation tuned at the peak of the opsin absorption spectrum, and imaging of the red-shifted functional indicator jRCaMP1a (Dana et al., 2016) in vivo.

Results

stCoChR: a high-efficiency, soma-targeted, blue-light-sensitive opsin

In order to achieve efficient optogenetic excitation using minimal light power and to minimize crosstalk between optogenetic stimulation and jRCaMP1a imaging, we used the high-conductance, blue-light-sensitive opsin CoChR (Klapoetke et al., 2014). To improve the spatial specificity of stimulation, we restricted the presence of CoChR molecules to the somatic region, such that stimulating one cell body would lead to minimal photocurrents in nearby neuronal projections of other cells.
For this purpose, we incorporated a soma-targeting sequence taken from the voltage-gated K+ channel Kv2.1, which was previously found to reduce the presence of channels, including channelrhodopsins, in distal dendrites and axons (Baker et al., 2016;Lim et al., 2000). To generate soma-targeted CoChR (stCoChR), we appended the Kv2.1 motif C-terminally to the CoChR coding sequence, followed by a fluorophore (mScarlet or eGFP) which was separated from CoChR-Kv2.1 by the self-cleaving peptide P2A (Kim et al., 2011), leading to fluorescence labeling of the entire cell (EF1a-CoChR-Kv2.1-P2A-XFP; see Materials and methods). We expressed the non-targeted CoChR (hereafter CoChR) and the soma-targeted stCoChR in sparse sets of cortical neurons of mice using AAV injections (Figure 1-figure supplement 1). We also expressed the previously published somatic variant soCoChR (Shemesh et al., 2017), in which somatic targeting was achieved using a sequence from the kainate receptor KA2 subunit. We performed whole-cell recordings from opsin-expressing neurons in acute brain slices in order to measure the photocurrents elicited by each variant and their restriction to the somatic region. Upon two-photon (2P) scanning (λ = 940 nm) with a spiral pattern over the soma, photocurrents elicited by stCoChR did not differ from those elicited by CoChR, but were ~130-fold higher than those elicited by soCoChR under the same conditions (Figure 1A,B and Supplementary file 1-Supplementary Table 1; stCoChR: 1927.4 ± 283.3 pA, n = 10 cells; CoChR: 712.3 ± 188.9 pA, n = 11 cells; soCoChR: 14.9 ± 4.2 pA, n = 10 cells; Kruskal-Wallis test with post hoc comparisons: stCoChR vs CoChR, p=0.11; stCoChR vs soCoChR, p=3.8E-6).
Under one-photon (1P) full-field illumination covering the soma and neurites (illumination area ~0.66 mm²), photocurrents elicited by stCoChR were similar to those of CoChR, but were approximately 40-fold higher than photocurrents elicited by soCoChR (Figure 1B and Supplementary file 1-Supplementary Table 2; stCoChR: 3709.5 ± 381.2 pA, n = 14 cells; CoChR: 3296.3 ± 452.4 pA, n = 14 cells; soCoChR: 91.8 ± 9.7 pA, n = 11 cells; Kruskal-Wallis test with post hoc comparisons: stCoChR vs CoChR, p=0.87; stCoChR vs soCoChR, p=2.2E-5). As a measure of restriction of the opsin to the somatic region, we calculated the ratio between the photocurrents observed under 2P soma scanning and under 1P full-field illumination per cell (Figure 1C and Supplementary file 1-Supplementary Table 3). We found that this ratio was higher for stCoChR than for both CoChR and soCoChR, whereas it did not differ between soCoChR and CoChR (stCoChR: 0.53 ± 0.06, n = 10 cells; CoChR: 0.18 ± 0.04, n = 11 cells; soCoChR: 0.15 ± 0.03, n = 10 cells; Kruskal-Wallis test with post hoc comparisons: stCoChR vs CoChR, p=8.4E-3; stCoChR vs soCoChR, p=1.3E-3; soCoChR vs CoChR, p=0.80), suggesting that the Kv2.1 motif in stCoChR concentrates the opsin molecules at the somatic region. This effect persisted when using a higher light power for 2P stimulation of soCoChR (40 mW instead of 15 mW; Figure 1B,C, magenta and Supplementary file 1-Supplementary Table 1,3). As another measure of soma restriction, we measured the photocurrents elicited by 2P spiral stimulation of selected points along the neurites of recorded cells (Figure 1D and Figure 1-figure supplement 2).

Figure 1. Characterization of soma-targeted CoChR variants. (A) Example photocurrents recorded in the acute slice preparation from cortical neurons expressing either non-targeted CoChR, stCoChR or soCoChR upon two-photon scanning with a spiral pattern (10 µm diameter) over the soma (λ = 940 nm, 15 or 40 mW on cell, 10.05 ms scan duration, 80 MHz laser repetition rate). Red shaded background indicates the scan period. Traces are from cells displaying the median photocurrent per construct. (B) Photocurrents for each CoChR variant under two-photon spiral scan over the soma (as in A) and under one-photon full-field illumination (470 nm, 0.27 mW/mm², 500 ms). Please note the logarithmic photocurrent scale. Points are individual cells and vertical lines are mean ± SEM. In this as well as in other figures, *p<0.05; **p<0.01; ***p<0.001; n.s., non-significant. (C) Ratio between the photocurrent under soma spiral stimulation and full-field stimulation per cell, based on the data in B. (D) Photocurrents obtained during two-photon spiral stimulation of selected points along the neurites of cells in the acute slice preparation expressing either variant, normalized to the current obtained at the soma. Distances are measured from the soma along the path of the neurite to the stimulation point and are binned at 25 µm intervals. Shaded background represents SEM. Inset shows a morphological reconstruction of an example cell (black) expressing stCoChR, with the stimulation points (shaded red circles) and the absolute photocurrent obtained at each point (grey, with red tick indicating the stimulation period). (E) Decay length constants of the photocurrent with distance from soma, based on individual cell data from D. p values in B,C,E are based on Kruskal-Wallis test with Tukey's HSD post hoc comparisons.
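The group comparisons used throughout Figure 1 (Kruskal-Wallis across the three constructs, followed by pairwise post hoc tests) can be sketched as follows. The photocurrent values below are synthetic illustrations, not the recorded data, and the post hoc step here substitutes pairwise Mann-Whitney tests for the Tukey's HSD used in the paper:

```python
# Kruskal-Wallis test across three constructs, then pairwise post hoc tests.
# All values are hypothetical photocurrents in pA, chosen only to mimic the
# reported group separation.
from scipy import stats

st_cochr = [1500, 1700, 1800, 1900, 1950, 2000, 2100, 2200, 2400, 2700]
cochr = [400, 500, 550, 600, 650, 700, 750, 800, 900, 1000, 1100]
so_cochr = [8, 10, 12, 14, 15, 16, 18, 20, 22, 25]

# Omnibus test: do the three groups come from the same distribution?
h, p = stats.kruskal(st_cochr, cochr, so_cochr)
print(f"Kruskal-Wallis: H = {h:.1f}, p = {p:.2g}")

# Pairwise post hoc comparisons (stand-in for Tukey's HSD)
for name, a, b in [("stCoChR vs CoChR", st_cochr, cochr),
                   ("stCoChR vs soCoChR", st_cochr, so_cochr),
                   ("soCoChR vs CoChR", so_cochr, cochr)]:
    u, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
    print(f"{name}: p = {p_pair:.2g}")
```

With well-separated groups like these, both the omnibus and the pairwise p values come out highly significant, mirroring the pattern reported in the text.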
We found that the photocurrent in cells expressing stCoChR decayed at shorter distances when the stimulation spiral moved away from the soma along the path of the neurites compared with cells expressing CoChR, whereas the decay length constant did not differ between stCoChR and soCoChR or between soCoChR and CoChR (Figure 1E and Supplementary file 1-Supplementary Table 4; stCoChR: τdecay = 16.1 ± 1.9 µm, n = 10 cells; CoChR: τdecay = 35.4 ± 6.6 µm, n = 11 cells; soCoChR: τdecay = 27.2 ± 6.2 µm, n = 10 cells; Kruskal-Wallis test with post hoc comparisons: stCoChR vs CoChR, p=3.8E-2; stCoChR vs soCoChR, p=0.46; soCoChR vs CoChR, p=0.43). Increasing the power of soCoChR stimulation from 15 mW to 40 mW resulted in a decay length constant similar to that of stCoChR (Figure 1D,E and Supplementary file 1-Supplementary Table 4; τdecay = 17.2 ± 3.2 µm, n = 9 cells; Kruskal-Wallis test with post hoc comparisons: soCoChR under 40 mW vs CoChR under 15 mW, p=0.02; soCoChR under 40 mW vs stCoChR under 15 mW, p=0.94). These results suggest that stCoChR maintains the overall superior photocurrents characteristic of CoChR while a larger fraction of the current arises from opsin molecules in the somatic membrane. Finally, to improve image-based identification of cells in vivo, we generated an additional variant of stCoChR (CoChR-Kv2.1-P2A-NLS-eGFP) in which the fluorophore was restricted to the nucleus using a nuclear localization signal (NLS) derived from the SV40 large T antigen (Kalderon et al., 1984).

High-efficiency two-photon holographic stimulation of stCoChR-expressing neurons in vivo

We investigated whether stCoChR could be used for efficient two-photon stimulation of neurons in the intact mouse brain in vivo. We first expressed stCoChR in layer 2/3 (L2/3) pyramidal neurons of the mouse cortex (Figure 2A) using AAV injections.
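The decay length constants reported above (Figure 1E) come from fitting an exponential to the normalized photocurrent as a function of distance from the soma. A minimal sketch with synthetic, noiseless data; the bin centres and the underlying constant (16 µm, close to the stCoChR value) are illustrative, not the measured dataset:

```python
# Fit tau_decay: normalized photocurrent vs distance from the soma.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(d, tau):
    """Photocurrent normalized to the somatic value, decaying with distance d (um)."""
    return np.exp(-d / tau)

distance_um = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # bin centres, 25 um bins
current_norm = exp_decay(distance_um, 16.0)               # synthetic noiseless data

(tau_fit,), _ = curve_fit(exp_decay, distance_um, current_norm, p0=[20.0])
print(f"tau_decay = {tau_fit:.1f} um")   # recovers ~16 um
```

On real recordings, `current_norm` would be the per-cell binned photocurrents from Figure 1D, and the fitted `tau_fit` values would then enter the group comparison of Figure 1E.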
To stimulate neurons, we incorporated a liquid crystal spatial light modulator (SLM)-based holographic module in the beam path of a two-photon microscope (Figure 2B; see Materials and methods for a detailed description of the optical setup). The SLM allowed us to project holographic oval shapes covering the soma of target neurons (Figure 2C). In urethane-anaesthetized mice, we then performed two-photon-targeted juxtasomal recordings from stCoChR-positive neurons in order to record their supra-threshold electrical activity (Figure 2C and D) before, during, and after holographic stimulation of the recorded neurons (shape diameter: 10-15 µm; average power per neuron: 30 mW; stimulation duration: 100 ms; λstim = 920 nm). We observed a significant increase in the firing rate of the recorded neuron during the stimulation period (Figure 2E). Neuronal responses to holographic two-photon stimulation increased with the average power of stimulation (Figure 2F). Importantly, the firing frequency increase during two-photon illumination was significantly larger for stCoChR-positive neurons than for stChR2-expressing cells previously recorded (Forli et al., 2018) under similar experimental conditions (unpaired Student's t-test, p=3.5E-3, for 10 mW average power; Mann-Whitney test, p=4.6E-5, for 30 mW average power; λstim = 920 nm; Figure 2F). To evaluate the temporal precision of stimulation, we computed the action potential (AP) probability as a function of illumination duration: at all tested average powers (10, 20, and 30 mW), one or more spikes could be recorded in a time window of 100 ms with high probability (Figure 2G). The latency to first spike and the jitter depended on the illumination power (Figure 2H).
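The latency to first spike and its jitter (Figure 2H) can be computed from the per-trial spike times relative to stimulus onset. A minimal sketch; the spike times below are hypothetical:

```python
# Latency = mean time of the first post-onset spike across trials;
# jitter = standard deviation of those first-spike times.
import numpy as np

def first_spike_stats(trials, stim_onset_ms):
    """Mean latency and jitter (SD of latency) of the first spike after onset."""
    latencies = []
    for spikes in trials:
        post = [t - stim_onset_ms for t in spikes if t >= stim_onset_ms]
        if post:                        # ignore trials with no evoked spike
            latencies.append(min(post))
    lat = np.asarray(latencies)
    return lat.mean(), lat.std()

trials = [[12.0, 55.0], [18.0, 60.0], [15.0, 90.0]]   # spike times in ms, hypothetical
latency, jitter = first_spike_stats(trials, stim_onset_ms=10.0)
print(f"latency = {latency:.1f} ms, jitter = {jitter:.2f} ms")   # 5.0 ms, 2.45 ms
```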
The spatial resolution of holographic stimulation in the radial and axial (dorso-ventral) directions was measured by progressively shifting the illumination volume in the corresponding directions (Figure 2I,J) while recording the response of the illuminated neuron in the juxtasomal configuration. The half-response distance was 4.4 ± 3.8 µm in the radial direction, 17.8 ± 4.4 µm in the axial up (dorsal) direction, and 23.8 ± 4.3 µm in the axial down (ventral) direction (n = 6 neurons from five mice). Taken together, these results demonstrate that stCoChR allows higher efficiency two-photon holographic stimulation compared to previously characterized blue-light-sensitive opsins in vivo (Chen et al., 2019;Forli et al., 2018), while maintaining high spatial resolution.

Two-photon holographic stimulation of stCoChR-expressing neurons with low average power in vivo

The extended illumination approach that we used to stimulate neurons spreads the energy of the laser pulse across the cell body volume of the stimulated targets. This naturally decreases the energy density and the risk of instantaneous non-linear photodamage during illumination, allowing higher energies per pulse to be used. We therefore investigated the possibility of stimulating stCoChR-expressing neurons using an ultrafast pulsed laser with high energy per pulse and low repetition rate. The laser was coupled to a non-collinear optical parametric amplifier (NOPA), which allowed us to stimulate cells around the peak wavelength (λstim = 920 nm) of the opsin's two-photon action spectrum (Shemesh et al., 2017). The NOPA output was directed to the holographic module through a switching mirror, as shown in Figure 3A (see Materials and methods for a detailed description of the setup).
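The half-response distances reported above (Figure 2I,J) can be read off a response-vs-displacement curve, for example by linear interpolation between measured points. A minimal sketch; the curve below is hypothetical, and the paper does not specify the exact curve-fitting procedure used:

```python
# Half-response distance: displacement at which the normalized response
# drops to 50% of its on-target value.
import numpy as np

displacement_um = np.array([0.0, 5.0, 10.0, 20.0, 30.0])
response_norm = np.array([1.0, 0.8, 0.4, 0.1, 0.0])   # normalized neuronal response

# np.interp requires the x array to be increasing, so interpolate
# displacement as a function of the reversed (ascending) response curve.
half_distance = np.interp(0.5, response_norm[::-1], displacement_um[::-1])
print(f"half-response distance = {half_distance:.2f} um")   # 8.75 um for this curve
```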
We performed two-photon-targeted juxtasomal recordings in vivo and compared the response of the same stCoChR-positive neurons to stimulation at high (80 MHz) or low (1 MHz) repetition rate (stimulus pulse duration: 100 ms; excitation shape diameter: 10-15 µm; average stimulation power at 80 MHz: 10-30 mW; average stimulation power at 1 MHz: 1-5 mW; Figure 3B). We found a significant increase (Wilcoxon signed rank test, p=0.031, n = 6 cells from two mice) in the neuronal response using the low repetition rate laser source compared to the high repetition rate laser source (Figure 3C and D). The AP probability as a function of illumination duration, the latency to first spike, and the jitter of the first spike for stimulation at low repetition rate at various average powers are reported in Figure 3E-F. Using the low repetition rate laser, we also stimulated with trains of short pulses (pulse duration: 10 ms; number of pulses: 5) at different frequencies (20-50 Hz; Figure 3-figure supplement 1). We found that the probability of eliciting at least one AP increased from 20 Hz to 40 Hz and then tended to decrease at 50 Hz. This is compatible with the depolarization induced by stimulus n partially summing with the depolarization induced by pulse n-1. At lower stimulation frequencies (e.g. 20 Hz), summation leads to an increased probability of spiking at the nth stimulus. At higher stimulation frequencies (e.g. 50 Hz), summation leads to a decreased spiking rate, likely due to larger depolarization during the summation process and consequent inactivation of voltage-gated conductances. We extended the comparison between stimulation at high and low repetition rates to other neuron types. We expressed stCoChR in a subpopulation of cortical interneurons - the somatostatin-positive (SST+) cells - via AAV injections (Figure 3-figure supplement 2A).
We repeated the two-photon holographic stimulation experiments paired with imaging-guided juxtasomal recordings described above using high (80 MHz) or low (1 MHz) repetition rate laser sources (Figure 3-figure supplement 2B). Similarly to what we observed in L2/3 principal cells, we found that the increase in firing rate as a function of average illumination power was steeper when using low (1 MHz) repetition rate stimulation compared to high (80 MHz) repetition rate stimulation (Figure 3-figure supplement 2C). As a consequence, the increase in firing rate per mW was significantly higher at 1 MHz stimulation compared with 80 MHz stimulation (paired Student's t-test, p=3E-4, n = 8 cells from four mice, Figure 3-figure supplement 2D). Using 3 mW and 5 mW average stimulation power, one or more spikes could be recorded in 30 ms with high probability. Considering the slow decay time of the opsin photocurrent (in the ms range), we reasoned that the repetition rate of the stimulation laser could be further decreased below 1 MHz, while maintaining high-efficiency opsin stimulation and lowering average stimulation power. We explored this possibility by using the pulse picking option of our laser + NOPA system, which decreases the laser repetition rate while keeping the energy per pulse constant (thus decreasing the average stimulation power; Figure 4A). We holographically stimulated stCoChR-expressing L2/3 principal cells (energy per pulse under the objective: 5 nJ; stimulus duration: 100 ms; shape diameter: 10-15 µm) while progressively decreasing the repetition rate from 1 MHz to 50 kHz (Figure 4B). As shown in Figure 4C, we found that AP probability non-linearly decreased with the repetition rate.
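Under pulse picking at constant energy per pulse, the average power delivered to the sample scales linearly with repetition rate (P_avg = E_pulse × f_rep). The average powers quoted in the text follow directly from this relation; a worked check:

```python
# Average power from pulse energy and repetition rate.
def average_power_mw(energy_per_pulse_nj, rep_rate_hz):
    """P_avg = E_pulse * f_rep, converted from W to mW."""
    return energy_per_pulse_nj * 1e-9 * rep_rate_hz * 1e3

print(round(average_power_mw(5, 1e6), 3))    # 5 nJ at 1 MHz  -> 5.0 mW
print(round(average_power_mw(5, 50e3), 3))   # 5 nJ at 50 kHz -> 0.25 mW
print(round(average_power_mw(1, 1e6), 3))    # 1 nJ at 1 MHz  -> 1.0 mW
```

These three cases reproduce the 0.25-5 mW range of average powers per cell reported for the pulse-picking experiments.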
A similar AP probability during illumination was observed when stimulating at 1 MHz repetition rate with 1 mW average power and lower energy per pulse (1 nJ) and when stimulating at 50 kHz repetition rate with 0.25 mW average power and higher energy per pulse (5 nJ) (Figure 4D). Similar findings were observed in SST interneurons expressing stCoChR (Figure 4-figure supplement 1). These results demonstrate highly efficient two-photon holographic stimulation of stCoChR with low average power per cell (0.25-5 mW, depending on the laser repetition rate) and energy densities well below photodamage thresholds (Charan et al., 2018;Chen et al., 2019).

Figure 3 caption (opening truncated): one high (80 MHz) and one low (1 MHz) repetition rate laser source are alternated to stimulate the same electrophysiologically recorded cell. Both laser sources are tuned at 920 nm. P1, Pockels cell 1; P2, Pockels cell 2; NOPA, Non-collinear Optical Parametric Amplifier. (B) Representative traces from the same recorded L2/3 pyramidal neuron in vivo showing the effect of holographic stimulation using the high (left, average power: 20 mW) and the low (right, average power: 1 mW) repetition rate laser. (C) Change in AP frequency as a function of average stimulation power. Black indicates results using the high repetition rate laser and red indicates results using the low repetition rate laser. The red and black lines fit the values obtained with non-saturating stimulation power (1 and 3 mW for low repetition rate, 10 and 20 mW for high repetition rate, respectively). In this as well as in the other panels of this figure, n = 6 cells from two mice. (D) Neural response in terms of AP frequency increase per mW of delivered average laser power for stimulation with the high repetition rate laser (High rep) and the low repetition rate laser (Low rep). Wilcoxon signed rank test, p=0.031. (E) AP probability as a function of stimulus duration in holographic stimulation experiments using the low repetition rate laser on L2/3 pyramidal neurons expressing stCoChR. Red, yellow, and black indicate different average stimulation powers. Grey indicates the spontaneous AP probability in the absence of holographic stimulation. (F) Latency to first AP and jitter of first AP in holographic stimulation experiments using the low repetition rate laser on L2/3 pyramidal neurons expressing stCoChR. Average stimulation power used in the experiments is indicated with the color code. The online version of this article includes the following source data and figure supplement(s) for figure 3: Source data 1. Source data for Figure 3.

Crosstalk between imaging and stimulation with stCoChR in vivo

Given the high efficiency of two-photon holographic stimulation shown by stCoChR, we asked whether raster imaging at a long wavelength (~1100 nm), typically used to image red-shifted functional indicators, would lead to a significant increase in the firing rate of stCoChR-expressing neurons in vivo (crosstalk). We thus measured the supra-threshold responses of stCoChR-expressing L2/3 pyramidal neurons using juxtasomal electrophysiological recordings while performing two-photon raster scanning in vivo (Figure 5). Raster scanning was performed at 11 Hz and 30-35 mW average power in a rather small (161.4 × 161.4 µm²) region of interest (ROI, Figure 5A). We first tuned the imaging laser to λscanning = 1100 nm, the wavelength that would have been used if a red-shifted functional indicator were expressed (Figure 5B, top panel). We found that raster scanning under these conditions did not significantly affect the firing activity of L2/3 neurons expressing stCoChR (Figure 5C, left).
Figure 4 caption (opening truncated): the laser repetition rate is decreased using pulse picking (top panel). With this approach, the average power delivered to the sample increases linearly with the repetition rate (bottom panel). (B) Representative traces from the same electrophysiologically recorded L2/3 pyramidal neuron stimulated at different repetition rates (from top to bottom panel: 1 MHz, 500 kHz, 100 kHz; energy per pulse: 5 nJ). (C) Probability of discharging one or more APs as a function of the laser repetition rate for principal neurons expressing stCoChR in vivo (illumination duration: 100 ms). n = 6 cells from two mice. (D) AP probability obtained at 50 kHz and 1 MHz repetition rates, delivering lower average power (0.25 mW vs 1 mW) and higher energy per pulse (5 nJ vs 1 nJ) at 50 kHz compared to 1 MHz. n = 6 cells from two mice; Wilcoxon signed rank test, p=0.88. The online version of this article includes the following source data and figure supplement(s) for figure 4: Source data 1. Source data for Figure 4.

Figure 5. Limited crosstalk between imaging and photostimulation using blue-light-sensitive opsins and red-shifted indicators. (A) Two-photon image of L2/3 principal cells expressing stCoChR in vivo (CoChR-Kv2.1-P2A-NLS-eGFP). One opsin-positive neuron was recorded with a glass pipette (dashed lines) in the juxtasomal configuration while two-photon raster scanning inside the indicated area (dashed white box, 161.4 × 161.4 µm²) was performed (imaging frame rate: 11 Hz). (B) Traces recorded in the juxtasomal electrophysiological configuration from one stCoChR-expressing neuron during epochs (grey boxes) of two-photon raster scanning at 1100 nm (top) and 920 nm (bottom). Laser power: 30 mW. (C) Average AP frequency during epochs of spontaneous activity (Spont) and during raster scanning (Scan). The left panel shows results when scanning was performed at λ = 1100 nm. The right panel displays the results when scanning was done at λ = 920 nm. n = 11 cells from six mice; Wilcoxon signed rank test, p=0.31 for λ = 1100 nm; Wilcoxon signed rank test, p=9.8E-4 for λ = 920 nm. (D) Same as (A), but for cells co-expressing stCoChR (CoChR-Kv2.1-P2A-NLS-eGFP) and jRCaMP1a. (E) Same as (B), but from a cell co-expressing stCoChR and jRCaMP1a. (F) Same as (C), but for neurons co-expressing stCoChR and jRCaMP1a. n = 7 cells from three mice; Wilcoxon signed rank test, p=0.21 for λ = 1100 nm; Wilcoxon signed rank test, p=0.02 for λ = 920 nm. (G) Representative trace from a L2/3 pyramidal neuron co-expressing stCoChR and jRCaMP1a. AP firing was recorded in the juxtasomal configuration (bottom) while two-photon raster scanning was performed (top). Imaging frame rate and FOV dimensions as in A; average imaging power, 35 mW. The number of recorded APs is reported below the electrophysiological trace (single APs are indicated with an asterisk). The online version of this article includes the following source data and figure supplement(s) for figure 5: Source data 1. Source data for Figure 5.

As an important control, we tuned the imaging laser to 920 nm (the wavelength used for holographic stimulation) and found that raster scanning the same FOV at 920 nm and moderate power (laser power: 30 mW) significantly increased the spiking activity of stCoChR-expressing neurons (Figure 5B and C, right). Co-expression of another protein (e.g. jRCaMP1a) could affect the expression levels of stCoChR. We thus repeated the crosstalk experiment in vivo in individual neurons co-expressing stCoChR and jRCaMP1a while simultaneously performing functional imaging and juxtasomal electrophysiological recording (Figure 5D-F). Similarly to what we observed in cells expressing only stCoChR, we found that raster scanning at 1100 nm (imaging power: 30-35 mW) did not alter the spontaneous firing rate of imaged neurons (Figure 5E-F). In contrast, raster scanning at 920 nm caused a large and significant increase in the neurons' spontaneous firing rate (Figure 5E-F).
Importantly, we verified that the average imaging power used to estimate the crosstalk (between 30 mW and 35 mW) was sufficient to detect jRCaMP1a fluorescence transients associated with spontaneous firing of APs (Figure 5G), similarly to previous reports (Dana et al., 2016;Forli et al., 2018). We also performed a similar experiment with SST-positive interneurons expressing stCoChR (Figure 5-figure supplement 1). In agreement with our findings on principal cells, scanning stCoChR-positive interneurons at 1100 nm did not significantly raise the firing rate of recorded cells, while scanning at 920 nm did (Figure 5-figure supplement 1B and C). Finally, in principal cells co-expressing jRCaMP1a and stCoChR, further increasing the average imaging power to 50 mW did not increase the ΔF/F0 of calcium events in cells expressing jRCaMP1a (Figure 5-figure supplement 2), while it led to a small but significant increase in the AP frequency induced by raster scanning in both cells expressing stCoChR alone and cells co-expressing stCoChR and jRCaMP1a (Figure 5-figure supplement 3). Overall, these results demonstrate that imaging at longer wavelengths (~1100 nm) with average power allowing detection of calcium events (30-35 mW) does not significantly perturb the supra-threshold activity of stCoChR-positive principal neurons and interneurons.

Simultaneous two-photon imaging of jRCaMP1a and two-photon holographic stimulation of stCoChR

We finally combined two-photon holographic stimulation of stCoChR with two-photon imaging of the red-shifted calcium indicator jRCaMP1a (Dana et al., 2016). To this aim, we used one 80 MHz laser source tuned at λimaging = 1100 nm for imaging and one low repetition rate laser (≤1 MHz) tuned at λstim = 920 nm for stimulation (Figure 6A; see Materials and methods for details).
We co-expressed jRCaMP1a and stCoChR coupled to a nucleus-localized eGFP (CoChR-Kv2.1-P2A-NLS-eGFP) in L2/3 pyramidal neurons of the mouse cortex (Figure 6B), and we programmed the holographic module to project extended shapes covering the cell bodies of target neurons. We simultaneously stimulated ensembles of multiple neurons while imaging their activity (stimulus duration: 200 ms; average stimulus energy per pulse under the objective: 7 nJ; laser repetition rate: 0.1-1 MHz; number of stimulated neurons: 5-30). We observed that consecutive holographic stimulation of stCoChR-expressing neurons typically evoked reliable fluorescence transients in the targeted cells (Figure 6C). We investigated the relationship between the amplitude of the evoked fluorescence transients and the repetition rate of the stimulation laser, at constant energy per pulse (Figure 6D-E). We found that fluorescence transients in the stimulated cell could be evoked with low power (0.7 mW per cell at 0.1 MHz repetition rate; Figure 6D) and that increasing the repetition rate non-linearly increased the amplitude of the evoked fluorescence transients (Figure 6E). We performed holographic two-photon stimulation and imaging in individual cells in which we monitored the firing activity with juxtasomal electrophysiological recordings (Figure 6F-G). We observed that jRCaMP1a transients induced by this photostimulation protocol were associated with the firing of 6.7 ± 1.9 APs (range 3-14, n = 5 cells from three mice), in agreement with what we observed in cells expressing only stCoChR (Figure 2C-E). Importantly, in cells expressing only jRCaMP1a the same stimulation protocol did not increase AP frequency (Figure 6-figure supplement 1). We also compared spontaneous and photostimulation-evoked jRCaMP1a transients (Figure 6-figure supplement 2).
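The fluorescence transients discussed here are quantified as ΔF/F0, the fluorescence change relative to a pre-stimulus baseline. A minimal sketch of the computation; the trace values and baseline window are hypothetical:

```python
# dF/F0: (F - F0) / F0, with F0 estimated from a pre-stimulus baseline window.
import numpy as np

def dff(trace, baseline_slice):
    """Convert a raw fluorescence trace to dF/F0 using a baseline window."""
    f0 = np.mean(trace[baseline_slice])
    return (trace - f0) / f0

trace = np.array([100.0, 102.0, 98.0, 100.0, 150.0, 140.0, 120.0, 105.0])
transient = dff(trace, slice(0, 4))   # first four samples as baseline
print(f"peak dF/F0 = {transient.max():.2f}")   # -> 0.50
```

On real data the baseline window would precede the holographic stimulus, and the peak (or mean) ΔF/F0 over a post-stimulus window would be the value compared across repetition rates in Figure 6E.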
We found that the ΔF/F0 of spontaneous events corresponding to the discharge of a few APs was smaller than the ΔF/F0 of photostimulation-evoked jRCaMP1a transients, in agreement with the observation that 200 ms of photostimulation evoked trains of 3-14 APs in stimulated neurons (Figure 6-figure supplement 2). The decay kinetics of spontaneous and photostimulation-evoked jRCaMP1a transients were not significantly different (Figure 6-figure supplement 2). Altogether, these results demonstrate that stCoChR is a highly efficient blue-shifted opsin that can be used in combination with red-shifted calcium indicators to perform all-optical circuit interrogations with low average power delivery per cell and minimal crosstalk between imaging and photostimulation.

Discussion

All-optical interrogation of neural circuits is increasingly recognized as a fundamental tool to causally investigate the codes that neuronal circuits use to drive behavior. By combining two-photon excitation of functional indicators and optogenetic actuators, it is now possible to simultaneously image and manipulate neural networks with near single-cell resolution in the intact brain of behaving animals (Carrillo-Reid et al., 2019;Gill et al., 2020;Jennings et al., 2019;Marshel et al., 2019). Ideally, optogenetic actuators should require minimal light power for neuronal excitation, allowing efficient stimulation of large numbers of neurons while minimizing heating (Picot et al., 2018). Moreover, they should display no activation by the light wavelength used for imaging, that is, limited crosstalk between imaging and photostimulation. Here, we demonstrate that the large-conductance blue-light-sensitive soma-targeted opsin stCoChR, in combination with high-energy ultrafast laser sources tuned at the opsin's peak absorption wavelength, requires more than one order of magnitude lower average power per cell compared to previously characterized blue-light-sensitive soma-targeted opsins in vivo.
Moreover, imaging conditions commonly used to monitor red-shifted functional indicators do not result in significant suprathreshold activation of stCoChR-expressing neurons. In a previous study, Shemesh et al. demonstrated temporally precise stimulation of neurons in vitro using a different somatic variant of CoChR (soCoChR; Shemesh et al., 2017). In that study, somatic targeting of CoChR was achieved through a sequence from the kainate receptor (KA2). The resulting soCoChR yielded decreased somatic photocurrents with respect to the unmodified CoChR. In this manuscript, we used a different targeting motif, taken from the Kv2.1 channel (Baker et al., 2016;Chettih and Harvey, 2019;Forli et al., 2018;Mahn et al., 2018;Mardinly et al., 2018;Marshel et al., 2019), to enrich CoChR expression in the somatic compartment. While Shemesh et al. did screen for soma-targeting with the Kv2.1 motif in their study, they conducted this initial screen using dissociated cultured neurons. Our previous work has demonstrated that the efficiency of soma-targeting with the Kv2.1 motif is greatly improved in vivo compared with dissociated cultures, which might explain its apparent inferior performance in the Shemesh et al. study. Furthermore, Shemesh et al. added the Kv2.1 targeting sequence following the CoChR-GFP sequence, whereas our construct contains the Kv2.1 targeting sequence immediately following the CoChR coding sequence, positioning it closer to the membrane, as it is naturally located in the Kv2.1 channel sequence. We found that while the photocurrent elicited by stCoChR under full-field illumination is similar to the photocurrent elicited by unmodified CoChR, a higher fraction of that current originated from the soma in stCoChR-expressing cells (Figure 1). The reduction in extrasomatic photocurrents was consistent in stCoChR-expressing cells, while it was only observed in soCoChR-expressing cells under very high light power.
Our results suggest that the overall level of expression and the efficiency of targeting to the plasma membrane are similar for CoChR and stCoChR, while stCoChR molecules are better retained at the proximal membrane. Thus, stCoChR achieves soma restriction without compromising photocurrent. Although the molecular mechanisms leading to these results remain to be investigated, we found that stCoChR led to very efficient two-photon neuronal excitation, which required an average power of 0.25-5 mW per cell. Limiting the crosstalk between imaging and photostimulation is a critical aspect of all-optical methods. Red-shifted opsins are generally prone to unwanted activation during imaging of blue-light-sensitive indicators (Forli et al., 2018;Venkatachalam and Cohen, 2014), because of the intrinsic absorption properties of the retinal group at shorter wavelengths. Previous work demonstrated that combining blue-light-sensitive opsins (e.g. ChR2, GtACR2) with red-shifted indicators (jRCaMP1a) is a valid choice to effectively reduce this form of crosstalk (Forli et al., 2018). Importantly, despite the fact that stCoChR has a much larger photocurrent compared to ChR2 (Klapoetke et al., 2014), we show here that suprathreshold activation of stCoChR-expressing neurons during imaging at red-shifted wavelengths is not significant at imaging powers <35 mW (Figure 5 and Figure 5-figure supplement 1). This allowed us to combine high-energy two-photon holographic illumination of stCoChR around the peak of its absorption spectrum (~920 nm; Shemesh et al., 2017) with concurrent two-photon imaging of jRCaMP1a at 1100 nm (around the peak of absorption of this red-shifted indicator), achieving minimal cross-activation. So far, efficient two-photon excitation of blue-shifted opsins has largely been constrained by the limited availability of high-energy pulsed systems tuned at ~920 nm.
The recent accessibility of ready-to-use optical parametric amplifiers makes it now possible to generate ultrafast high-energy pulses at a tunable wavelength in that spectral range and at a variable repetition rate (0.1-1 MHz). We used one such system to stimulate, for the first time, stCoChR at high energy around its two-photon absorption peak (920 nm; Shemesh et al., 2017), thus minimizing energy waste and activation of the red-shifted fluorescent indicator. Furthermore, stimulation at 920 nm reduces tissue heating (Podgorski and Ranganathan, 2016), which is, in contrast, higher at the longer wavelengths (>1000 nm) commonly used to excite red-shifted opsins. Previous work showed that under the illumination conditions used in our study (energy per pulse, 7 nJ), thermal effects dominate over non-linear photodamage effects (Charan et al., 2018; Picot et al., 2018) and that lowering the laser repetition rate is advantageous for two-photon excitation of the retinal group (Palczewska et al., 2020). Moreover, previous studies showed that energies up to 60-200 nJ per pulse can be used to activate opsins without causing retinal bleaching and cell damage (Chen et al., 2019; Gill et al., 2020; Mardinly et al., 2018). Thus, it is conceivable that the peak energy could be further increased compared to the one used in our study, while still preserving opsin functionality. The soma-restriction method described in this study effectively increased the spatial resolution of photostimulation (Figure 1D,E). However, the axial spread of the illumination profile combined with the high sensitivity of stCoChR may cause significant activation of non-target cells located above and below the focal plane (Figure 2J). This is not a major issue for single-layered neural targets (e.g. the mitral cell layer in the olfactory bulb; Gill et al., 2020) or in the case of sparse labeling, but it may compromise the interpretation of photostimulation experiments in denser cell layers (e.g.
neocortical layers). The axial extension of the illumination profile can be reduced using temporal focusing (Chen et al., 2019). Spatial resolution could also be further increased by developing more effective soma-restriction motifs. To reduce crosstalk between imaging and photostimulation, we imaged red-shifted indicators while stimulating blue-light-sensitive opsins. However, red-shifted indicators such as jRCaMP1a (Dana et al., 2016; Forli et al., 2018) have lower accuracy in detecting single or few APs compared to green indicators (Chen et al., 2013). Our method will benefit from future improvements leading to more efficient red-shifted indicators. Besides decreasing crosstalk, the choice of red-shifted indicators may facilitate imaging in deeper regions by using longer wavelengths, which are less sensitive to scattering (Helmchen and Denk, 2005). Moreover, the expression of red-shifted indicators is stable over long time windows (Dana et al., 2016), and this may facilitate chronic imaging experiments. Finally, holographic stimulation at 920 nm may decrease tissue heating, which is higher at the longer wavelengths (λ = 1040 nm; Podgorski and Ranganathan, 2016) used to stimulate red-shifted opsins (e.g. C1V1) (Carrillo-Reid et al., 2017; Packer et al., 2012; Packer et al., 2015; Prakash et al., 2012; Rickgauer et al., 2014). Compared to scanning methods with diffraction-limited spots (Carrillo-Reid et al., 2016; Carrillo-Reid et al., 2019; Chettih and Harvey, 2019; Marshel et al., 2019; Packer et al., 2015; Yang et al., 2018), our extended illumination approach distributes the laser pulse energy across a much larger volume, that is, the cell body volume of the stimulated neuron. Thus, the instantaneous energy density and the consequent risk of non-linear photodamage are decreased, allowing higher energy per pulse to be used.
We exploited this advantage and compared light-evoked responses of the same stCoChR-expressing neurons to stimulation with low energy per pulse and high repetition rate (80 MHz) or high energy per pulse and low repetition rate (1 MHz). Previous studies (Carrillo-Reid et al., 2019; Chaigneau et al., 2016; Gill et al., 2020; Mardinly et al., 2018; Marshel et al., 2019; Ronzitti et al., 2017; Shemesh et al., 2017; Yang et al., 2018) used the direct output of a low repetition rate laser (λ ~1040 nm) for opsin excitation. This effectively stimulates red-shifted opsins, but not blue-light-sensitive opsins. Here, we combined a low repetition rate laser with a noncollinear optical parametric amplifier to efficiently stimulate blue-light-sensitive opsins with high-energy pulses at the wavelength corresponding to the peak of the two-photon action spectrum of CoChR (λ = 920 nm). Excitation at a low repetition rate allowed us to significantly increase the response of illuminated neurons per mW of delivered average power (Figure 3 and Figure 3-figure supplement 2). Moreover, we reasoned that the slow kinetics of the opsin and the threshold for non-linear photodamage would allow us to further reduce the repetition rate of the excitation light source and the average power delivered to the sample. We found that the probability of eliciting suprathreshold responses during two-photon holographic excitation decreased sublinearly with the repetition rate (energy per pulse was 5 nJ under the objective and constant for the different repetition rates; Figure 4 and Figure 4-figure supplement 1). Importantly, we found that, at 50 kHz repetition rate, stimulation of target cells could be achieved with <1 mW, more than one order of magnitude lower average power per cell compared to previously published blue-light-sensitive soma-targeted (st) opsins in vivo (Forli et al., 2018).
Importantly, we validated these findings across both principal cells and interneurons (SST+), showing that two-photon holographic stimulation can be applied to cell types with different biophysical properties with similar efficacy. Altogether, the use of stCoChR in vivo, compared with other opsins including ST-ChroME, Chronos, and the non-soma-targeted version of CoChR (Chen et al., 2019; Mardinly et al., 2018), allows photoexcitation with lower average excitation power (1-5 mW at 1 MHz repetition rate, and potentially less at repetition rates <1 MHz; Figure 3, Figure 3-figure supplement 2). These results pave the way for the simultaneous modulation of the electrical activity of large numbers of neurons for prolonged periods with minimal risk of tissue heating (Picot et al., 2018; Podgorski and Ranganathan, 2016). In conclusion, we developed a new soma-targeted variant of the large-conductance opsin CoChR. Compared to previously characterized blue-light-sensitive soma-targeted opsins in vivo, stCoChR allowed comparable neuronal stimulation with more than one order of magnitude lower average power delivered to the brain tissue and with minimal crosstalk between imaging and photostimulation. The combination of stCoChR with tuned amplified laser stimulation and red-shifted functional indicators represents a powerful alternative approach to performing high-efficiency all-optical causal investigation of neural circuits in the intact brain with minimal crosstalk between the imaging and photostimulation channels. Access to food and water was ad libitum. All the in vivo experiments were performed on urethane-anesthetized young-adult animals (3-16 weeks old, either sex), as described previously (Forli et al., 2018). The number of animals used for each experimental dataset is specified in the text or in the corresponding Figure legend.
Viral injections for in vitro experiments
To achieve sparse expression of either CoChR variant in the medial prefrontal cortex, an AAV vector containing either variant (Cre-dependent) was mixed with a low-titer Cre-expressing AAV vector and injected into the medial prefrontal cortex as described in Mahn et al., 2018. The titers of purified AAV vectors used for injections were, in genome copies per milliliter (gc/ml): Cre, 5.6E9; stCoChR, 2.1E12; non-targeted CoChR, 7.2E11; soCoChR, 6.2E11.
In vitro electrophysiological recordings and characterization of CoChR variants
Acute medial prefrontal cortex slices were prepared as described in Mahn et al., 2018. Whole-cell patch-clamp recordings were obtained under visual control using oblique illumination on a two-photon laser-scanning microscope (Ultima IV, Bruker, Billerica, MA) equipped with a femtosecond pulsed laser (Chameleon Vision II, 80 MHz repetition rate; Coherent, CA), a 12-bit monochrome CCD camera (QImaging QIClick-R-F-M-12), and a 20x, 1 NA objective (Olympus XLUMPlanFL N, Tokyo, Japan). Borosilicate glass pipettes (Sutter Instrument BF100-58-10, Novato, CA) with resistances ranging from 3 to 4 MΩ were pulled using a laser micropipette puller (Sutter Instrument Model P-2000). Recordings were obtained in carbogenated artificial cerebrospinal fluid ([mM] 3 KCl, 11 glucose, 123 NaCl, 26 NaHCO3, 1.25 NaH2PO4, 1 MgCl2, 2 CaCl2; 300 mOsm/kg) supplemented with the glutamate receptor blockers APV (25 μM) and CNQX (10 μM). The recording chamber was perfused at 2 ml/min and maintained at ~27 °C. Pipettes were filled with a Cs-based intracellular solution ([mM] 5 CsCl, 120 CsMeSO3, 10 HEPES, 10 Na2-phosphocreatine, 4 ATP-Mg, 0.3 GTP-Na, 5 QX-314-Cl; 285 mOsm/kg; pH adjusted to 7.25 with CsOH), which contained Alexa Fluor 350 dye (<1 mM, Thermo Fisher Scientific) and Neurobiotin Tracer (0.3 mg/ml, Vector Laboratories).
Once a whole-cell recording from a prefrontal cell expressing one of the CoChR variants was obtained, the peak photocurrent was measured under one-photon full-field illumination (illumination area ~0.66 mm2). After the Alexa dye had diffused across the cell, a reference two-photon image stack was scanned (720 nm, 2.5 μm intervals in the z axis) and the neurites of the recorded cell were manually traced in three dimensions. Multiple points (9-62) were manually selected along the cell's neurites and targeted for two-photon spiral stimulation (spiral diameter 10 μm, distance between revolutions 1 μm, 15 mW on sample, 10.053 ms duration). The stimulation was repeated twice, once in forward and once in reverse order. For soCoChR cells, the stimulation was also repeated using 40 mW on sample, except for one cell. Recordings were performed using a MultiClamp 700B amplifier, online-filtered at 10 kHz and digitized at 50 kHz using a Digidata 1440A digitizer (Molecular Devices, San Jose, CA).
Viral injections for in vivo experiments
Viral injections were performed either on pups (P1-P2, with P0 indicating the day of birth) or in young adults (>P28), similarly to Brondi et al., 2020; Zucca et al., 2019. Briefly, P1-P2 injections of stCoChR-NLS:eGFP were performed on pups previously anaesthetized via hypothermia and immobilized on a refrigerated stereotaxic apparatus. A small incision of the skin was performed to expose the skull over one hemisphere, and ~250 nl of viral solution was slowly injected through a glass pipette (1 mm lateral from bregma, at 0.25 mm depth). At the end of the injection, the skin was sutured and the pup revitalized under a heating lamp. Experiments were performed 4-12 weeks after the injection (Figures 3-6). Once young-adult animals were anesthetized with 2% isoflurane/0.8% oxygen, they were placed in a stereotaxic apparatus (Stoelting Co, Wood Dale, IL), while the temperature was maintained at 37 °C with a heating pad.
The head was shaved and disinfected, and a small incision was performed to expose the skull over the primary somatosensory cortex. One to three small holes were drilled in the skull in order to lower a glass micropipette containing the viral solution into the parenchyma (pipette depth: 0.2-0.3 mm from the brain surface). 300 nL of virus per site were injected at 30-50 nL/min by means of a hydraulic injection apparatus driven by a syringe pump (UltraMicroPump, WPI, Sarasota, FL). At the end of the injection, the scalp incision was sutured and covered with antibiotic ointment, and the animals were placed under a heating lamp until full recovery. Experiments were performed 3-16 weeks after injection.
Animal surgery
Surgical procedures prior to electrophysiological recordings, photostimulation, and imaging experiments were performed on urethane-anesthetized young-adult animals (3-16 weeks old, either sex), as described previously (Beltramo et al., 2013; Vecchia et al., 2020; Zucca et al., 2017). Mice were anesthetized with an intraperitoneal injection of urethane (16.5%, 1.65 g/kg). Animal body temperature was kept constant at 37 °C with a heating pad and monitored, together with respiration rate, vibrissae movement, and reactions to tail pinching, throughout the surgery and the experiments. The scalp was removed after infiltrating all incisions with lidocaine. Head fixation of mice was achieved using custom printed plastic chambers with a 4 mm central hole, which were attached to the animal's skull by means of superglue and dental cement. A craniotomy (area: ~700 × 700 μm2) was opened over the right sensory cortex close to the injection sites (identified by the fluorescence of the expressed transgene) and the dura was carefully removed. The surface of the brain was kept moist with standard HEPES-buffered artificial cerebrospinal fluid (aCSF) composed of (in mM): 127 NaCl, 3.2 KCl, 2 CaCl2, and 10 HEPES at pH 7.4.
Optical setup for in vivo recordings
The optical setup for two-photon holographic illumination at high repetition rate (80 MHz; Figures 2 and 3 and Figure 3-figure supplement 2) was composed of an ultrafast pulsed laser source (S1 in Figure 2 and Figure 6; Chameleon Discovery, 80 MHz repetition rate, tuned at 920 nm or 1100 nm, Coherent, Milan, IT), a customized scan-head (Bruker Corporation, former Prairie Technologies, Milan, IT), an upright epifluorescence microscope (BX61, Olympus, Milan, IT), and a liquid crystal spatial light modulator (SLM, X10468-07, Hamamatsu, Milan, IT), which was conjugated to the back aperture of the objective (Dal Maschio et al., 2010; Dal Maschio et al., 2011). The laser beam intensity was modulated by a Pockels cell (P1 in Figures 2 and 3; Conoptics Inc, Danbury, CT) and then directed to the SLM by a sequence of mirrors (UM10AG, Thorlabs, Newton, NJ). A half-wave plate (RAC 5.2.10 achromatic λ/2 retarder; B. Halle Nachfl GmbH, Berlin, DE) was placed before the SLM in order to obtain the optimal polarization for phase-only modulation at the SLM. A first telescope (IR doublets 30 mm and 75 mm, Thorlabs, Newton, NJ) expanded the laser beam to fill the active window of the SLM. A second telescope (IR doublets 300 mm and 150 mm, Thorlabs, Newton, NJ) was used to resize the laser beam to fit the dimensions of the scanning mirrors inside the scan-head (G in Figure 2, G2 in Figure 6) and to optically conjugate the plane of the SLM with the back aperture of the objective. Two multi-alkali photomultiplier tubes (PMTs, Hamamatsu, Milan, IT) were used as detectors for raster scanning imaging. Dual emission filters in front of the two PMTs were 525/70 nm and 607/45 nm, respectively. In Figure 2 and Figure 6, D1 was a 660 nm long-pass dichroic mirror and D2 a 575 nm long-pass dichroic mirror; in Figure 6, D3 was a 980 nm long-pass dichroic mirror.
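As a quick sanity check on the telescope geometry described above: for a two-lens telescope, the beam-diameter magnification is the ratio of the two focal lengths, M = f2/f1. A minimal sketch (the helper function is ours; the focal lengths are those quoted in the text):

```python
def beam_expansion(f1_mm: float, f2_mm: float) -> float:
    """Beam-diameter magnification of a two-lens telescope: M = f2 / f1."""
    return f2_mm / f1_mm

# First telescope (30 mm and 75 mm doublets): expands the beam to fill the SLM.
m1 = beam_expansion(30, 75)
# Second telescope (300 mm and 150 mm doublets): shrinks the beam to fit the galvos.
m2 = beam_expansion(300, 150)
print(m1, m2)  # → 2.5 0.5
```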
The Nikon CFI75 LWD 16X W objective (0.8 NA, Nikon, Tokyo, Japan) was used for most experiments. For two-photon holographic stimulation at low repetition rate, an optical parametric amplifier (OPA) was used (Opera-F, Coherent, Milano, IT, or Orpheus-F, Light Conversion, Vilnius, Lithuania). The signal output wavelength of the OPA was set at 920 nm. The repetition rate was 1 MHz or below (via AOM pulse-picking). A pulse compressor after the OPA was used in some experiments. The pulse length after the compressor was <290 fs. The laser source for photostimulation could be switched by using a motorized mirror in the optical path. For simultaneous two-photon imaging and stimulation experiments (Figure 6, Figure 6-figure supplement 1, Figure 6-figure supplement 2), a low repetition rate, high-energy laser source (Carbide + Orpheus) was used for holographic illumination and tuned at 920 nm. The telescope downstream of the SLM was replaced by two IR doublets (400 mm and 125 mm, Thorlabs, Newton, NJ), and the stimulation beam was relayed onto a second set of galvanometric mirrors inside the scan-head (mirror diameter: 3 mm). Imaging and stimulation beams were combined by a dichroic mirror (zt980rdc, Chroma Technology Corporation, Bellows Falls, VT) positioned between the two sets of galvanometric mirrors and the scan lens. Phase modulation for holographic illumination and calibration of the optical setup were done as previously described (Forli et al., 2018). To control for the functionality of excitatory opsins in the recorded neurons, in a subset of the experiments single-photon stimulation of opsins was performed at 488 nm using a laser (MLD, COBOLT, Solna, SE) and a multimode fiber to deliver light to the brain (core diameter 200 μm, 0.22 NA, QMMJ-3X-, OZ Optics Ltd., Ottawa, CA). The laser was coupled to the fiber via a 10X objective (MPLN10X, Olympus, Milan, IT). On-off control of illumination was performed directly with a TTL input to the laser driver (stimulus duration: 50 ms). Light intensity was 150-300 μW at the fiber tip.
The optical fiber was positioned ~500 μm above the craniotomy, at an angle of ~30°.
In vivo electrophysiological recordings
We performed two-photon targeted juxtasomal electrophysiological recordings as previously described in Bovetti et al., 2017; De Stasi et al., 2016. A Sutter P-97 micropipette puller (Sutter Instrument, Novato, CA) was used to pull borosilicate glass pipettes (Hilgenberg, Malsfeld, Germany) with a resistance of 4-9 MΩ. Pipettes were filled with aCSF solution mixed with Alexa Fluor 488 or 594 (10 μM, Thermo Fisher Scientific, Waltham, MA). stCoChR-expressing neurons in L2/3 were targeted under the two-photon microscope by imaging the expressed fluorescent reporter (eGFP or mScarlet, at 920 nm and 1050 nm excitation wavelength, respectively) while monitoring the pipette fluorescence and its electrical resistance (through brief voltage steps). When the pipette tip and the target cell were in close contact with each other, a patch of cell membrane was sealed onto the pipette tip by applying a mild negative pressure. The juxtasomal configuration was reached when the pipette resistance was >20 MΩ and spikes were clearly visible. Extracellular AP waveforms from the target neuron were recorded in current-clamp mode during spontaneous activity and holographic illumination epochs. Electrical signals were amplified with a MultiClamp 700B, low-pass filtered at 2.2 kHz, digitized at 50 kHz with a Digidata 1440, and acquired with pClamp 10 (Molecular Devices, Sunnyvale, CA). Analysis of electrophysiological recordings was carried out using Clampfit 10.4 software (Molecular Devices, San Jose, CA), IgorPro (WaveMetrics, Portland, OR), and OriginPro 2018 (OriginLab, Northampton, MA).
In vivo electrophysiological recordings for crosstalk quantification (Figure 5 and Figure 5-figure supplement 1) were performed on stCoChR-expressing neurons while raster scanning the recorded cell at 11 Hz (pixel size = 0.6-1.6 μm) at both 1100 nm and 920 nm excitation wavelengths (average power: 35 mW).
All-optical two-photon imaging and holographic stimulation in vivo
Simultaneous two-photon imaging and photostimulation experiments were performed in anesthetized mice expressing jRCaMP1a and stCoChR in cortical neurons. Imaged fields of view were located in L2/3 of the primary somatosensory cortex (average depth of recordings: 145 μm). Two-photon imaging at λexc = 1050 nm was performed to assess the expression pattern of the opsin (stCoChR) and of the calcium indicator (jRCaMP1a) at the same time. A reference image of the selected FOV was acquired, and holographic oval shapes covering the somata of target neurons were generated by the SLM (λexc = 920 nm) and projected onto the sample. Temporal series were acquired in raster scanning configuration with the imaging beam (image dimension: 100 × 100 pixels; frame rate: 11 Hz; pixel dwell time: 4 μs; λexc = 1100 nm). Holographic photostimulation duration was 200 ms, and it was repeated six times at 0.066 Hz.
Immunohistochemistry for subcellular localization of CoChR and stCoChR
Brains of mice sparsely expressing either CoChR or stCoChR in the medial prefrontal cortex were fixed and sectioned as described in Mahn et al., 2018. Coronal cortical sections (50 μm thick) were washed three times in phosphate-buffered saline (PBS) and permeabilized in 0.5% Triton (Sigma-Aldrich, Rehovot, Israel) in PBS for 1 hr at room temperature, followed by incubation in blocking solution (20% normal horse serum [NHS] and 0.3% Triton in PBS) for 1 hr at room temperature.
Sections were then exposed to a polyclonal rabbit anti-2A peptide primary antibody (diluted 1:500 in PBS with 5% NHS and 2% Triton; Millipore Cat# ABS31, RRID:AB_11214282) for 48 hr at 4 °C. Following three washes in PBS, sections were exposed to a polyclonal Cy5-conjugated donkey anti-rabbit secondary antibody (diluted 1:500 in PBS with 2% NHS; Jackson ImmunoResearch Labs Cat# 711-175-152, RRID:AB_2340607) for 2 hr at room temperature. Sections were washed three times in PBS, incubated with DAPI (5 mg/ml solution diluted 1:30,000 prior to staining; Thermo Fisher Scientific) for 5 min at room temperature, washed again three times in PBS, and embedded in DABCO mounting medium (Sigma-Aldrich) on gelatin-coated slides. Slides were imaged using a confocal microscope (Zeiss LSM 700 or Leica TCS SP5) under identical conditions. Images of cells expressing CoChR or stCoChR were analyzed using Fiji software.
Data analysis
For the calculation of the decay length constants (τdecay) as a measure of soma restriction in vitro (Figure 1E), the distance of each stimulated point from the soma along the path of the neurites was measured by extending a segmented line from the soma, along the neurites, to the relevant point on the lateral plane using Fiji software (version 1.52) (Schindelin et al., 2012). Distance along the axial axis was neglected. The peak photocurrent at each point was calculated and normalized to the photocurrent at the soma. τdecay was calculated for each cell using a monoexponential fit. Electrophysiological traces from in vivo juxtasomal recordings were filtered with a high-pass filter with a cutoff frequency of 10 Hz, and spikes were detected using a threshold criterion. The threshold was visually adjusted for each sweep by an expert user and was set at >3× the standard deviation of the noise.
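Since the normalized photocurrent is modeled as I(d)/I(0) = exp(−d/τdecay), ln(I/I0) is linear in distance and τdecay can be read off as minus the inverse slope. The sketch below is ours and uses a pure-Python log-linear least-squares fit; the authors' actual fitting routine may differ (e.g. a nonlinear least-squares fit):

```python
import math

def fit_tau_decay(distances_um, norm_currents):
    """Estimate tau_decay (um) from photocurrents normalized to the somatic
    value, via ordinary least squares on ln(I/I0) = -d / tau_decay + const."""
    ys = [math.log(c) for c in norm_currents]
    n = len(distances_um)
    mx = sum(distances_um) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(distances_um, ys))
             / sum((x - mx) ** 2 for x in distances_um))
    return -1.0 / slope

# Synthetic check: currents generated with tau_decay = 50 um
d = [0, 10, 20, 40, 80]
i_norm = [math.exp(-x / 50.0) for x in d]
print(round(fit_tau_decay(d, i_norm), 6))  # → 50.0
```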
For the experiments in Figures 2 and 3, Figure 3-figure supplement 2, and Figure 6-figure supplement 1, traces were divided into three time windows: before (Pre, 1 s), during (Stim, 0.1 s), and after (Post, 1.5 s) holographic illumination. ΔAP Freq was defined as the difference between the average firing frequencies in the Stim and Pre windows. The spatial resolution was calculated as follows: neuronal responses to photostimulation (ΔAP Freq) were recorded as a function of the distance between the soma of the targeted neuron and the projected holographic shape (Figure 2I and J), which was radially (radial shift: 0, 20, 40, 60 μm) or axially (axial shift: −75, −50, −25, 0, 25, 50, 75 μm) shifted. ΔAP Freq as a function of the shift x in three different directions (radial, axial-up, and axial-down) was fitted with a mono-exponential function, ΔAP Freq(x) = A·e^(−bx), as described in Packer et al., 2015. We excluded curve fits with b < 0 or with values of A differing by >25% from the ΔAP Freq value computed when the shape was centered on the target neuron (radial shift: 0 μm). The spatial resolution, l1/2, was defined as the distance at which half of the evoked response calculated from the fit (A/2) was observed. Axial-up was defined as the direction toward the brain surface, while axial-down was defined as the ventral direction. Response probability was defined as the number of stimulation trials in which one or more APs were recorded during the illumination period (or shorter epochs) divided by the total number of stimulation trials. For Figure 3-figure supplement 3, data were fitted with a decreasing exponential function, y = y0 + A·e^(−bx), where y was the latency/jitter, x the average power, and the parameters were not fixed, except for y0. In panels Figure 3-figure supplement 3A and C, y0 was set according to the average latency observed with 1P stimulation. In panels Figure 3-figure supplement 3B and D, y0 was set to zero.
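Under a mono-exponential fall-off of the form ΔAP Freq(x) ≈ A·e^(−bx), the half-response distance has a closed form: setting A·e^(−b·l1/2) = A/2 gives l1/2 = ln 2 / b. A minimal sketch under that assumed functional form (the exact parameterization used in Packer et al., 2015 may differ):

```python
import math

def half_response_distance(b_per_um: float) -> float:
    """l_1/2 for Delta_AP_Freq(x) = A * exp(-b * x): the shift at which the
    evoked response drops to A/2. Only fits with b > 0 are meaningful."""
    if b_per_um <= 0:
        raise ValueError("fits with b <= 0 are excluded")
    return math.log(2) / b_per_um

# e.g. b = 0.05 per um gives l_1/2 of roughly 13.9 um
print(round(half_response_distance(0.05), 1))  # → 13.9
```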
For the all-optical experiments in Figure 6, temporal series acquired in vivo were analyzed using custom scripts written in MATLAB (Mathworks, Natick, MA). Stimulated regions of interest (ROIs) were drawn and, for each ROI, the change in fluorescence relative to the baseline (ΔF/F0) was computed as a function of time for jRCaMP1a signals. The fluorescence jRCaMP1a baseline (F0) was calculated over ten frames at the beginning of the recorded session. When above noise level, fluorescence artifacts due to holographic stimulation were removed by blanking the jRCaMP1a signal in the frames corresponding to stimulation periods. Fluorescent jRCaMP1a transients were fitted with a mono-exponential function, and the corresponding amplitude of the transient at the offset of stimulation and the decay time were calculated. The repetition rate during holographic stimulation was 1 MHz, 500 kHz, or 100 kHz. A cell was defined as 'responsive' to the holographic stimulation if the amplitude of the average (across stimulation trials) fluorescence transient at the offset of holographic illumination was larger than three times the standard deviation of the trace measured during a pre-stimulation period. The success rate of all-optical manipulation at 1 MHz repetition rate was 85% (28 responsive neurons out of 33 stimulated neurons). The number of responsive cells decreased with the laser repetition rate: at 0.5 MHz the percentage of responsive cells was 73% (24 responsive neurons out of 33 stimulated neurons); at 0.1 MHz it was 39% (13 responsive neurons out of 33 stimulated neurons). In responsive neurons stimulated at 1 MHz repetition rate, the average ΔF/F0 of stimulated calcium transients was 23 ± 2% (N = 28 cells from two mice).
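The ΔF/F0 computation and the responsiveness criterion described above can be sketched as follows (F0 from the first ten frames; a cell counted as responsive if the average transient amplitude at stimulation offset exceeds three times the SD of the pre-stimulation trace). Function names are ours, and the authors' MATLAB scripts may differ in detail:

```python
def dff(trace, n_baseline=10):
    """Delta F / F0 with F0 taken as the mean of the first n_baseline frames."""
    f0 = sum(trace[:n_baseline]) / n_baseline
    return [(f - f0) / f0 for f in trace]

def is_responsive(amp_at_offset, pre_stim_trace):
    """Responsive if the transient amplitude at stimulation offset exceeds
    3x the standard deviation of the pre-stimulation DF/F0 trace."""
    n = len(pre_stim_trace)
    mean = sum(pre_stim_trace) / n
    sd = (sum((v - mean) ** 2 for v in pre_stim_trace) / n) ** 0.5
    return amp_at_offset > 3 * sd

# A flat baseline of 1.0 followed by a jump to 2.0 gives DF/F0 = 1.0 at the peak
print(dff([1.0] * 10 + [2.0])[-1])  # → 1.0
```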
For all-optical experiments with simultaneous juxtasomal recordings (Figure 6F,G, Figure 5-figure supplement 2, Figure 6-figure supplement 1, Figure 6-figure supplement 2), the amplitude of fluorescence transients corresponding to AP events was calculated as described previously (Forli et al., 2018). The decay time for spontaneous jRCaMP1a activity (Figure 6-figure supplement 2D) was calculated for each recorded neuron by fitting ΔF/F0 traces with an autoregressive process of order 2 (Pnevmatikakis et al., 2016).
Statistics
All values were expressed as mean ± SEM, unless otherwise stated. Sample size (n) for the different experiments was chosen based on previous studies (Forli et al., 2018; Packer et al., 2015; Yang et al., 2018). Blinding was not used in this study. The analysis presented in this manuscript included all recordings with no technical issues. For n ≥ 10, a Kolmogorov-Smirnov test was used, while, for n < 10, a Shapiro-Wilk test or a Kolmogorov-Smirnov test (n < 5) was adopted to test for normality. Student's t-test was used to calculate statistical significance when comparing two populations of normally distributed data. The non-parametric Mann-Whitney U test or Wilcoxon signed-rank test (for unpaired or paired comparisons, respectively) was used in the case of non-normal distributions, unless otherwise stated. One-way ANOVA with Bonferroni or Tukey's post hoc test was used when multiple (>2) normally distributed populations of data were compared. For non-normal distributions and multiple comparisons, the non-parametric Friedman test with Dunn's post hoc correction and the Kruskal-Wallis test with Tukey's HSD post hoc test were used. All tests were two-sided. Statistical analysis was performed using Prism 6 (GraphPad, La Jolla, CA), OriginPro 2018 (OriginLab, Northampton, MA), and MATLAB (Mathworks, Natick, MA). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
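The sample-size-dependent choice of normality test stated above amounts to a simple dispatch rule; a minimal sketch (ours), encoding exactly the rule in the text:

```python
def normality_test_for(n: int) -> str:
    """Normality test used for a sample of size n, per the Statistics section:
    Kolmogorov-Smirnov for n >= 10 or n < 5, Shapiro-Wilk for 5 <= n < 10."""
    if n >= 10 or n < 5:
        return "Kolmogorov-Smirnov"
    return "Shapiro-Wilk"
```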
A Multi-Point Geostatistical Seismic Inversion Method Based on Local Probability Updating of Lithofacies
In order to solve the problem that elastic parameter constraints are not taken into account in local lithofacies updating in multi-point geostatistical inversion, a new multi-point geostatistical inversion method with local facies updating under seismic elastic constraints is proposed. The main improvement of the method is that the probability of multi-point facies modeling is combined with the facies probability reflected by the optimal elastic parameters retained from the previous inversion to predict and update the current lithofacies model. Constrained by the current lithofacies model, the elastic parameters were obtained via direct sampling based on the statistical relationship between the lithofacies and the elastic parameters. Forward simulation records were generated via convolution and were compared with the actual seismic records to obtain the optimal lithofacies and elastic parameters. The inversion method adopts a double-loop (inner and outer) iteration mechanism: the inner loop updates and inverts the local lithofacies, while the outer loop determines whether the correlation between the entire synthetic seismic record and the actual seismic record meets the given conditions, iterating until these conditions are met in order to achieve seismic inversion prediction. The theoretical model of the Stanford Center for Reservoir Forecasting and the practical model of the Xinchang gas field in western China were used to test the new method. The results show that the correlation between the synthetic seismic records and the actual seismic records is the best, and the lithofacies matching degree of the inversion is the highest. The results of the conventional multi-point geostatistical inversion are the next best, and the results of the two-point geostatistical inversion are the worst.
The results show that the reservoir parameters obtained using the local probability updating of lithofacies method are closer to the actual reservoir parameters. This method is worth popularizing in practical exploration and development.
Introduction
Seismic inversion is an important approach to lithology identification and oil-gas interpretation. It converts conventional seismic reflection records into acoustic impedance properties and reservoir parameters in order to give them a more definite geological meaning. It is a common concern of oil and gas geophysicists and geologists to directly apply seismic inversion methods to fine reservoir characterization and modeling. However, due to the noise of seismic data, the finite frequency of seismic waves, and the incomplete mapping of geological attributes to the seismic physical parameters, the inversion and interpretation of seismic records into reservoir attributes are not unique and pose great challenges. Some scholars have conducted extensive research on eliminating the impact of noise. Ghaderpour [1] proposed a method of seismic data regularization and random noise attenuation via least-squares spectral analysis in the frequency-wavenumber domain; owing to the accuracy of the estimated wavenumbers, the total number of iterations of the method is significantly reduced and its efficiency is significantly improved. However, there are still many problems in the process of connecting seismic properties with geology. The design and development of advanced seismic inversion methods that integrate geological, rock-physics, and even production dynamic data have been important topics for exploration geophysicists, and two types of inversion methods, namely, deterministic inversion and (geo)statistical inversion [2], have gradually formed. Deterministic inversion obtains the maximum a posteriori probability model through an optimization algorithm and minimizes the error.
Although strong reflector information can be recovered well and the inversion results have a good lateral continuity, the resolution of the inversion results can only reach the resolution of the seismic data due to the limited bandwidth of the seismic data [3]. In order to improve the resolution, the consensus is that it is necessary to integrate various geological (logging) information into the reservoir inversion using the spatial reservoir correlation [2]. In geological modeling, this spatial correlation is mainly represented by the variogram function. Journel and Huijbregts [4] first developed the reservoir geological modeling method integrating seismic data, which laid a solid theoretical foundation for seismic stochastic inversion. In 1994, Haas and Dubrule [5] proposed a stochastic inversion based on sequential simulation in First Break, which is the prototype of the geostatistical inversion method. Since the spatial correlation is characterized by the vertical variogram function of the borehole data, and the planar continuity is obtained from the seismic data, the inversion effectively makes use of the vertical resolution of the well data, makes up for the limitation of the seismic bandwidth, and improves the inversion resolution [2,[4][5][6][7][8]. In addition, the inversion probabilities are inferred using the Kriging method, and the Markov chain Monte Carlo (MCMC) method is used for sampling the posterior probabilities [9][10][11], which satisfies the needs of statistical inversion uncertainty analysis and evaluation. Azevedo and Demyanov [12] have also conducted research on multi-scale uncertainty evaluation in geostatistical seismic inversion. Their method combines geostatistical seismic inversion with stochastic adaptive sampling and Bayesian inference of the metaparameters to provide more accurate and realistic uncertainty prediction without being limited by a large number of assumptions about large-scale geological parameters.
Pereira [13] proposed an iterative geostatistical seismic inversion combined with local anisotropy. This method adopts a stochastic sequential simulation and co-simulation approach, which can deal with spatially varying information, and it uses local and independent variogram models to reduce the spatial uncertainty related to the subsurface characteristics. Therefore, the geostatistical inversion method has been widely used and has achieved good results in practical applications. With the development of geological modeling research, more and more modelers have pointed out that the variogram-based method makes it difficult to integrate additional information in order to describe a complex curved reservoir morphology, and it cannot fully reveal the spatial variability [6,[14][15][16][17][18]. It is necessary to combine the spatial distribution of multiple points to determine the reservoir's characteristics. Based on this idea, Guardiano and Srivastava [19] proposed the concept of a spatial multi-point joint distribution to represent complex reservoir structures; they obtained the multi-point probability through repeated scanning of a training image (a quantitative grid-based reservoir lithofacies model) with data samples (i.e., the spatial multi-point combination model) and applied it to the prediction of the points to be estimated. Strebelle [15] improved this method by designing a search tree to store and access the multi-point probabilities, which improved the simulation efficiency. Multi-point geostatistics was formally introduced into actual reservoir modeling [15] and gradually replaced the traditional two-point geostatistics method based on the variogram function. This has also attracted the attention of geostatistical inversion scholars.
Gonzalez [8], who first attempted to apply multi-point geostatistics to reservoir inversion, used the improved Simpat method to obtain the lithofacies distribution, sampled the seismic attributes through the relationship between the lithofacies and the seismic attributes, and finally used the likelihood function to determine the optimal matching elastic parameters. Their method emphasizes the control of the relative sedimentary facies quality; that is, the spatial continuity of the elastic parameter field and its sampling are controlled by a specific geological lithofacies model. They named the method mSIMPAT. However, the calculation efficiency of mSIMPAT is low in the process of updating the facies, which creates difficulties in actual seismic inversion. Jeong [20] replaced mSIMPAT with the direct sampling method, which they combined with the adaptive spatial resampling (ASR) method to improve the operational efficiency. However, the ASR method only retains the optimal matching facies data and adds conditional data to guide the multi-point geostatistical facies modeling. The inverted elastic parameters were obtained through integral iteration without local lithofacies updating. In particular, in the lithofacies modeling, the elastic parameters obtained during previous iterations cannot be used as constraints. Liu [21] replaced mSIMPAT with the SNESIM method and combined it with the probability perturbation method (PPM) to accelerate the inversion iteration efficiency. The updating of the lithofacies model is conducted by disturbing the entire geological model using the probability perturbation method without updating the local lithofacies. Although this disturbance satisfies the actual seismic observation data through annealing optimization, it is likely to be at the expense of disturbing the local specific deposition patterns.
Because the specific lithofacies model plays an important role in the inversion, it not only determines the inversion's efficiency but also its accuracy [22][23][24][25]. Therefore, it is necessary to reconsider the local probability updating in facies modeling. In this study, the iterative inversion method of Gonzalez [8] is revised. In the iterative process, the permanence of ratios theory is used to integrate the early elastic parameters for the local lithofacies prediction. In addition, the inversion results of the current iteration are not only evaluated but are also compared with the previously partially retained lithofacies and the elastic parameters to determine whether to update. The theoretical model tests reveal that the improved method can reflect the distribution of the reservoir lithofacies and the elastic parameters better, and its calculation efficiency is high. Practical inversion of the Xinchang gas field data in China also demonstrates that the improved method has a higher inversion accuracy. The results of this research provide technical support for oil and gas exploration and development.

Inversion Principle and Multi-Point Geostatistical Inversion Method
All inversion processes can be regarded as the process of obtaining synthetic seismic records of the elastic parameters in a certain way and matching the real seismic records within an allowable error range, the principle of which can be expressed using the Bayesian formula [26]:

σ_M(m) = c γ_D(g(m)) γ_M(m), (1)

where c is a correction parameter and is a constant, γ_M(m) is the prior probability, and γ_D(g(m)) is the likelihood function. M is the simulation region, m is the initial model or pattern group, g(m) is the forward operator, and σ_M(m) is the posterior probability. Inversion is an inference process in which the prior probability is updated and made faithful to the actual seismic data, and the maximum posterior probability is the core objective.
γ_D(g(m)) is used to measure the matching degree between the forward simulation record and the actual observed seismic trace. Its elastic parameters are generally obtained from the prior probability sampling, and the wavelet comes from the actual seismic working area. Therefore, the core of the inversion lies in the method of obtaining the prior probability γ_M(m) [17]. Haas and Dubrule [5] used a sequential Gaussian simulation to obtain the impedance data, in which the prior probability of the impedance was predicted using the variogram function, which also constituted the earliest geostatistical inversion. Subsequently, different scholars have discussed the influence of the prior information on the Bayesian inversion. Accurate prediction of the prior probability is the key to improving the accuracy of the seismic inversion. Considering that multi-point statistics can obtain higher-order prior statistics from training images and can integrate more information than the second-order statistics of the variogram function, using multi-point geostatistics to predict the prior impedance information is a promising development direction. However, multi-point geostatistics is mainly applicable to discrete variables, and it is difficult to predict continuous variables. In seismic inversion, it is often necessary to establish statistical rock physics models; that is, the statistical relationship between the elastic parameters m_elas (such as the impedance and velocity) and the reservoir properties m_res (such as the lithofacies). According to the chain rule of conditional probability, the prior probability can be written as

γ_M(m_res, m_elas) = P_prior(m_res, m_elas) = P_petro(m_elas|m_res) P_prior(m_res). (2)

Thus, the prior probability of the lithofacies can be predicted using multi-point statistics, and the current joint prior probability distribution of the elastic parameters-lithofacies can be obtained from the lithofacies and elastic parameter probabilities [8].
The likelihood function γ_D(g(m)) is used to measure the error between the forward simulated record and the actual observed seismic trace. Selecting a specific likelihood function is essential to determining what constitutes a good enough fit. It can be based on the distribution of the measurement errors, or it can be assessed subjectively, for example, using the seismic root mean square error or the correlation coefficient. The likelihood function is generally expressed in terms of the sum of the residuals between the forward simulated record and the actual seismic data (assuming that the seismic noise has a Gaussian random distribution with a mean value of 0 and a variance of σ_e²):

γ_D(g(m)) = exp(−∑(D − g(m))²/(2σ_e²)), (3)

where D is the observed seismic trace, and g(m) is the synthetic seismic trace. By combining Equations (1)-(3), the posterior probability can be expressed as

σ_M(m_res, m_elas) = c exp(−∑(D − g(m))²/(2σ_e²)) P_petro(m_elas|m_res) P_prior(m_res). (4)

Once the posterior probability distribution is calculated, it can be used repeatedly for sampling and characterization of the posterior probability of the reservoir model. Each model in the model set is consistent with the geological knowledge and the prior information in the training image, and the lithofacies have a good matching relationship with the actual seismic data. This sampling is usually achieved using MCMC sampling. However, it takes a long time for the Markov chain to visit all of the state space, and it converges slowly to a stationary distribution. Gonzalez [8] cleverly designed the internal and external double iteration method to achieve an inversion effect using a limited number of iterations. The two core processes of this method are preprocessing and inversion. Preprocessing is the preparation of the information required for the inversion, including the training images, the statistical relationship between the lithofacies and the elastic parameters, and the well data. Inversion is an iterative process.
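As a concrete illustration, Equations (3) and (4) can be evaluated numerically. The sketch below is a minimal, unnormalized implementation; the function names and the scalar prior factors are illustrative placeholders, not part of the original method:

```python
import numpy as np

def gaussian_likelihood(d_obs, d_syn, sigma_e):
    """Eq. (3): likelihood of a synthetic trace given the observed one,
    assuming zero-mean Gaussian noise with standard deviation sigma_e."""
    residual = d_obs - d_syn
    return float(np.exp(-np.sum(residual ** 2) / (2.0 * sigma_e ** 2)))

def posterior(d_obs, d_syn, sigma_e, p_petro, p_prior):
    """Eq. (4) up to the constant c: likelihood times the joint prior
    P_petro(m_elas | m_res) * P_prior(m_res)."""
    return gaussian_likelihood(d_obs, d_syn, sigma_e) * p_petro * p_prior
```

A perfect fit yields a likelihood of 1, and any misfit decays it exponentially, so the posterior can be used to rank candidate lithofacies/elastic models.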
First, a random path is defined, the prior probability of the lithofacies is obtained through multi-point scanning of the training images, and the geological model library is established. The selection of different geological models can be regarded as the external iteration. Then, according to the relationship between the lithofacies and the elastic parameters, the attribute values, such as the acoustic velocity and density, are extracted, which constitutes the internal iteration. According to the attribute values obtained from the simulation, the reflection coefficient sequence is obtained, and the forward simulation record is obtained through seismic wavelet convolution and is compared with the actual seismic record. If the error between them meets the set condition, the attribute value of the point to be estimated is retained; otherwise, it is extracted and simulated again. If the internal iteration is completed and the best matching geological model has not been found, the loop is exited and a new geological model is selected from the model library. The above steps are repeated until the given conditions are met in order to achieve seismic inversion and reservoir prediction.

Method Improvement
Gonzalez [8] introduced multi-point statistical inversion (mSIMPAT), which has been widely applied and studied. Because the mSIMPAT method searches for the best matching deposition pattern, the entire training image must be scanned repeatedly each time. When the training image and the data sample are somewhat large, this overall scanning seriously increases the computational load. In the process of internal and external double iteration, the optimal elastic parameters and lithofacies data obtained from the previous external iteration do not provide information and constraints for the next inversion iteration cycle, so each iteration cycle is independent. Thus, it is difficult to iteratively update the local lithofacies model.
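The internal and external double iteration can be sketched schematically. In this minimal Python sketch, `models`, `sample_elastic`, and `forward` are hypothetical placeholders for the geological model library, the per-facies elastic sampling, and the convolutional forward modeling described in the text:

```python
import numpy as np

def double_loop_inversion(models, sample_elastic, forward, d_obs,
                          tol=0.05, max_inner=30):
    """Schematic outer/inner iteration: the outer loop walks through the
    candidate facies models, the inner loop draws elastic parameters for
    each model and keeps the draw whose synthetic best matches d_obs."""
    best = None
    for facies in models:                  # outer loop: facies models
        for _ in range(max_inner):         # inner loop: elastic draws
            m_elas = sample_elastic(facies)
            d_syn = forward(m_elas)
            err = np.mean((d_obs - d_syn) ** 2)
            if best is None or err < best[0]:
                best = (err, facies, m_elas)
            if err < tol:                  # good enough fit: stop early
                return best
    return best
```

In the real workflow the inner loop also decides, point by point, whether to retain or re-simulate each attribute value; the sketch only keeps the overall control flow.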
To solve the above problems, the iterative inversion algorithm was improved. In view of the low computational efficiency of the mSIMPAT method, scholars have replaced it with the direct sampling (DS) method and the SNESIM method. The DS method is a direct matching method [27]. Since it does not need to store the multi-point conditional probabilities, it avoids the storage problem that arises when a large training image is scanned. Because it does not scan the entire training image, its computational efficiency is significantly better. Local areas can be selected during scanning, which ensures the reproduction of the local characteristics of the sedimentary model and reflects the non-stationary reservoir structure to a certain extent, making it more suitable for reservoir prediction involving complex changes. Therefore, the DS method is a natural choice to replace the mSIMPAT method as the prior probability method [20]. However, the DS method is still difficult to implement in terms of local updating under synthetic elastic parameter constraints. In contrast, the SNESIM method has a high computational efficiency because it stores all of the multi-point probabilities through one scan. Single-point prediction more easily integrates multiple sources of information, especially the elastic parameters obtained in the previous iteration. Therefore, in this study, the SNESIM method was chosen to replace the mSIMPAT method. For the local updating of the lithofacies in the inversion process, the statistical relationship between the elastic parameters and the lithofacies is attained using the permanence of ratios updating theory in the inner cycle [28]:

P(A|B,C) = 1/(1 + x), (5)
a = (1 − P(A))/P(A), (6)
b = (1 − P(A|B))/P(A|B), (7)
c = (1 − P(A|C))/P(A|C), (8)
x = (1 − P(A|B,C))/P(A|B,C), (9)

where P(A|B,C) is the current joint statistical probability of the training images and the elastic parameters, and P(A|B) is the multi-point probability under the condition of only the lithofacies data.
P(A|C) is the probability under the condition of the optimal elastic parameters of the previous inversion, which is known from the elastic parameters-lithofacies statistical probability. P(A) is the lithofacies statistical probability obtained from the geological analysis; hence, a in Equations (6) and (10) can be interpreted as a prior distance to the event A occurring. Likewise, the values b and c in Equations (7), (8) and (10) state the uncertainty about the occurrence of A, given the information B and C, respectively. x is the uncertainty when knowing both B and C. To describe the relationship between B and C, the τ factor is introduced:

x/a = (b/a)(c/a)^τ(B,C), (10)

where τ(B,C) is an evaluation of the correlation degree between the seismic elastic parameters and the lithofacies. It indicates whether the seismic elastic parameters reflect the type and distribution of the lithofacies, and it is generally obtained through trial and error. According to Equations (5) and (10), the elastic parameters obtained from the previous iteration inversion can be used to constrain the local lithofacies prediction and update the local lithofacies model. In order to determine the optimal elastic parameters in the local inversion, the current forward simulation records are compared with the previous forward simulation records, including the optimal records retained in the earlier stage of the outer cycle. In the outer cycle, the current overall inversion results are compared with the actual seismic data to decide whether to retain the inversion results of the elastic parameters and repeat the cycle iteration. This continues until the local elastic parameter inversion and the global inversion satisfy the given conditions. Then, the cycle terminates and the inversion results are output.

Inversion Steps
Based on the above improvements, a multi-point geostatistical inversion method based on local probability updating of the lithofacies (LPUMI) was developed in this study.
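Under these definitions, the permanence-of-ratios update of Equations (5)-(10) reduces to a few lines of arithmetic. The following sketch (the function name is illustrative) combines P(A), P(A|B), and P(A|C) into P(A|B,C):

```python
def combine_probabilities(p_a, p_ab, p_ac, tau=1.0):
    """Permanence-of-ratios (tau model) update: combine the facies prior
    P(A), the multi-point probability P(A|B), and the elastic-parameter
    probability P(A|C) into P(A|B,C)."""
    a = (1.0 - p_a) / p_a       # prior distance to event A, Eq. (6)
    b = (1.0 - p_ab) / p_ab     # uncertainty given facies data B, Eq. (7)
    c = (1.0 - p_ac) / p_ac     # uncertainty given elastic data C, Eq. (8)
    x = a * (b / a) * ((c / a) ** tau)  # combined distance, Eq. (10)
    return 1.0 / (1.0 + x)      # updated probability, Eq. (5)
```

Note the sanity checks built into the model: when C carries no information (P(A|C) = P(A), so c = a), the update returns P(A|B) unchanged, and two sources that both favor A reinforce each other.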
The main steps are as follows (Figure 1).

Step 1: Preprocessing
a. Check the data. Check whether the seismic data and well data are complete, including lithology, density, p-wave velocity, and s-wave velocity information.
b. Statistical analysis of the data. When the shear wave information cannot be obtained from the logging data, it can be estimated using empirical formulas. The probability density functions of the different elastic parameters of the lithofacies are established to provide a basis for the subsequent elastic parameter sampling. The plot of the lithofacies versus the elastic parameters is established to provide a basis for the fluid prediction.
c. The attribute values of the initial reservoir elastic parameters are given. According to the statistical well data, the initial elastic parameter attribute values, including the density, p-wave velocity, and s-wave velocity, are assigned to the simulation grid.
d. Build training images. Commonly, unconditional modeling methods such as object-based stochastic modeling, sedimentary process modeling, multi-point simulation results, outcrop and modern deposition models, digital geological sketches, and physical simulation interpretation are used to confirm the working area's reservoir characteristics for the training images.
e. Scan the training images to establish a search tree. Only the data events that actually appear in the training image are saved in the search tree. In order to limit the geometric configuration of the data events and prevent it from becoming too large, the maximum number of searched data points needs to be defined. The search tree is built based on the sample of the largest search data.

Step 2: SNESIM simulation using LPUMI
i. Gridding and assignment of the well data and elastic parameters. Each conditional data point is assigned to the nearest grid node in the simulation grid. If multiple conditional data points are assigned to the same grid node, the nearest one is assigned to the center of the grid node.
ii. Define the path through the remaining nodes of the simulation grid. A path is a vector that contains all of the indexes of the grid nodes to be simulated in sequence. A random path, a one-way path (i.e., the nodes are accessed in a regular order starting from one side of the grid), or any other path can be used. The simulation path runs from the densely drilled area to the sparsely drilled area and finally to the area with no wells.
iii. Search the neighborhood of the simulated node X. It consists of at most N nodes {x_1, x_2, ..., x_N} that have most recently been assigned or simulated in the simulation grid. If the neighborhood of X is not found in the first iteration (such as for the first unconditionally simulated node), a node Y is randomly selected in the training image, and its value Z(y) is assigned to Z(x) in the simulation grid. Then, proceed to the next node of the path.
iv. Determine the conditional probability P(A|B) from the search tree.
v. Determine whether elastic parameters were retained at this point in a previous simulation. If so, using the permanence of ratios theory, the probability P(A|B) is updated to P(A|B,C). Otherwise, the conditional probability P(A|B) is used unchanged.
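The path definition in step ii can be illustrated with a short sketch; the function name and the grid dimensions are hypothetical, and in the real workflow the conditioned well nodes would be excluded beforehand:

```python
import numpy as np

def simulation_path(nx, ny, nz, seed=0):
    """Build a random path that visits every node of an
    nx-by-ny-by-nz simulation grid exactly once."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(nx * ny * nz)          # random visiting order
    return np.unravel_index(order, (nx, ny, nz))   # (i, j, k) index arrays

# Example: a random path over a 40 x 40 x 118 grid (the grid size
# used later in the real-data test).
i_idx, j_idx, k_idx = simulation_path(40, 40, 118)
```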
Step 3: Prestack inversion
According to the reservoir's elastic parameters obtained from the logging data, the density and p-wave velocity are uniformly sampled in the proposed data mode to obtain the p-wave impedance Z_P of the sample. According to the relationships between the p-wave impedance Z_P and the s-wave impedance Z_S and between Z_P and the density ρ given by Hampson and Russell (2005), in general, Z_S and ρ can be expressed as follows:

ln(Z_S) = k ln(Z_P) + k_c + ΔL_S, (11)
ln(ρ) = m ln(Z_P) + m_c + ΔL_D, (12)

These relationships look for deviations away from a linear fit in logarithmic space. k and m are the corresponding slopes, and k_c and m_c are the corresponding intercepts. The deviations away from this straight line, ΔL_S and ΔL_D, are the desired fluid anomalies. The seismic forward modeling record of the proposed elastic parameters in the proposed data model is calculated as follows:

g(θ) = c̃_1 W(θ) D L_P + c̃_2 W(θ) D ΔL_S + c_3 W(θ) D ΔL_D, (13)

where c̃_1 = (1/2)c_1 + (1/2)k c_2 + m c_3, c̃_2 = (1/2)c_2, c_1 = 1 + tan²θ, c_2 = −8γ² tan²θ, c_3 = −0.5 tan²θ + 2γ² sin²θ, and γ = V_S/V_P. W(θ) is the wavelet at incident angle θ, D is the differential operator, L_P = ln(Z_P), L_S = ln(Z_S), L_D = ln(ρ), and g(θ) is the seismic forward modeling record. The likelihood function, Equation (3), and the posterior probability, Equation (4), are determined from the forward simulation record and the actual seismic record. Gonzalez's [8] method is adopted to select the elastic inversion parameters that retain the maximum likelihood function as the results; or, according to the Metropolis-Hastings optimization criterion, a large number of realizations of the lithofacies and elastic parameters are generated from the posterior function, and these realizations represent the probability distribution of the posterior function. The acceptance criteria of the model are proposed to determine the optimal inversion elastic parameters.
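The angle-dependent forward modeling described above (following the Hampson-Russell formulation) can be sketched for a single trace. This is a simplified illustration, not the authors' implementation: a discrete difference stands in for the operator D, and all names are illustrative. `dls` and `dld` are the fluid residuals ΔL_S and ΔL_D, and k, m are the fitted slopes:

```python
import numpy as np

def forward_trace(zp, dls, dld, wavelet, theta_deg, k, m, gamma=0.5):
    """Convolve the wavelet with weighted derivatives of log-impedance
    terms to produce the angle-dependent synthetic g(theta)."""
    t = np.tan(np.radians(theta_deg)) ** 2
    s = np.sin(np.radians(theta_deg)) ** 2
    c1 = 1.0 + t
    c2 = -8.0 * gamma ** 2 * t
    c3 = -0.5 * t + 2.0 * gamma ** 2 * s
    ct1 = 0.5 * c1 + 0.5 * k * c2 + m * c3   # tilde-c_1
    ct2 = 0.5 * c2                           # tilde-c_2
    lp = np.log(zp)                          # L_P = ln(Z_P)
    refl = ct1 * np.diff(lp) + ct2 * np.diff(dls) + c3 * np.diff(dld)
    return np.convolve(refl, wavelet, mode="same")
```

At normal incidence (θ = 0) the expression collapses to the familiar reflectivity 0.5·D ln(Z_P), which is a useful check on the weighting terms.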
In consideration of the computational efficiency and algorithm continuity, Gonzalez's [8] method was adopted in this study to select the optimal matching inversion results through iterative comparison of multiple samplings (generally 25-30).

Step 4: Iteration
All of the simulation grids are visited to realize a single inversion. According to the matching degree of the synthetic seismic records and the actual records, it is judged whether the iteration should be terminated. If the conditions are not met, the process starts again from Step 2 for the next external iteration. Usually, after six iterations, the average correlation coefficient of the seismic data is greater than 85% and the inversion results are output.

Theoretical Model Testing
The meandering river model with a low curvature in the first layer of the Stanford VI-E reservoir was taken as the test object, which is a 150 × 200 × 80 model. The lithofacies were subdivided into point bar, channel, and floodplain mudstone deposits (Figure 2). The different microfacies have different elastic parameter distributions (Figure 3). By designing 68 virtual wells, the seismic inversion method was tested based on the given elastic parameters and the lithofacies interpreted from the well data. In order to verify the accuracy of the method, only 63 wells were selected as the conditioning wells, and the remaining five wells were used as the test wells to analyze the inversion results. The theoretical lithofacies model was selected as the training image, and the tests were carried out using the traditional two-point statistical inversion method (TPI), the conventional multi-point statistical inversion method (MPI), and the multi-point statistical inversion with local probability updating method (LPUMI). The results show that compared with the two-point statistical inversion, the multi-point statistical inversion can reproduce the reservoir lithofacies better, and the inversion results are more consistent with the theoretical model.
The synthetic seismogram is more similar to the actual seismogram (Figures 4-6). The average matching rate of the multi-point statistical inversion is 83.5%, while that of the two-point statistical inversion is 81.5%, indicating that the multi-point statistical inversion produced a more accurate prediction of the inter-well reservoir properties (Figure 7). According to the correlation coefficients of the seismic records calculated via the inversion, the correlation coefficient increases gradually as the number of iterations increases. After six iterations, the correlation between the inverted synthetic seismic trace and the actual seismic trace is close to 80%. The LPUMI has the largest correlation coefficient, reaching 0.78; the correlation coefficient of the MPI is in the middle (0.76); and the TPI has the lowest correlation coefficient (0.75) (Figure 8). The results show that the reservoir parameters obtained using the LPUMI are closer to the actual reservoir parameters. This shows that the proposed method is more reasonable and can be applied to actual reservoir inversion prediction.

Real Reservoir Testing
The Xinchang gas field is located in the western part of the Sichuan Basin, China. The main gas-bearing horizon is the second member of the Xujiahe Formation, and the main sand bodies are braided delta front distributary channels and mouth bars. The thicknesses of the sand bodies are large and their horizontal distributions are wide. The horizontal distribution of the sweet spot reservoir is not uniform, which causes difficulties in the exploration and development of the gas reservoir.
In this study, three methods, namely, the TPI, the MPI, and the LPUMI, were applied to the prediction of the TX2³-TX2⁷ sand formation in the study area. Because this was mainly undertaken to test the proposed inversion method and compare it to the two existing methods, the inversion prediction was not performed for the entire region. Instead, a relatively regular local area with a relatively simple stratigraphic structure and no fault development was selected to carry out the study. The total thickness of the vertical direction of the intercepted area is about 235 m, and the lengths in the I direction and the J direction on the plane are about 4000 m (Figure 9). The grids were 100 × 100 m in the plane and 2 m in the vertical direction, and the total number of simulated grid cells was 40 × 40 × 118 = 188,800.
Figure 10 shows the pre-stack trace gathers at different angles (5°, 15°, and 25°) in the test block. Figure 11 shows the spatial distribution and attribute interpretation for 11 wells. The analysis shows that the main body of the channel is composed of sand and silt, with little mud. The P-wave and S-wave velocities have an obvious linear relationship, with a small P- to S-wave velocity ratio and a low gamma ray (GR) value. The interchannel region is mainly composed of clay deposits with a small amount of silt and fine sand. The P-wave and S-wave velocities have an obvious linear relationship, with a high P- to S-wave velocity ratio and a high GR value. The mouth bar is composed of fine and silty sand, with fine sorting and a pure quality, and it has a small P- to S-wave velocity ratio and a low GR value, which is similar to the main body of the channel (Figure 12). The statistical analysis of the elastic parameter-lithofacies relationship was conducted based on the data for these 11 wells, and its probability distribution was established for the elastic parameter sampling under lithofacies control during the subsequent inversion (Figure 13). The seismic wavelets for the different angles were extracted from the well-side seismic traces (Figure 14). After comprehensive analysis, 25 Hz theoretical Ricker wavelets were selected for the inversion seismic record synthesis. Based on geological analysis, a three-dimensional training image of the study area was established (Figure 15), which was used to calculate the two-point variogram function and to extract the multi-point prior probability. The inversion profile results obtained using the different methods were captured for wells Xins1-X201 for comparison (Figure 16). As can be seen from the profiles, overall, all of the inversion lithofacies profiles are mainly composed of channel sand bodies. The mouth bar deposits are locally developed, and the mudstone deposits are relatively scarce and are mainly developed in the upper part.
The lithofacies inversion is relatively continuous and the distribution of the channel sand body is reflected well. However, in terms of the structure, the sand body continuity of the TPI is too good to reflect the complex heterogeneity. The distributions of the MPI and LPUMI are highly variable and are connected locally, and the overall structure is close to that of the actual reservoir. In terms of the seismic trace records, the inversion seismic records of the MPI and LPUMI are close to the actual seismic records, while the TPI exhibits chaotic reflection characteristics, which are quite different from the actual continuous lithofacies distribution. The results show that the MPI and LPUMI are able to reflect the sand body and elastic parameters better.
Gonzalez [8] pointed out that the accuracies of the inversion iterations can be compared using the absolute error recorded by the forward simulation or the seismic trace correlation. Due to the possible errors in the time-depth conversion, direct comparison may cause large errors. However, the underground reservoir prediction is more likely to reveal the sand body and the spatial structure of the interlayer. If the reflected structure is similar, the overall similarity of the seismic records will increase. Therefore, the correlation between the forward simulation records and the actual records was used to compare the results of the different methods. The correlations between the forward modeling records and the actual seismic traces were calculated. The results show that the correlation coefficient of the TPI is 0.72.
The correlation coefficient of the MPI is 0.74, and that of the LPUMI is 0.77. This demonstrates that the LPUMI results are closer to the actual seismic record and have a higher accuracy. Furthermore, the cross-validation method was used to test and compare the methods. The cross section of well X853 was used to compare the prediction accuracies of the different methods. Both the MPI and LPUMI can reflect the characteristics of the upward transition of the mouth bar sand body, which is consistent with the migration trend of the lithofacies in the training image. However, the TPI can hardly reflect this characteristic. Compared with the actual seismic record, the synthetic seismic records of the MPI and LPUMI are closer to the actual seismic profile (Figure 17).
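The trace-correlation comparison used here can be sketched as a Pearson correlation between each forward-modeled trace and the corresponding recorded trace. The traces below are synthetic placeholders, not data from the study:

```python
import numpy as np

def trace_correlation(synthetic, actual):
    """Pearson correlation coefficient between a forward-modeled (synthetic)
    seismic trace and the actual recorded trace."""
    return float(np.corrcoef(synthetic, actual)[0, 1])

# Illustrative check: a trace correlates perfectly with a scaled copy of
# itself, and added noise lowers the coefficient.
rng = np.random.default_rng(1)
actual = rng.standard_normal(500)
noisy = actual + 0.5 * rng.standard_normal(500)
```

A whole-volume comparison would average this coefficient over all traces, which is how a single number such as 0.77 for the LPUMI can be reported.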
According to the comparison of the elastic parameters across well X853 (Figure 18), the different inversion methods can reflect the variations in the elastic parameters in the area around the well successfully, with a good degree of matching. In terms of the elastic parameter errors, the TPI performed the best, followed by the MPI and the LPUMI. However, the differences are not significant. In terms of the correlation of the elastic parameters, the overall difference is not significant. The correlation of the LPUMI is 0.74, that of the MPI is 0.73, and that of the TPI is 0.72. However, the degrees of lithofacies matching are different. The results show that the best method is the LPUMI (0.862), followed by the MPI (0.856), and the TPI is the worst (only 0.78). This indicates that the LPUMI has more advantages.
Discussion
Seismic records are a comprehensive representation of subsurface lithofacies, physical properties, elastic parameters, and fluids. The essence of statistical inversion is to seek the optimal solution that reflects the underground reservoir parameters through convolution of the elastic parameters.
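The convolution model underlying this statement can be illustrated with the 25 Hz Ricker wavelet mentioned earlier: reflection coefficients derived from an impedance log are convolved with the wavelet to produce a synthetic trace. The impedance values below are illustrative, not taken from the paper:

```python
import numpy as np

def ricker(f, dt=0.002, length=0.128):
    """Zero-phase Ricker wavelet of peak frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def reflectivity(impedance):
    """Normal-incidence reflection coefficients from an impedance log."""
    z = np.asarray(impedance, dtype=float)
    return (z[1:] - z[:-1]) / (z[1:] + z[:-1])

def synthetic_trace(impedance, f=25.0):
    """Convolve the reflectivity series with a Ricker wavelet."""
    return np.convolve(reflectivity(impedance), ricker(f), mode="same")

# Two-layer toy model: a single positive reflector at the layer boundary.
imp = np.r_[np.full(100, 6500.0), np.full(100, 7500.0)]
trace = synthetic_trace(imp)
```

Inversion then searches for the elastic-parameter model whose synthetic trace best matches the recorded one.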
The distribution of the elastic parameters is mainly revealed through rock physics modeling. In the field of geological modeling, due to the intrinsic relationship between the lithofacies and physical properties, developing facies-controlled reservoir parameter modeling methods has become an important means of improving the accuracy of physical property modeling. Therefore, introducing the idea of facies control into seismic inversion can improve the inversion accuracy. In fact, Azevedo and Soares [29] compared the inversion results of the given lithofacies model with conventional inversion results and showed that the inversion elastic parameter distribution of the given lithofacies model is more reasonable and the iteration convergence is faster. Based on the theoretical model and practical model developed and tested in this study, the multi-point inversion method considering facies control is significantly better than the traditional two-point statistical inversion method without facies control. Therefore, making full use of lithofacies control in seismic inversion should be an important direction in the future. In a sense, the accuracy of the constrained lithofacies model determines the effectiveness of the final inversion effect.
In this improvement, the seismic elastic parameters are mainly used for local lithofacies updating. The SNESIM method is a single-grid-point lithofacies forecasting method. When using elastic parameters to update the local lithofacies, only the information provided by the elastic parameters of the points to be estimated is used, which has no significant improvement effect compared with the conventional multi-point geostatistical inversion. This may be due to the fact that grid-by-grid updating of lithofacies does not significantly change the sedimentary facies model. In addition, probabilistic statistical sampling errors inevitably exist and are transmitted to the subsequent updates, resulting in limited improvement of the inversion accuracy. Another disadvantage of the SNESIM lithofacies prediction is that all reservoir predictions based on statistical methods require a stable lithofacies distribution model, but in reality, the lithofacies distribution is very complex and has non-stationary characteristics.
This is one of the reasons why multi-point statistical methods such as the SIMPAT, Filtersim, and DS methods have been developed. This is also the reason why Gonzalez [8] used SIMPAT as the lithofacies inversion method. Arpat (2005) pointed out that seismic information can be integrated into SIMPAT geological modeling; that is, a seismic reference image with a good correspondence to the lithofacies can be constructed, and its contribution to the distance can be taken into account in the lithofacies matching to constrain and guide the selection of the optimal lithofacies pattern, where s<dev_T(u), pat_k^T> is the similarity between the data event dev_T(u) at the point to be estimated and the pattern pat_k^T in the lithofacies training image, and s<sdev_T(u), spat_k^T> is the similarity between the seismic attribute data event sdev_T(u) at the corresponding point to be estimated and the seismic attribute pattern spat_k^T in the training image. It should be noted that the contribution of the seismic attributes (i.e., soft data) is different from that of well data (i.e., hard data), so it is necessary to effectively measure the contribution of the seismic attributes. The similarity value of the seismic training images is multiplied by a weight to represent the contribution of the seismic attributes in the similarity calculation. In addition, because the scale of the seismic attributes is not consistent with that of the facies attributes, the seismic attributes must be normalized before the similarity calculation so that the seismic similarity does not dominate the overall similarity simply because of the scale difference. Lithofacies training images can be obtained from geological anatomy and through sedimentary simulation. However, a seismic attribute training image is often difficult to obtain.
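Since Equation (15) is not reproduced here, the following is only a plausible sketch of the weighted similarity idea just described: the facies-pattern mismatch and a normalized, weighted seismic-attribute mismatch are combined into a single distance, and the candidate pattern minimizing it is selected. The distance measure (mean absolute difference) and the weight w are assumptions:

```python
import numpy as np

def normalize(x):
    """Rescale an attribute array to [0, 1] so that scale differences do not
    let the seismic term dominate the combined distance."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span > 0 else np.zeros_like(x)

def combined_distance(dev, pat, sdev, spat, w=0.5):
    """Distance between the facies data event dev and candidate pattern pat,
    plus a weighted seismic-attribute term (sdev vs. spat).
    w encodes the (soft-data) contribution of the seismic attributes."""
    d_facies = np.mean(np.abs(np.asarray(dev, float) - np.asarray(pat, float)))
    d_seis = np.mean(np.abs(normalize(sdev) - normalize(spat)))
    return d_facies + w * d_seis

def best_pattern(dev, sdev, patterns, spatterns, w=0.5):
    """Pick the training-image pattern minimizing the combined distance."""
    dists = [combined_distance(dev, p, sdev, sp, w)
             for p, sp in zip(patterns, spatterns)]
    return int(np.argmin(dists))
```

With a matching facies event [1,1,0,0] and a seismic event that tracks it, the first candidate pattern wins even before the seismic term is weighted.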
Based on rock physics modeling, the forward modeling can be conducted many times and the optimal matching seismic attributes can be calculated, which can be applied to Equation (15) to update the local lithofacies and improve the accuracy of the inversion. The efficiency of the mSIMPAT algorithm is relatively low, and adding seismic attribute constraints will further increase the computational burden. Therefore, using parallel computing and deep learning theory to accelerate the inversion iteration is an important direction for future research.
Conclusions
This paper proposed a new multi-point geostatistical inversion through local iterative updating of rock facies under the constraints of the elastic parameters. An internal and external double-cycle iteration mechanism was adopted to execute the iteration and updating. During the internal cycle iteration, the optimal elastic parameters obtained in the previous external cycle were combined with the statistical probability of the lithofacies and elastic parameters, and the elastic parameters were combined with the permanence-of-ratios updating theory to achieve local lithofacies updating. Based on this, inversion prediction of the lithofacies and elastic parameters was carried out. In the outer loop, the error between the current global inversion results and the actual records was computed to determine whether the inversion results of the elastic parameters meet the conditions, and the cycle iteration was carried out again until the local elastic parameter inversion and the global inversion satisfy the given conditions. Then, the cycle was terminated and the inversion results were output. Both the theoretical and practical model tests conducted confirm that the correlation between the actual seismic traces and the synthetic seismic traces obtained using the LPUMI is the best, and the degree of lithofacies matching is the highest.
The results of the MPI are the next best, and the results of the TPI are the worst, indicating that the reservoir parameters obtained using the LPUMI are closer to the actual reservoir parameters. This method is worth popularizing in practical exploration and development. The calculation efficiency of the double iteration in the LPUMI is much better than that of the traditional MPI; however, it is still lower than that of the TPI. In the future, deep learning or parallel computing methods can be introduced to improve the calculation efficiency. Another improvement may lie in the use of the elastic parameters for rock lithofacies updating. Here, only the elastic property at the un-simulated grid point was used to update the rock lithofacies at the same point; a multi-point geostatistical simulation for sedimentary pattern reproduction and updating was not conducted, which may have prevented the reproduction of a continuous geobody. How to use the elastic parameters within a multi-point data template to update rock lithofacies patterns is still a challenge for future work.
Conflicts of Interest: The authors declare no conflict of interest.
Abbreviations
The following abbreviations are used in this manuscript:
LPUMI: a multi-point geostatistical inversion method based on the local probability updating method for the inversion of lithofacies.
MCMC: Markov chain Monte Carlo.
ASR: adaptive spatial resampling.
Initial clinical experience of CrossBoss catheter for in-stent chronic total occlusion lesions Abstract Background: The CrossBoss coronary chronic total occlusion (CTO) crossing catheter has been demonstrated to have greatly improved the success rate of crossing CTO lesions, but there are no published data on its application for in-stent CTO lesions. Methods: In the current study, we retrospectively reviewed the clinical data of 8 patients with in-stent CTO lesions that were managed with the CrossBoss catheter, and herein we report the efficacy and safety of the CrossBoss crossing and re-entry system for this clinically challenging condition. Results: The CrossBoss catheter was used for 8 patients with in-stent CTO lesions, which resulted in success in 6 cases and failure in 2 cases, a 75% success rate. Of the 6 patients with successful treatment, 5 cases had the occlusive lesions crossed with the CrossBoss catheter through a proximal lumen-to-distal lumen approach, whereas the remaining case had the occlusive lesions penetrated by the CrossBoss catheter and the guidewire. Two cases failed in treatment as the CrossBoss catheter could not cross the occlusive lesions. The 6 cases with successful treatment included 3 cases with occlusive lesions in the left anterior descending artery, 1 case with occlusive lesions in the obtuse marginal branches, and 2 cases with occlusive lesions in the right coronary artery, and the 2 cases with treatment failure had their occlusive lesions in the right coronary artery. In addition, patients with a higher Japan chronic total occlusion score were found to have a lower success rate of crossing the occlusive lesions. None of the patients developed complications. Conclusion: Our study demonstrates that the CrossBoss catheter has a high success rate and is safe for in-stent CTOs and can be recommended for this rather clinically challenging condition.
Introduction
Some 20% to 30% of coronary heart disease patients require percutaneous coronary intervention (PCI) because of chronic total occlusions (CTOs). [1,2] Though an ever-growing and impressive array of dedicated guidewires has become available, the success rate still hovers around 70% in most studies [3,4] and may be as low as 49% as reported in the SYNTAX trial, [5] with failure to cross a wire being a major contributor to an unsuccessful PCI for CTO. Furthermore, the risk of stent thrombosis after stent implantation remains high despite antithrombotic therapy, and in-stent CTO lesions account for up to one-quarter of all PCIs for CTO. [6,7] However, PCI for in-stent CTO suffers from a low success rate due to challenges with wire and balloon crossing. [8] A soft guidewire may fail to cross the fibrous calcified tissues in the occlusion, whereas a hard guidewire may penetrate the stent mesh and even penetrate through the vascular wall due to poor controllability. The Boston Scientific Coronary CTO Crossing System, which was first introduced in 2008, consists of the CrossBoss catheter, the Stingray balloon, and the Stingray guidewire. [9] Unlike guidewires, the CrossBoss catheter has a hydrophilic-coated shaft and a blunt 1.0 × 1.0-mm olive tip, which blocks the catheter tip from penetrating through the stent mesh. Furthermore, the visual structure of the stent guides the moving direction of the CrossBoss catheter. Even if it penetrates through the stent region and enters the subintimal region to form a dissection, the CrossBoss catheter may re-enter the lumen by using the Stingray balloon and the Stingray guidewire. [9] The CrossBoss coronary CTO crossing catheter has been shown to have greatly improved the success rate of crossing CTO lesions [1] and, even for patients refractory to conventional PCI, the CrossBoss catheter has achieved a high success rate without compromising safety, as reported in the FAST-CTOs trial.
[10] Nevertheless, scant knowledge is available about the efficacy and safety of the CrossBoss catheter for in-stent CTO lesions. The FAST-CTOs trial also excluded this subset of patients from the study. In the current study, we retrospectively reviewed the clinical data of 8 patients with in-stent CTO lesions that were managed with the CrossBoss catheter, and herein we report the efficacy and safety of the CrossBoss crossing and re-entry system for this clinically challenging condition.
Patients
In this study, we reviewed the clinical records of 2429 coronary heart disease patients who were seen at the Second Hospital of Jilin University between January 2015 and December 2015, had undergone PCI due to in-stent restenosis, and were managed with the CrossBoss catheter. In-stent CTO was considered present if the occlusion was located within a previously placed stent or within the 5-mm margins proximal and distal to the stent. The study protocol was approved by the local institutional review board at the authors' affiliated institution, and no patient consent was required due to the retrospective nature of this study.
The procedure
Patients with in-stent CTO were treated with aspirin (100 mg/day) and clopidogrel (75 mg/day) before angiography for at least 5 days. During the procedure, intravenous heparin was given to maintain an activated clotting time of >250 seconds. The radial or femoral approach and 6F or 7F guiding catheters were left to the discretion of the operator. The CrossBoss was used initially or after the conventional method failed. When the wire passed the occlusion lesions, a drug-eluting stent or drug-eluting balloon was used. An example of the procedure for a left anterior descending artery (LAD) in-stent CTO using the CrossBoss catheter is shown in Fig. 1, and the procedure for a right coronary artery (RCA) in-stent CTO using the CrossBoss catheter is shown in Fig. 2. After the procedure, patients were maintained on aspirin 100 mg and clopidogrel 75 mg daily.
Patient evaluation
The Multicenter CTO Registry of Japan J-CTO (Japan chronic total occlusion) score was calculated for each lesion based on occlusion length, stump morphology, the presence of calcification, the presence of tortuosity, and any prior attempt to open the CTO. [3] The total procedure time and the time spent using the CrossBoss were recorded. Technical success was defined by a final TIMI 3 flow and <30% residual stenosis, whereas procedural success was defined as technical success without major adverse cardiac events (MACE). MACE (cardiac death, myocardial infarction, target lesion revascularization, or emergency bypass surgery) were recorded through 30 days post-procedure.
Statistical analysis
Continuous parameters were reported as median (range) if they were not normally distributed. Discrete parameters were reported.
Treatment outcomes
The median total procedure time was 49 (range 20.0-86.4) minutes and the median duration of CrossBoss use was 10.5 (range 1.65-45.0) minutes. The angiographic data of the patients are shown in Table 2, and the imaging and surgical features of each patient are listed in Table 3. Both technical and procedural success were achieved in 6 cases (75%, 6/8). Technical success was achieved in 5 patients via the lumen-to-lumen approach (J-CTO score = 1). For these patients, the median procedure time for using the CrossBoss catheter was 2.65 (range 1.65-5) minutes. In the remaining patient, the CrossBoss catheter partially crossed the occlusive lesion, and subsequent use of the guidewire finally resulted in successful crossing of the CTO lesions (J-CTO score = 2). The CrossBoss catheter did not manage to cross the occlusive lesions in 2 cases, in which the procedure time for using the CrossBoss catheter was 15.75 and 45 minutes, respectively. Their J-CTO scores were 2 or 3.
Safety
No MACEs were documented.
Discussion
PCI for in-stent CTO is clinically challenging and suffers from a low success rate.
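The five-factor J-CTO score used in the Patient evaluation section assigns one point per adverse feature (blunt stump, calcification, bending greater than 45 degrees, occlusion length of 20 mm or more, and a prior failed attempt), which can be sketched as:

```python
def jcto_score(blunt_stump, calcification, bending_over_45deg,
               length_20mm_or_more, prior_failed_attempt):
    """J-CTO score: one point per adverse lesion feature."""
    return sum(map(bool, (blunt_stump, calcification, bending_over_45deg,
                          length_20mm_or_more, prior_failed_attempt)))

def jcto_grade(score):
    """Commonly used difficulty grades associated with the J-CTO score."""
    return {0: "easy", 1: "intermediate", 2: "difficult"}.get(score, "very difficult")
```

Under this scheme, the five successful lumen-to-lumen cases above (score 1) grade as "intermediate," while the failed cases (scores 2-3) grade as "difficult" or "very difficult."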
As a recently developed apparatus for CTO lesions, [9] the CrossBoss catheter has a hydrophilic-coated hollow metal shaft and a blunt 3F tip. When the apparatus at the tail of the catheter is rapidly rotated, the tip may cross the lesions without penetrating through the vascular wall, allowing the CrossBoss catheter to cross the lumen or subintimal region to reach the distal occlusive lesions. [11] Though the CrossBoss catheter has been shown to improve the success rate of revascularization of CTOs refractory to conventional PCI in the FAST-CTOs trial, scant data are available on the efficacy and safety of the CrossBoss catheter in in-stent CTO patients. We report in-stent CTOs treated with the CrossBoss catheter, with a technical success rate of 75% (6/8), which is comparable to the rates reported for in-stent CTOs. [11,12] We also noted no MACEs in our patients. Our study demonstrates that in in-stent CTOs, the CrossBoss catheter has a high success rate and is safe. The CrossBoss catheter may directly cross the lumen to reach the distal vascular lumen in ∼20% to 30% of patients with coronary CTO. [10] If the CrossBoss catheter crosses the occlusive segment via the subintimal region, combining the Stingray balloon and stiff guidewires may allow re-entry into the distal vascular lumen. After the CrossBoss catheter crosses the occlusive segment, the guidewire may reach the distal lumen via the hollow shaft of the catheter. In 5 of our patients, the CrossBoss catheter completely crossed the occlusive lesions. The CrossBoss catheter only partially crossed the occlusive lesions in 1 case and failed to cross the bending segment of the occlusive lesions in the 2 cases with technical failure. Our success rate of 75% for crossing in-stent CTO lesions with the CrossBoss catheter was lower than that reported by Wilson and James (90%) [11] and Papayannis et al (83%).
[12] Wilson and James [11] reported that the CrossBoss catheter reached the distal lumen from the proximal lumen via the occlusive segment in 88% of patients with successful treatment, which is comparable to our data (83.3%, 5/6) but higher than the 41.5% reported by Papayannis et al. [12] In the remaining patients, the CrossBoss catheter crossed the occlusive segment with the help of guidewires after crossing the proximal fibrous cap, or the CrossBoss catheter entered the distal subintimal region and then re-entered the lumen by using the Stingray balloon. In our series, the CrossBoss catheter did not enter the subintimal region in any subject with successful crossing, and no re-entry technique was used. We speculate that the high success rate of our series was partially attributable to the low J-CTO scores (mean 1.5; median 1; range 1 to 3) of our patients, which is usually considered a predictor of the outcome of interventional therapy. [11] Patients who had their CTO lesions successfully crossed without resort to the guidewire had a J-CTO score of 1, whereas those who failed in the attempt had a J-CTO score of 2 or 3. All 3 patients with in-stent CTO in the left anterior descending artery achieved technical success. Only 2 (2/4) cases with in-stent CTO in the right coronary artery had technical success. Compared with the anterior descending artery, there are 2 turns in the right coronary artery, which has a high degree of bending. In the 2 cases with treatment failure, the tortuosity of the lesions in the right coronary artery was more than 45 degrees, which has been reported to reduce the success rate of crossing CTO lesions. [13] Tortuosity decreases the forward pushing force of the CrossBoss catheter, jeopardizing its ability to cross the occlusion. Wilson et al [7] found that most patients with failure in crossing had extremely tortuous occlusions.
One other contributor to our success rate is the lack of apparent calcification at the occlusions in our patients, as calcification is a predictor of failure in interventional therapy; the CrossBoss catheter has difficulty penetrating severely calcified lesions. In this study, no coronary artery perforation occurred, and the CrossBoss catheter did not cross the stent mesh to enter the subintimal region, indicating that the CrossBoss catheter is safe for in-stent CTO lesions. To date, there is but 1 case report describing the crossing of the CrossBoss catheter through the stent mesh into the subintimal region in a patient with an in-stent CTO lesion. [14] One limitation of our study is the small size of the study cohort, and the applicability of our experience is restricted by the fact that the patients came from a single tertiary care center. Because of financial factors, not all in-stent CTOs were managed with the CrossBoss catheter, as it is very expensive, contributing to the small size of the study cohort. In conclusion, our study implies that the CrossBoss catheter has a relatively high success rate and is safe for in-stent CTOs, and it might have the potential to be recommended for this rather clinically challenging condition.
Comparative Effects of Statin Therapy versus Renin-Angiotensin System Blocking Therapy in Patients with Ischemic Heart Failure Who Underwent Percutaneous Coronary Intervention Statins and renin-angiotensin system (RAS) blockers are key drugs for treating patients with an acute myocardial infarction (AMI). This study was designed to show the association between treatment with statins or RAS blockers and clinical outcomes and the efficacy of two drug combination therapies in patients with ischemic heart failure (IHF) who underwent revascularization for an AMI. A total of 804 AMI patients with a left ventricular ejection fraction <40% who undertook percutaneous coronary interventions (PCI) were analyzed using the Korea Acute Myocardial Infarction Registry (KAMIR). They were divided into four groups according to the use of medications [Group I: combination of statin and RAS blocker (n=611), Group II: statin alone (n=112), Group III: RAS blocker alone (n=53), Group IV: neither treatment (n=28)]. The cumulative incidence of major adverse cardiac and cerebrovascular events (MACCEs) and independent predictors of MACCEs were investigated. Over a median follow-up of nearly 1 year, MACCEs had occurred in 48 patients (7.9%) in Group I, 16 patients (14.3%) in Group II, 3 patients (5.7%) in Group III, and 7 patients (21.4%) in Group IV (p=0.013). The groups using RAS blockers (Groups I and III) showed better clinical outcomes compared with the other groups. By multivariate analysis, use of RAS blockers was the most powerful independent predictor of MACCEs in patients with IHF who underwent PCI (odds ratio 0.469, 95% confidence interval 0.285-0.772; p=0.003), but statin therapy was not found to be an independent predictor. The use of RAS blockers, but not statins, was associated with better clinical outcomes in patients with IHF who underwent PCI.
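As a rough sanity check on the four-group comparison in the abstract (MACCEs in 48/611, 16/112, 3/53, and 7/28 patients), a Pearson chi-square test of independence on the raw counts can be sketched as below. This is not necessarily the test the authors used to obtain p=0.013 (cumulative-incidence analyses typically use survival methods), so agreement is only qualitative:

```python
import numpy as np

# MACCE counts per treatment group, taken from the abstract.
events = np.array([48, 16, 3, 7])     # Groups I-IV
totals = np.array([611, 112, 53, 28])

def chi2_independence(events, totals):
    """Pearson chi-square statistic for a k x 2 contingency table
    (event vs. no event per group)."""
    obs = np.stack([events, totals - events], axis=1).astype(float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    exp = row * col / obs.sum()
    return float(((obs - exp) ** 2 / exp).sum())

stat = chi2_independence(events, totals)
df = len(events) - 1           # (k - 1) * (2 - 1) degrees of freedom
CHI2_CRIT_005_DF3 = 7.815      # upper 5% point of chi-square with 3 df
```

The statistic comfortably exceeds the 5% critical value, consistent with the abstract's conclusion that MACCE incidence differs across the four groups.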
INTRODUCTION
Statin therapy is a key treatment for patients with coronary heart disease (CHD). Multiple randomized trials have shown beneficial effects of statin therapy for reducing the rate of recurrent myocardial infarction (MI), coronary disease mortality, the need for revascularization, and stroke.
1,2 Inhibitors of the renin-angiotensin system (RAS), such as angiotensin converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs), are important drugs for patients with any potential cause of systolic heart failure (HF). RAS blockers improve morbidity and mortality in these patients and have also been recommended for MI patients with HF or a left ventricular ejection fraction (LVEF) less than 40%. [3][4][5] A post-hoc analysis of the GREACE (Greek Atorvastatin and Coronary Heart Disease Evaluation) study showed a 'synergistic effect' of statins and ACE inhibitors in reducing vascular events in patients with CHD. Aggressive statin use in the absence of an ACE inhibitor also substantially reduced cardiovascular events, whereas treatment with an ACE inhibitor without statin use did not significantly reduce clinical events in comparison to patients not treated with an ACE inhibitor. 6 The aim of the present study was to compare these two drug classes, statins and RAS blockers, and to assess which would be more effective for the reduction of major adverse cardiac and cerebrovascular events (MACCEs) in IHF patients who underwent percutaneous coronary intervention (PCI) for acute myocardial infarction (AMI).
Patient population
The study population was selected from the Korea Acute Myocardial Infarction Registry (KAMIR). This is a Korean prospective multicenter data collection registry reflecting real-world treatment practices and outcomes in Asian patients diagnosed with AMI. 7 The registry includes 53 community and teaching hospitals with facilities for primary PCI and on-site cardiac surgery. The KAMIR was supported by a research grant from the Korean Circulation Society in commemoration of its 50th anniversary. Data were collected by a trained study coordinator using a standardized case report form and protocol. The study protocol was approved by the ethics committee at each participating institution.
Between November 2011 and July 2014, 9,369 AMI patients were enrolled. Inclusion criteria for the present analysis were age over 18 years, a diagnosis of AMI with an LVEF <40%, and treatment with PCI. Patients were excluded if they died during hospitalization for the index procedure, were lost to follow-up, or lacked LVEF information. From the registered patients, a total of 804 patients were included in this analysis (Fig. 1). Patients were divided into four groups according to the documentation of drugs prescribed at discharge [Group I, combination of statins and RAS blockers (n=611); Group II, statins alone (n=112); Group III, RAS blockers alone (n=53); and Group IV, neither treatment (n=28)].
PCI procedure
PCI was performed using a standard technique. All patients received a 300 mg loading dose of aspirin and a 300 to 600 mg loading dose of clopidogrel before PCI unless they had previously received these antiplatelet drugs. Anticoagulation during PCI was performed according to current practice guidelines established by the Korean Society of Interventional Cardiology. Decisions regarding thrombus aspiration, pre-dilatation, direct stenting, post-adjunctive balloon inflation, and the administration of glycoprotein IIb/IIIa inhibitors were left to the discretion of individual operators. Drug-eluting stents were used without restrictions. The duration of dual antiplatelet therapy was determined by the operators.
Definitions and outcomes
MI was diagnosed by the presence of a characteristic clinical presentation, serial changes on electrocardiogram suggesting infarction, and increased cardiac enzymes. ST-segment elevation myocardial infarction (STEMI) was diagnosed as a suggestive symptom with an ST-segment elevation >2 mm in ≥2 precordial leads, an ST-segment elevation >1 mm in ≥2 limb leads, or a new left bundle branch block on the 12-lead electrocardiogram, with a concomitant increase of at least one cardiac enzyme.
Heart failure was defined as decreased ventricular function, diagnosed as an LVEF <40% by echocardiography, and ischemic heart failure as heart failure with an ischemic cause such as coronary artery disease. The study end point was the time to the first MACCE within 2 years. MACCEs included cardiac death, non-fatal MI, repeat PCI, the need for coronary artery bypass grafting, and cerebrovascular accident.
Statistical analysis
Baseline and angiographic characteristics were summarized with the use of descriptive statistics; between-group differences were assessed by means of the Kruskal-Wallis test. All other data, which were non-parametrically distributed and are expressed as median values with interquartile ranges, were also analyzed with the Kruskal-Wallis test for between-group comparisons. Survival curves were constructed with Kaplan-Meier estimates and compared with the log-rank test; data were censored at the time of the last visit. The Cox proportional-hazards model was used to identify factors associated with an increased risk of MACCEs. Factors associated with MACCEs with a p value of less than 0.20 in the univariate analysis were entered into the multivariate model, and non-significant factors were removed by means of a backward-selection procedure. All statistical analyses were done with SPSS 18.0 (Statistical Package for the Social Sciences, SPSS-PC Inc., Chicago, IL, U.S.A.). A p value <0.05 was considered statistically significant.
Baseline clinical characteristics
Baseline clinical characteristics are shown in Table 1.
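As a hedged illustration of the Kaplan-Meier (product-limit) survival estimates used in the analysis, not the study's actual SPSS procedure, a minimal estimator on toy censored follow-up data might look like:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimator.

    times:  follow-up duration for each patient (e.g., days to MACCE or censoring)
    events: 1 if the event (MACCE) occurred at that time, 0 if censored
    Returns {event_time: survival probability immediately after that time}.
    """
    data = sorted(zip(times, events))
    n = len(data)
    survival = 1.0
    curve = {}
    i = 0
    while i < n:
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)  # events at time t
        at_risk = n - i                               # patients still under observation
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve[t] = survival
        while i < n and data[i][0] == t:              # advance past all subjects at time t
            i += 1
    return curve

# Toy data: 5 patients, events at t=1, 2, 4; censored at t=3 and t=5.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Survival drops to 0.8, 0.6, and 0.3 after the three event times; the censored patient at t=3 shrinks the at-risk set without contributing an event. The log-rank test and Cox model used in this study build on the same risk-set bookkeeping.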
There were no significant differences among the treatment groups with respect to the overall demographic characteristics, except for the history of diabetes mellitus, Killip class ≥III on admission, estimated glomerular filtration rate (eGFR), total cholesterol and low density lipoprotein (LDL) cholesterol levels, and the proportion receiving β-blockers. Group IV had the highest prevalence of diabetes mellitus and the lowest eGFR. Groups III and IV had higher rates of Killip class ≥III on admission.
Procedural characteristics
The coronary angiographic and procedural characteristics are shown in Table 2. The prevalence of left anterior descending artery or multi-vessel involvement did not differ, the rates of severe lesions and pre-PCI Thrombolysis in Myocardial Infarction (TIMI) flow were similar, and there were no significant differences in stent diameter or length among the four groups. However, Group IV had the lowest proportion of stent use and the lowest rate of post-procedural TIMI grade 3 flow.
Clinical outcomes
The median duration of follow-up was 362 days (interquartile range 186-686). Table 3 shows the cumulative clinical outcomes of the study groups. MACCEs occurred in 48 patients (7.9%) in Group I, in 16 patients (14.3%) in Group II, in 3 patients (5.7%) in Group III, and in 7 patients (21.4%) in Group IV (p=0.013). Fig. 2 shows the Kaplan-Meier curves for MACCE-free survival. The survival curves differed significantly among the four groups (p=0.005). In post-hoc analysis, Group I showed a better outcome compared with Group II (p=0.047) or Group IV (p=0.011), but did not differ from Group III (p=0.634). Group III also showed a better outcome compared with Group IV (p=0.011). The remaining inter-group comparisons showed no significant differences: Group II vs. Group III (p=0.130) and Group II vs. Group IV (p=0.183). Table 4 shows the univariate analysis of the independent predictors for the development of a MACCE.
Age, previous history of hypertension, eGFR, body-mass index, multi-vessel involvement, STEMI, and use of RAS blockers were associated with MACCEs. Table 5 shows the multivariate analysis of the independent predictors for the development of a MACCE. The use of RAS blockers [odds ratio (OR) 0.469, 95% confidence interval (CI) 0.285-0.772, p=0.003], eGFR (OR 0.989, 95% CI 0.981-0.997, p=0.005), multi-vessel involvement (OR 1.810, 95% CI 1.049-3.125, p=0.033), and a previous history of hypertension (OR 1.691, 95% CI 1.015-2.817, p=0.044) were independent predictors of MACCEs in patients with IHF who underwent PCI for AMI.
DISCUSSION
This study was a retrospective analysis of the KAMIR data, which gives a comprehensive view of the contemporary treatments and outcomes of patients with AMI in Korea. The present study found that in patients with AMI and reduced left ventricular systolic function who underwent PCI, the use of RAS blockers was associated with a 53% reduction in the risk of cardiac death, MI, revascularization, or cerebrovascular accident. However, statin use was not associated with a reduction in the incidence of MACCEs. Regarding the MACCE-free survival curve, the proportion of patients who suffered a MACCE in Group IV was much higher than in the other groups. As mentioned above, Group IV had a higher prevalence of diabetes mellitus, higher rates of Killip class ≥III, a lower eGFR, a lower LDL cholesterol level, lower rates of stent use, and lower rates of post-PCI TIMI grade 3 flow. Although there was no statistical difference among the four groups, Group IV also had a higher prevalence of hypertension and heart failure, as well as a higher proportion of female patients, than Group I or II. These differences in baseline characteristics might have led to a poorer prognosis in Group IV.
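The odds ratios and 95% confidence intervals reported in the multivariate analysis follow from a fitted model coefficient (a log ratio) and its standard error via the Wald construction. A small sketch, using illustrative numbers close to the RAS-blocker result rather than the study's actual model output:

```python
import math

def ratio_with_ci(beta, se, z=1.96):
    """Convert a regression coefficient (log odds/hazard ratio) and its
    standard error into a ratio estimate with a Wald 95% confidence interval."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values only: a coefficient of about -0.757 with SE 0.254
# reproduces a ratio near 0.469 with a 95% CI of roughly 0.285-0.772.
or_, lo, hi = ratio_with_ci(-0.757, 0.254)
```

Because the transformation is just exponentiation, a CI that excludes 1.0 on the ratio scale corresponds to a coefficient CI that excludes 0, i.e., a significant predictor.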
The highest number of non-cardiac deaths and the lowest frequency of prescribed β-blockers in Group IV indicate that Group IV patients had poor general health conditions such as low blood pressure or a slow heart rate. The small number of patients in Group IV, only 28 persons, prevents the results of this study from being widely generalized. After excluding Group IV and comparing Group I with Group II and Group I with Group III, RAS blockers showed a statistically significant beneficial effect (p=0.047), but statins did not (p=0.638). ACE inhibitors can reduce morbidity and mortality due to major cardiovascular events in patients with left ventricular systolic dysfunction after AMI. [8][9][10] The evidence for ARBs is weaker than that for ACE inhibitors. However, in the Valsartan in Acute Myocardial Infarction (VALIANT) trial, valsartan was as effective as captopril in patients with left ventricular systolic dysfunction after AMI. 11 Therefore, the guidelines recommend that ACE inhibitors should be started and continued in all patients with an LVEF less than 0.40 after AMI and that ARBs should be considered as an alternative in patients who are intolerant of ACE inhibitors. 4,5 Recent studies show that ARBs may be better than ACE inhibitors in some specific subgroups, such as patients with STEMI and preserved left ventricular systolic function. 12 In the present study, RAS blockers reduced MACCEs effectively compared to statin use alone or to using neither RAS blockers nor statins. As RAS blockers share similar mechanisms, we combined ACE inhibitors and ARBs into one group. ARBs are often assumed to show a uniform class effect, but they can be classified as insurmountable or surmountable. Insurmountable ARBs were more effective with respect to long-term clinical outcomes than surmountable ARBs in patients with AMI, in those with an LVEF greater than 40% with a low Killip class, or in those with normal renal function.
13 Further studies are needed to evaluate the effectiveness of the subclasses of ARBs. Statins also lower the risk of coronary heart disease death, recurrent MI, stroke, and the need for coronary revascularization in patients stabilized after an acute coronary syndrome. 14,15 All patients with coronary heart disease should receive long-term intensive lipid lowering therapy with a statin. 4,5 When it comes to patients with HF, however, the efficacy and safety of statin therapy are still controversial. Large observational studies and post hoc analyses from large clinical trials have suggested that statins could provide clinical benefits to patients with HF. [16][17][18] However, two recent, large, randomized controlled trials demonstrated that rosuvastatin does not affect clinical outcomes in patients with chronic HF, even in ischemic cases. 19,20 Thus, the current guidelines do not recommend statins solely for HF in the absence of other indications for their use. 3 Some concerns have been raised that these results cannot be generalized to all patients with HF, because the enrolled patients had moderate to severe disease and were older, only low to moderate doses of rosuvastatin were tested, and ischemic events occur less frequently in patients with HF compared with the broad population of patients with established cardiovascular disease. 21 More recently, in a large-scale prospective propensity score matched cohort study, statins were associated with improved outcomes, specifically in the presence of ischemic heart disease. 22 A meta-analysis also found that statin therapy significantly decreases the rate of hospitalization for worsening HF and increases left ventricular ejection fraction, though it does not decrease all-cause or cardiovascular mortality compared with placebo. 23 Statin therapy for hypercholesterolemia and for primary and secondary prevention of CAD is well established; however, its effects in patients with HF remain unclear.
In the present study, statins did not show any beneficial effects in the prevention of future events. However, there was only a small number of patients who did not use statins (81 patients, 10% of the total). Therefore, a large, prospective, randomized trial is needed to determine the effectiveness of statin therapy for ischemic heart failure. There were several limitations to this study. First, this was a non-randomized study based on a prospective, observational registry; therefore, a selection bias was largely unavoidable. The baseline demography differed among the four groups. RAS blocker users were also more frequently prescribed β-blockers. In this study, β-blockers were not shown to be statistically effective (OR 0.843, 95% CI 0.488-1.456, p=0.547), but β-blockers are also a key drug in patients with HF and are proven to reduce mortality. β-blockers may account for some or all of the positive effects attributed to RAS blockers, especially the prevention of MACCEs. Although most confounders were included in the multivariate analysis, it is possible that some potential bias remained. Second, the types and doses of the prescribed statins were variable: there were four different types of statins, each with three or four dosage variations. Third, the KAMIR database does not record why physicians did not prescribe a RAS blocker or statin. Finally, a median follow-up of 12 months might be too short to conclusively determine the long-term efficacy of treatment. These findings support the need for prospective, randomized, blinded, placebo-controlled trials to determine the effectiveness of statin therapy for IHF. RAS blockers showed beneficial effects in patients with IHF who underwent PCI for AMI. But statins were not as-
Update of the UMD-FBN1 Mutation Database and Creation of an FBN1 Polymorphism Database

Fibrillin is the major component of extracellular microfibrils. Mutations in the fibrillin gene on chromosome 15 (FBN1) were first described in the heritable connective tissue disorder Marfan syndrome (MFS). FBN1 has also been shown to harbor mutations related to a spectrum of conditions phenotypically related to MFS, called "type-1 fibrillinopathies." In 1995, in an effort to standardize the information regarding these mutations and to facilitate their mutational analysis and the identification of structure/function and phenotype/genotype relationships, we created a human FBN1 mutation database, UMD-FBN1. This database gives access to a software package that provides specific routines and optimized multicriteria research and sorting tools. For each mutation, information is provided at the gene, protein, and clinical levels. This tool is now a worldwide reference and is frequently used by teams working in the field; more than 220,000 queries have been made to it since January 1998. The database has recently been modified to follow the guidelines on mutation databases of the HUGO Mutation Database Initiative (MDI) and the Human Genome Variation Society (HGVS), including their approved mutation nomenclature. The current update shows 559 entries, of which 421 are novel. UMD-FBN1 is accessible at www.umd.be/. We have also recently developed an FBN1 polymorphism database in order to facilitate diagnostics.
INTRODUCTION
Fibrillin (FBN1; MIM# 134797) is the major component of microfibrils, structures found in the extracellular matrix (ECM) either as isolated aggregates or closely associated with elastin fibers. Ultrastructurally, microfibrils display a typical ''beads-on-a-string'' appearance, consisting of a long series of globules connected by multiple filaments.
Fibrillin is a large glycoprotein (350 kDa) with a complex multidomain structure including 47 epidermal growth factor (EGF)-like modules, 43 of which have calcium-binding (cb) consensus sequences (cbEGF-like modules) (Fig. 1). The gene encoding type 1 fibrillin, FBN1, lies on the long arm of chromosome 15 at 15q21.1. This very large gene (over 230 kb [Human Genome Sequencing Project NT_034890 sequence]) is highly fragmented into 65 exons and is transcribed into a 10-kb mRNA that encodes a 2,871 amino acid protein [Lee et al., 1991; Maslen et al., 1991; Corson et al., 1993; Pereira et al., 1993; Biery et al., 1999]. Primary mutations in the FBN1 gene have been associated with a wide range of phenotypes that show considerable variation in the timing of onset, tissue distribution, and severity. The severe end of this broad spectrum of phenotypes, neonatal MFS, is defined by a rapidly progressive form of MFS, present at birth and associated with significant functional impairment. Death in early childhood often occurs from congestive heart failure associated with mitral and tricuspid regurgitation. Classic MFS (MIM# 154700) is a highly variable condition with pleiotropic manifestations in the eye (myopia and ectopia lentis), skeleton (long bone overgrowth and joint laxity), cardiovascular system (aortic dilatation/dissection, mitral valve prolapse), respiratory system (bullous changes in the lungs and recurrent spontaneous pneumothorax), skin (striae distensae), and integument (hernia and dural ectasia). Conditions at the mild end include: mitral valve prolapse, aortic dilatation, and skin and skeletal manifestations syndrome (MASS; MIM# 604308); mitral valve prolapse syndrome (MVP; MIM# 157700); isolated skeletal features; familial aortic dissection (FAD; MIM# 132900); ectopia lentis (EL; MIM# 129600) with relatively mild skeletal features; and, recently, Weill-Marchesani syndrome (WM; MIM# 266700) [Faivre et al., 2003].
Many of these conditions show significant overlap with MFS and are quite common in the general population. The spectrum of overlapping disorders presenting with FBN1 mutations defines the molecular group of ''type 1 fibrillinopathies'' [Collod-Beroud and Boileau, 2002; Robinson et al., 2002].
DATABASE
As in previous editions, mutation names are numbered with respect to the FBN1 gene cDNA sequence obtained from GenBank (accession number L13923; complete coding sequence of HUM-FIBRILLIN Homo sapiens fibrillin mRNA). Intron-exon boundaries are as defined by Pereira et al. [1993] and module organization is from SWISS-PROT (accession number P35555). The database has recently been modified to follow the guidelines on mutation databases of the HUGO Mutation Database Initiative (MDI) and the Human Genome Variation Society (HGVS), including the approved mutation nomenclature from den Dunnen and Antonarakis [2001], also available at www.HGVS.org/mutnomen/. Data on fibrillin protein biosynthesis classification groups have been added when described, as in Aoyama et al. [1994]. ''Restriction enzyme'' appears on the first page of the mutation record and provides a restriction map showing the site abolished or created by the mutation and the enzymes of interest.
MUTATION ANALYSES
To date, 563 FBN1 mutations have been identified and reported. Since the software cannot accommodate complex mutational events in a given individual, four mutations are not included in the current version of the database (3901_3904del; 3908_3909del [Nijbroek et al., 1995]; 1642del3ins20bp and 1888delAAinsC [Schrijver et al., 2002]; and 1882_1884delinsAAA [Rommel et al., 2002]). The double mutant 3212T>G;3219A>G (I1071S;E1073D) reported by Wang et al. [1996] is entered as two different records linked by the same sample ID, as is the double mutant 3797A>T;5746T>A (Y1266F;C1916S) found in a French proband (unpublished data).
Other mutations are spread throughout almost the entire gene without obvious predilection for any given region (Fig. 1). So far, mutations were thought to be private and generally nonrecurrent. With the increased amount of data, the percentage of recurrence has become more evident and now represents 12% (56 recurrent mutations representing 156 events; see Table 1). These mutations have been characterized by different teams, but until haplotype analysis becomes available, it is unclear whether these are truly recurrent mutations or whether they result from a founder effect. When information about transmission type (available for 398 mutations) is examined, there is a surprising number of de novo mutations as compared to transmitted mutations (188 de novo vs. 210 familial). One can ask whether this overrepresentation can really be explained by a selection bias, with each family accounting for only one entry; de novo mutations may represent more than the usually suspected 25%. Exonic mutations (492 mutations) are present in all exons; however, they are significantly underrepresented in exon 45 (cbEGF-like #27) and exon 57 (8-cysteine module #7) and overrepresented in exon 13 (encoding cbEGF-like #4), exon 26 (cbEGF-like #12), exon 27 (cbEGF-like #13), exon 28 (cbEGF-like #14), and exon 43 (cbEGF-like #25) (Table 2). The overrepresentation in exons 26 to 28 can be explained by the fact that almost all the mutations identified in neonatal cases of MFS are located within this area (exons 24-32). Furthermore, mutations in this region are more likely to be associated with a severe clinical phenotype [Tiecke et al., 2001], which probably led to a bias in patient selection for mutation detection. The fibrillin gene has been identified and sequenced in vertebrate species: bovine (L28748), mouse (L29454), rat (AF135059), dog (partial cDNA, AF29080), pig (AF073800), and chick (partial cDNA, U88872); and in an invertebrate: medusa (partial cDNA, L39930).
Identity at the amino acid level between mammals is so high (e.g., 97.8% human-bovine, 98% human-rat, and 96.2% human-mouse) that phylogenetic conservation should very often be observed at the amino acid position affected by a given missense mutation. In fact, all reported mutations in the FBN1 gene affect an amino acid conserved with respect to the bovine sequence. It is interesting to note that only six mutational events affect amino acids not conserved between mouse and man: three deletions (4179_4187del [Lesley Adès and Katherine Holman, personal communication, 2000], 4177_4177delG, and 7965_7977del [Schrijver et al., 2002]); a duplication (6409_6411dup, Collod-Beroud, in preparation); a nonsense mutation for which the causality is obvious (6339T>G); and a missense mutation (3382G>A, corresponding to V1128I [Loeys et al., 2001]). In the medusa (Podocoryne carnea), the primary amino acid sequence (>40% sequence identity with mammals), the highly repetitive multidomain structure, and the beaded microfibril appearance of fibrillin are highly conserved with respect to the human gene [Reber-Müller et al., 1995]. Fibrillin, as well as collagen, is thought to be a very early invention of metazoans in evolution [Doolittle, 1992]. Reber-Müller et al. [1995] suggested that with the invention of fibrillin, resilience and elasticity might have been added to the characteristics of the ECM, thus providing the biomechanical basis for the development of a free-swimming medusa life stage.
Large Rearrangements
As of January 14, 2003, the Human Gene Mutation Database (HGMD) contains 32,249 mutations in 1,301 genes (public dataset numbers available online). Of those, 1,771 (5.5%) are gross deletions and 262 (0.8%) are gross insertions and duplications.
Surprisingly, in the FBN1 gene, no major deletions have been reported except for the exon 60-62 genomic deletion [Kainulainen et al., 1992] and two multi-exon deletions [Liu et al., 2001] (see Supplementary Table S3, available online at http://www.wiley.com/humanmutation/suppmat/2003/v22.html). However, it is still unclear whether screening for this type of mutation has always been done. Deletions of contiguous EGF-like domains have different effects depending on their location within the fibrillin-1 molecule. Deletion of the three contiguous cbEGF-like domains encoded by exons 44-46 resulted in a severe phenotype with onset in infancy and a rapidly progressing clinical course [Liu et al., 2001]. In-frame deletion of exons 42-43 was characterized in a patient presenting with bilateral ectopia lentis and Marfanoid skeletal features [Liu et al., 2001], and finally, deletion of the cbEGF-like domains encoded by exons 60-62 in the C-terminal domain of fibrillin-1 resulted in a much less severe phenotype characterized by a moderate Marfanoid habitus [Kainulainen et al., 1992]. In the patient presenting with the in-frame deletion of exons 42-43, two sets of identical pentamers (cagta and ggaaa) were identified near the breakpoints in introns 41 and 43. In the deletion of exons 44-46, the exchange occurred within an identical pentamer (atttt). None of these sequences are known to predispose to genomic rearrangement. For the exon 60-62 genomic deletion, these data are not available.
Small Insertion/Deletion Mutations and Duplications
Of 80 small insertion/deletion mutations (<20 nt), 72 create a premature termination codon (PTC). These account for 12.9% of the total mutations (small insertions: 22; small deletions: 50). They act as dominant negatives but display a highly variable clinical phenotype, ranging from severe to mild.
The severity of the phenotype is directly related to the quantitative expression of the mutant allele and to the percentage of truncated proteins incorporated into the microfibrils [Nijbroek et al., 1995; Karttunen et al., 1998]. Of the 55 small deletions reported (Supplementary Table S1, available online at http://www.wiley.com/humanmutation/suppmat/2003/v22.html), 18 single base pair deletions can be the result of a mechanism of slipped mispairing, four small deletions (5791_5793delGTT, 8525_8529delTTAAC, 755_762del, and 3603_3668del) are flanked by direct repeats, and three are deletions of a repeated sequence (635_636delCA, 3355_3358delAGAG, and 4920_4923delTGAA). For the 30 other deletions, the mechanisms must be determined by searching for the presence of quasi-palindromic sequences, inverted repeats, or symmetric elements that facilitate the formation of secondary-structure intermediates [Krawczak et al., 1991]. Twenty-seven insertions have been reported so far (Supplementary Table S2). Six are insertions within runs of identical bases and can be explained by slippage mispairing at the replication fork. Eleven single base pair insertions correspond to the duplication of an existing base. Seven insertions are small duplications of existing sequences (Supplementary Table S2). The mechanisms for the remaining three insertions must be determined by searching, as described above.
Splice Mutations
The pre-mRNA splicing machinery recognizes exons and joins them together to form mRNAs with intact translational reading frames. Splicing requires canonical sequences at the intron/exon boundary. Three categories of mutations can be identified (Supplementary Table S3). The first corresponds to mutations in canonical sequences and represents, for the FBN1 gene, 60 of the 73 splice mutations; these cause abnormal splicing patterns by use of the nearest and strongest consensus splice site.
The second category (10 mutations) corresponds to mutations not located in canonical sequences. Over the last few years, several studies have indicated that distinct sequence elements distant from the splice sites are also needed for normal splicing. These sequence elements can affect splice-site recognition during constitutive splicing and can also play important roles in directing alternative splicing [Cooper and Mattox, 1997]. They can be auxiliary splicing elements (ASEs), which are required for cell-specific modulation of alternative splicing within introns that flank alternative exons, or exonic splicing enhancers (ESEs) within both coding and noncoding exons, which direct specific recognition of splice sites during constitutive and alternative splicing. Two exonic mutations, a nonsense mutation 6339T>G (Y2113X) and a silent exonic mutation 6354C>T (I2118I), have been reported to induce in-frame skipping of the entire exon 51, demonstrating the existence of an ESE sequence in exon 51 [Dietz and Kendzior, 1994; Caputi et al., 2002].
[Table note: ''Stat exons'' studies the distribution of exonic mutations and enables detection of a statistically significant difference between observed and expected mutations. The algorithm takes into account the mutability of each base of an exon. The mutability is defined as follows: for each base, the significance of a mutation is defined by the ability to produce a new amino acid; in these conditions, the position of the specific base in the codon has a major incidence. If any substitution results in a new amino acid, its individual mutability is three. If only two substitutions result in a new amino acid, its mutability is two. The exon's mutability is defined by the addition of the individual mutabilities of each base. The expected value is calculated by the formula: (exon's mutability / cDNA mutability) × observed mutations. The p value is calculated using the usual chi-square formula.]
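The ''Stat exons'' mutability score lends itself to a short sketch. The following is a reconstruction from the published definition, not the database's actual code; it uses the standard genetic code and counts a change to a stop codon as producing a ''new amino acid'':

```python
# Standard genetic code, built from the canonical 64-character amino acid string
# (codon order: first/second/third base each cycling through T, C, A, G).
BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
         for i, b1 in enumerate(BASES)
         for j, b2 in enumerate(BASES)
         for k, b3 in enumerate(BASES)}

def base_mutability(codon, pos):
    """Number of single-base substitutions at position pos (0-2) that change
    the encoded amino acid: 3 if every substitution is non-synonymous, down
    to 0 for a fully degenerate third position."""
    ref = CODON[codon]
    return sum(1 for b in "ACGT"
               if b != codon[pos]
               and CODON[codon[:pos] + b + codon[pos + 1:]] != ref)

def exon_mutability(seq):
    """Sum of per-base mutabilities over an in-frame coding sequence."""
    return sum(base_mutability(seq[i:i + 3], p)
               for i in range(0, len(seq), 3) for p in range(3))

def expected_mutations(exon_seq, cdna_mutability, observed_total):
    """Expected count per the quoted formula:
    (exon mutability / cDNA mutability) x observed mutations."""
    return exon_mutability(exon_seq) / cdna_mutability * observed_total
```

For example, the third position of a glycine codon (GGT) scores 0 because all substitutions are synonymous, while the third position of ATG (Met) scores 3; the observed-versus-expected comparison then feeds a standard chi-square test.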
Eight other mutations could belong to this category, with mutations up to 53 bp away from the canonical sequence. For most patients, cDNA analysis is not available and abnormal splicing has not been demonstrated, so causality is still uncertain. Finally, the third category of mutations comprises single base pair changes that introduce novel splice sites that substitute for the wild-type sites. A single recurrent mutation that may create a potential donor splice site has been reported, but an abnormal splicing pattern has not been demonstrated (3294C>T) (see Table S3). This alteration was not observed during screening of 504 chromosomes. In the majority of splice-site mutations, exon skipping results in an in-frame mRNA product. This results in a mutant fibrillin-1 missing an integral domain. Mutant alleles produce abnormal monomers that interfere considerably with the assembly of fibrillin molecules in the microfibrillar network. In a small number of patients (nine cases), the skipping of an exon causes a frameshift, a premature termination codon (PTC), and reduced mutant RNA levels because of nonsense-mediated decay of the mutant transcript [Frischmeyer and Dietz, 1999]. Furthermore, MFS patients have been reported in whom a donor splice site mutation results in the incorporation of intronic sequence (IVS46+1G>A, IVS27+1G>A) into the transcript or in the use of a cryptic splice site inducing partial exon deletion (IVS18+2T>C, IVS37+5G>T) (see Table S3).
Nonsense/Missense
Nonsense (61 cases) and missense (337 cases) mutations represent 10.9% and 60.3% of mutations, respectively. Among missense mutations, more than three-quarters (263/337) are located in the calcium-binding modules. These mutations can either introduce (20/263; 7.6%) or substitute (129/263; 49%) cysteine residues potentially implicated in disulfide bonding.
Pulse-chase studies on fibrillin-1 secretion from MFS patient fibroblasts have shown that these mutations often result in a delay in secretion or intracellular retention of profibrillin-1 [Aoyama et al., 1993, 1994; Halliday et al., 1999; Schrijver et al., 1999]. Since three disulfide bonds are required to maintain the native cbEGF-like module fold, suppression or addition of cysteine residues would result in cbEGF-like module misfolding, which impairs trafficking [Johnson and Haigh, 2000; Lippincott-Schwartz et al., 2000; Lord et al., 2000]. Most of the remaining mutations of these modules affect residues of the calcium consensus sequence and result in reduced calcium affinity, which may in turn destabilize the interface between two consecutive cbEGF-like modules. Calcium binding would rigidify the interdomain region between two cbEGF-like modules and allow multiple tandem cbEGF-like modules to take on a rigid, rod-like conformation [Knott et al., 1996; Cardy and Handford, 1998]. Increased protease susceptibility due to reduced calcium affinity is another mechanism reported for missense mutations. This pathological mechanism emphasizes the importance of calcium binding for the structural integrity of fibrillin-1. Mutations that do not belong to one of these subclasses are likely to be involved in protein-protein interactions. The other modules carry the remaining quarter of missense mutations, and their pathological mechanisms have yet to be clearly demonstrated. The global molecular analysis of the FBN1 mutations reveals two classes of mutations. The first, which represents more than one-third of the mutations (38.6%), corresponds to mutations predicted to result in shortened fibrillin-1 molecules; 61 nonsense mutations, 71 splicing errors, 23 insertions and duplications, 51 deletions, and 10 in-frame deletions are in this class.
These act in a dominant-negative manner but display a highly variable clinical phenotype, the severity of which is directly related to the quantitative expression of the mutant allele and to the percentage of truncated or shortened proteins incorporated in the microfibrils [Nijbroek et al., 1995; Karttunen et al., 1998]. The second class represents less than two-thirds (60.3%) of the mutations and corresponds to missense mutations, most of which (78%) are located in cbEGF-like modules. They can be subclassified into: 1) mutations creating or substituting cysteine residues potentially implicated in disulfide bonding and consequently in the correct folding of the monomer; and 2) mutations affecting amino acids implicated in calcium binding and subsequently in interdomain linkage, rigidification of the monomer, and protease susceptibility; see www.hgvs.org/mutamen for updated recommendations.

ERRORS MOST FREQUENTLY OBSERVED IN MUTATION PUBLICATIONS

The latest nomenclature recommendation, suggesting description of the mutation at the nucleotide level, is not always used [den Dunnen and Antonarakis, 2001]. Description of the mutation at the protein level is preferred by some diagnostic laboratories. This nomenclature is not sufficient, since the degenerate code means that several nucleotide changes can correspond to the same amino-acid change. The second most frequent error is ignoring the consensus that names an insertion or a deletion in a stretch of nucleotides or repeated sequences as a mutation of the 3′-most repeat. Examples are: ATCCCGT>ATCCGT is 5delC; ATCCCGT>ATCCCCGT is 5insC; AATCATCGTC>AATCGTC is 5delATC; and AATCATCGTC>AATCATCATCGTC is 7insATC [den Dunnen and Antonarakis, 2001]. Finally, some mutations are reported twice for the same patient with a modified patient ID in publications. There is therefore a need for a strong recommendation that each team implicated in diagnosis use a unique patient identifier corresponding to family, patient country, and team number.
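The 3′-most naming convention for indels can be mechanized. The following sketch is our own illustration (the function name is hypothetical, and it handles only a single contiguous insertion or deletion); scanning left-to-right for the first mismatch naturally lands the event on the last possible copy within a repeated stretch, reproducing the four examples above:

```python
def name_indel(ref, alt):
    """Name a simple insertion/deletion using the 3'-most convention.

    The left-to-right scan stops at the first disagreement, which in a
    run of repeated bases or motifs is the 3'-most possible position.
    """
    if len(ref) == len(alt):
        raise ValueError("not a simple insertion or deletion")
    longer, shorter = (ref, alt) if len(ref) > len(alt) else (alt, ref)
    d = len(longer) - len(shorter)
    # First position where the two sequences disagree.
    i = 0
    while i < len(shorter) and shorter[i] == longer[i]:
        i += 1
    segment = longer[i:i + d]
    if len(ref) > len(alt):                # deletion from the reference
        return f"{i + 1}del{segment}"      # 1-based first deleted base
    return f"{i}ins{segment}"              # inserted after 1-based position i
```

Applied to the published examples, name_indel("ATCCCGT", "ATCCGT") yields "5delC" and name_indel("AATCATCGTC", "AATCATCATCGTC") yields "7insATC", matching the consensus naming.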
FBN1 POLYMORPHISM DATABASE

To assist in diagnosis, we created an independent database for FBN1 polymorphisms that reports all the polymorphisms published or contributed. The aim is to make available to the community a complete set of FBN1 gene variations (mutations + polymorphisms), which may be used to quickly identify a causative mutation, thus saving the time and money of testing for a polymorphism in a control population. In a first step, along with the molecular data already published, we report information regarding: the population ethnicity in which the polymorphism was first described; other populations tested, if done; the number of tested chromosomes; and the frequency of each allele. The list of the published polymorphisms is reported in Supplementary Table S4 (available online at http://www.wiley.com/humanmutation/suppmat/2003/v22.html). Another step will be to report patients in which these polymorphisms have been found. In the future, this information would be of particular interest and assistance in determining whether an FBN1 genotype could be associated with more severe or more moderate phenotypes in coordination with other FBN1 mutations, ECM proteins, or enzyme genotypes. These data should provide a tool for beginning to understand the great phenotypic variability associated with a mutation in different probands or in the same family.

DISCUSSION

Elucidating the molecular basis of MFS and related fibrillinopathies is the major goal of the teams working on this pathology [Collod-Beroud and Boileau, 2002; Robinson et al., 2002]. The extreme clinical variability, the difficulties associated with clinical diagnosis, and the low detection rate of mutations in this large gene (240 kb) all conspire to slow progress. At present, it is not possible to predict the phenotype for an FBN1 mutation. On one hand, mutations affecting different positions within a given module may be associated with quite different phenotypes.
On the other hand, mutations affecting an analogous residue within two different modules may also be associated with differing phenotypes. Therefore, it is apparent that neither the location of the affected structural module in the protein nor the position of the altered residue is, in itself, sufficient to predict potential genotype-phenotype correlations [Palz et al., 2000]. The high degree of intrafamilial variability suggests that environmental and perhaps stochastic or epigenetic factors are important for the phenotypic expression of disease. The effects of unknown modifier genes (enhancing or protecting) on the clinical expression, as well as the conjugation of different alleles of the multiple fibrillin-interacting proteins, are likely to constitute the foundation of an enhanced susceptibility for disease severity. All of these hypotheses are starting points for future research.
Disruptive Models in Primary Care: Caring for High-Needs, High-Cost Populations

Starfield and colleagues have suggested four overarching attributes of good primary care: "first-contact access for each need; long-term person- (not disease-) focused care; comprehensive care for most health needs; and coordinated care when it must be sought elsewhere." As this series on reinventing primary care highlights, there is a compelling need for new care delivery models that would advance these objectives. This need is particularly urgent for high-needs, high-cost (HNHC) populations. By definition, HNHC patients require extensive attention and consume a disproportionate share of resources, and as a result they strain traditional office-based primary care practices. In this essay, we offer a clinical vignette highlighting the challenges of caring for HNHC populations. We then describe two categories of primary care-based approaches for managing HNHC populations: complex case management, and specialized clinics focused on HNHC patients. Although complex case management programs can be incorporated into or superimposed on the traditional primary care system, such efforts often fail to engage primary care clinicians and HNHC patients, and proven benefits have been modest to date. In contrast, specialized clinics for HNHC populations are more disruptive, as care for HNHC patients must be transferred to a multidisciplinary team that can offer enhanced care coordination and other support. Such specialized clinics may produce more substantial benefits, though rigorous evaluation of these programs is needed. We conclude by suggesting policy reforms to improve care for HNHC populations.
INTRODUCTION

Starfield and colleagues 1 have suggested four overarching attributes of good primary care: "first-contact access for each need; long-term person- (not disease-) focused care; comprehensive care for most health needs; and coordinated care when it must be sought elsewhere." As this JGIM symposium on reinventing primary care highlights, there is a compelling need to develop new models for advancing these objectives. The innovations described in this series of articles, particularly models of care delivery within retail clinics and the home environment, 2 the integration of behavioral health within primary care, 3 and technological advances facilitating primary care delivery, 4 could help primary care clinicians work towards Starfield's vision. The need to deliver on Starfield's aims is particularly urgent for the most vulnerable populations: high-needs patients with complex medical, behavioral health, and social problems, who frequently "fall through the cracks" within the existing healthcare landscape. 5 Such patients commonly report high levels of frustration with the care delivery system, and may experience avoidable complications resulting from disjointed, inaccessible, and poorly aligned care. 6 Because "high-needs" patients require extensive attention and services, they also consume a disproportionate share of resources. 7 In many populations, approximately 80% of medical expenditures are concentrated within 20% of patients, and 50% of costs within 5%. 8 As a result, the term "high-needs, high-cost" (HNHC) has emerged to describe these patients. 9,10 Sophisticated algorithms incorporating comorbidities, 11 socioeconomic factors, behavioral health and substance abuse diagnoses, geographic factors, and qualitative feedback from providers and staff have been developed for identifying HNHC patients (see Fig. 1).
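The concentration of spending described above is easy to illustrate numerically. The sketch below uses entirely synthetic numbers of our own invention, not data from any of the cited studies, to compute the share of total spending attributable to the costliest fraction of a patient population:

```python
def cost_concentration(costs, top_fraction):
    """Share of total spending attributable to the top `top_fraction` of patients."""
    ranked = sorted(costs, reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    return sum(ranked[:n_top]) / sum(ranked)

# Illustrative skewed distribution: a few very expensive patients,
# a moderate-cost middle tier, and many low-cost patients.
costs = [90_000] * 5 + [12_000] * 15 + [500] * 80
print(f"top 5%  -> {cost_concentration(costs, 0.05):.0%} of spending")
print(f"top 20% -> {cost_concentration(costs, 0.20):.0%} of spending")
```

Even this crude synthetic example reproduces the qualitative pattern the essay cites: the top 5% of patients account for well over half of spending, and the top 20% for the large majority.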
12 While improving care for HNHC patients represents an important opportunity, 6 it also presents a formidable challenge, particularly for traditional office-based primary care practices that lack the multidisciplinary resources these patients so often need. 13 Primary care clinicians in such clinics often feel ill-equipped to manage HNHC patients, 14 and may abdicate their role in overseeing their care. Indeed, much of the care for HNHC populations occurs in specialty offices, emergency rooms, and hospitals, 15 where services tend to be disjointed. 16 Without a primary care "quarterback" in charge, HNHC patients may experience avoidable and expensive downstream complications and costs. 17 Spurred by the rise of value-based payment models, 18 programs to redesign primary care for HNHC populations have recently proliferated. Though they vary in design, these programs attempt to empower primary care clinicians, supported by multidisciplinary teams, to reclaim the management of HNHC populations in an effort to achieve Starfield's aims. 6 The rationale for investing in these efforts is that high-quality, patient-centric care will not only improve outcomes for HNHC patients, but will also yield dividends in the form of avoided downstream resource utilization. 19,20 In this essay, we offer a clinical vignette highlighting the challenges of caring for HNHC populations. We then describe two primary care approaches for managing HNHC populations: complex case management 21 and specialized clinics focused on HNHC patients. 22 We conclude by suggesting policy reforms that could improve care for HNHC populations.

CLINICAL VIGNETTE

A 49-year-old man with a history of alcoholism, bipolar disorder, and an idiopathic cardiomyopathy has visited the emergency room seven times over the past year, resulting in three hospital admissions, all for the primary diagnosis of congestive heart failure.
The patient recently received an implantable cardioverter defibrillator and a biventricular pacemaker. He lives with his brother in subsidized housing in a low-income part of the city where fast food restaurants are abundant and there are few supermarkets. He has Medicaid coverage, and is empanelled to a busy community primary care physician, whom he has seen irregularly for 2 years. He has "no-showed" for visits to this physician on two occasions over the past year. The primary care office staff left messages to reschedule his appointments but conducted no other outreach. Because the patient has been flagged as a "hot-spotter" 5 by his community hospital, a case manager has become involved in his case. The case manager has made another primary care appointment for him. The patient arrives at his primary care doctor's office on a Friday afternoon, an hour-and-a-half late, because he missed the bus. Despite his tardiness and the fact that there are numerous other patients on the schedule, his primary care doctor agrees to see him. When the primary care doctor enters the room, she realizes that the faxed hospital records never arrived. During the visit, the patient appears fatigued and anxious. He reports eating all his meals at McDonald's because "there's nowhere else to eat." He is uncertain about which medications he is taking, and forgot his pill bottles. He does report taking valproic acid, without which he "gets into trouble," as well as a "blood thinner." He can't recall the last time the level of the blood thinner was checked. How should this primary care doctor manage her patient?

APPROACH 1: COMPLEX CASE MANAGEMENT EMBEDDED WITHIN PRIMARY CARE

Complex case management, also often called complex care management, refers to the provision of care coordination and support services for HNHC populations.
23 These services may include everything from connecting patients with social service programs, to identifying and assisting with psychological or behavioral health needs, to providing protocol-driven disease management. The services may be provided by a wide array of providers, 24 including social workers, nurses, medical assistants, and community health workers. These services may be superimposed on, or incorporated into, standard office-based primary care practices, as has been proposed in the patient-centered medical home model. 25

Figure 1: Stratification of health risk within a population. Patients with "severe health risk" represent the 5-20% of the primary care population with the highest needs and costs. These are the patients most likely to benefit from intensive primary care services. Standard continuity primary care works best for those with moderate to high risk. The majority of the population, who are at low health risk, require only routine preventative care and episodic low-acuity acute care.

Because it is generally not practical to hire dedicated case management staff in small community primary care practices, complex case management programs have predominantly arisen within large, risk-bearing health systems that have a sufficient population to justify investment in dedicated staff. 26 Examples include the complex case management programs offered by Kaiser, 27 the U.S. Department of Veterans Affairs (VA), 24 and Partners HealthCare. 28 Complex case management may also be offered by entities serving multiple small primary care practices that, in aggregate, offer sufficient scale to justify the investment. Examples include the ubiquitous complex case management programs offered by third-party payers, 29 the Community Care of North Carolina program offered by the state Medicaid agency, 30 and the charitably funded Camden Coalition that serves HNHC individuals in Camden, NJ.
5 While in principle complex case management might seem an intuitive strategy for supporting HNHC populations, in practice the impact of these programs has been modest. 10 Perhaps the most informative example to date was the Medicare Coordinated Care demonstration. In 2002, Medicare funded the demonstration to assess the value of intensive care coordination for chronically ill geriatric patients. 31 The demonstration involved 15 organizations: commercial disease management companies, community hospitals, academic medical centers, an integrated delivery system, a hospice, a long-term care facility, and a retirement community. Most programs in the demonstration enrolled patients with chronic conditions who had been hospitalized within the previous year, and assigned them to a care coordinator (typically a registered nurse) who assessed them and developed a care plan. The coordinators, usually with caseloads of fewer than 100 patients, contacted patients several times per month, primarily by telephone. Expectations for the program were high, 32 but the results were disappointing. Relative to control patients, quality measures among enrollees improved only modestly; there was no reduction in hospitalization rates, and after discounting the upfront costs, none of the programs lowered overall expenditures. 31 Still, some programs in the demonstration fared better than the rest. Most notably, Health Quality Partners, a non-profit disease management organization in Pennsylvania, reduced hospitalization rates and overall mortality among its enrollees, though the program did not lower total costs after discounting program fees. 33 In contrast to other programs in the demonstration, care coordinators in the Health Quality Partners program provided regular in-person interaction with their assigned patients and had close relationships with their primary care clinicians. The coordinators also focused on coordinating care transitions and medication management.
34 Other analyses of complex case management programs have similarly found that these strategies are essential for success. 10

APPROACH 2: DEDICATED PRIMARY CARE CLINICS FOR HIGH-NEEDS, HIGH-COST PATIENTS

Another approach for managing HNHC patients involves the establishment of specialized primary care clinics offering concentrated complex case management resources, including multidisciplinary personnel, all in one place. Primary care clinicians who work in these clinics typically have reduced panel sizes (sometimes less than 200, 35 vs. a standard primary care panel of >2,000 36 ) to enable more individualized attention for enrolled patients. One of the best known examples of such clinics was developed at The Boeing Company, a large self-insured employer. In 2007, Boeing established an intensive primary care clinic for the 15% of its employees with the highest health care costs. Patients enrolled in the clinic are supported by a primary care physician and a multidisciplinary team led by a nurse or social work coordinator with a panel of 150-200 patients. 37 Upon enrollment, patients receive a health assessment to identify their medical and social needs, and develop a care plan in collaboration with their team. Because of enriched resources in the clinic, patients may interact with their care team at all hours, and can usually schedule visits on a next-day basis. A controlled, though not peer-reviewed, evaluation found that the Boeing program produced substantial improvements in patients' physical and mental functioning, a 56% reduction in missed work days, and a 20% reduction in the monthly cost of care, driven in part by a 55% reduction in emergency department visits. 38 Another well-known primary care program for HNHC populations is Medicare's Program of All-Inclusive Care for the Elderly (PACE), which has over 100 sites nationwide. PACE targets older community-dwelling patients with medical and functional comorbidities meeting criteria for nursing home placement.
39 PACE programs, which accept full financial risk for their populations, offer rich complex case management resources and provide many non-traditional services, such as furnishing air conditioners during summer heat waves and arranging for social outings. They are also culturally tailored to their population. 40 Primary care clinicians in PACE programs often have panel sizes of less than 200. 41 Compared to the general Medicare population, PACE enrollees experience lower emergency room, hospital, specialty care, and nursing home utilization, and lower overall mortality, 39 though there has yet to be a rigorous prospective controlled evaluation of the program. There have been a number of iterations of clinic models for HNHC patients. Some have done away with the brick-and-mortar clinic entirely, instead bringing primary care services directly into the home environment. [42][43][44] In other models, patients receive longitudinal primary care from their regular clinicians, but are referred to specialized clinics during high-risk periods, such as perioperatively or following a hospitalization. The Medicare Advantage group CareMore 45 offers one such program, and a team at the University of Chicago is currently testing a similar model, the Comprehensive Care Physician Model. 46

DISCUSSION OF THE CLINICAL VIGNETTE

The vignette described above highlights both the challenges and opportunities for managing HNHC patients in the primary care setting. The patient in the vignette suffers not just from complex medical comorbidities, but also from social and behavioral challenges that adversely impact his health. He would clearly benefit from intensive medical, social, and psychosocial services, specifically assistance planning for medical appointments, identifying healthful dietary options, managing his medications, and connecting with social and behavioral support services. No one is better positioned to oversee his care than his primary care clinician.
Yet most primary care physicians lack the resources to coordinate these services, and as a result his care may end up fragmented. Both of the primary care programs for HNHC patients described in this essay could benefit the patient in this vignette. One option would be for his existing primary care clinician to refer him to a specialized HNHC clinic, where he would be reassigned to a new primary care clinician with a smaller panel size and more time and resources to devote to his care. In such a clinic, other team members, such as social workers, behavioral health experts, and health educators, would be available to proactively support his care. Such clinics are in short supply, however, particularly in low-income communities, 6 and it is far from certain he would have access to such a program, even in a Medicaid managed care environment. As an alternative, this patient's existing primary care clinician might continue overseeing his care, with support from complex case management resources in the community, e.g. from the patient's health plan or a community organization. The highest-yield programs utilize coordinators who develop close relationships with patients and their primary care clinicians, and focus on care transitions and medication management (though many real-world organizations do not meet these specifications). Unfortunately, most Medicaid programs do not reimburse primary care clinicians for their efforts in coordinating this care (though recently Medicare began offering a modest care management fee to support such activities 47 ).

POLICY CHANGES TO IMPROVE CARE FOR HIGH-NEEDS, HIGH-COST PATIENTS

As this vignette highlights, providing high-quality primary care for HNHC patients in a community setting often requires extraordinary effort. This underscores the need for policy changes to facilitate caring for HNHC populations.
Of the two approaches for managing HNHC populations described in this essay, existing evidence, though nascent, suggests that specialized clinics for HNHC populations offer more hope for substantially improving care of HNHC patients than complex case management embedded within, or superimposed upon, traditional primary care practices. Such specialized clinics for HNHC populations have begun to spread in recent years. 48,49 Nevertheless, such clinics require substantial investment and a rethinking of the traditional primary care infrastructure, and thus remain few and far between. Likewise, complex case management programs have recently proliferated 29 ; however, as many primary care clinicians and patients attest, 13 these services vary in quality and can be difficult to access in a timely manner. 50 Worst of all, resources for managing HNHC populations tend to be most scarce in low-income communities where medical and social complexity is most intense and where these services may be most needed. 6 Nevertheless, we believe that modest policy changes could substantially increase the availability of primary care resources for HNHC populations. 51 First, there is a need for changes in how care for HNHC populations is reimbursed. Traditional fee-for-service payment provides only limited reimbursement for the intensive services HNHC populations need, and programs that do cover these services like the Medicare care management fee 47 are available in only a small number of situations. Even large health delivery systems that assume partial financial risk for their population may be hesitant to make large investments in HNHC populations because they reap only a fraction of the potential downstream savings from these investments. 26 In contrast, organizations that assume full risk-such as the VA, Kaiser, CareMore, PACE, and some self-insured employers-tend to invest more heavily in their HNHC populations. 
29 Though full-risk models are impractical in many settings, a shift towards risk-bearing arrangements, as well as value-based reimbursement models 18 and shared-savings programs like accountable care organizations, 52 is likely to increase investment in HNHC populations. Moreover, reimbursement models should recognize not just traditional medical comorbidities but also social factors when adjusting for risk. 53 Another factor that may stymie efforts to improve care for HNHC populations is the fact that funding for medical, behavioral health, and social service delivery is often separate. 54,55 Particularly for HNHC individuals, social determinants such as housing, health literacy, food security, and access to healthy food and exercise opportunities, as well as behavioral health resources, are likely to influence health outcomes to a greater extent than are medical services. 56 As a result, even health systems that bear full financial risk for their population may be limited in the services they can offer HNHC patients. 57 Better integration of medical, behavioral health, and social service delivery, such as the models currently being tested as part of state waivers for managing dual-eligible populations, 58 the Centers for Medicare & Medicaid Services Accountable Health Communities demonstration, 59 and the British "personal health budgets" program, 60 could bolster the ability of health care delivery systems, and primary care clinicians in particular, to meet the needs of HNHC patients. 61 A final challenge for improving care for HNHC populations is that, with a handful of exceptions (some of which were noted in this essay), there are few rigorous studies of HNHC programs. 10 Part of the explanation for this is that evaluating programs for HNHC populations presents unique hurdles. Patients selected for extreme cost and utilization patterns tend to improve over time due to regression to the mean.
62 Thus, to rigorously evaluate HNHC programs, it is critical to include well-matched contemporaneous control groups, which adds complexity and cost to the evaluation. Still, to provide health system leadership with data to justify investments in programs for HNHC patients, as well as to ensure that these programs are designed efficiently, it will be necessary to make these research investments. 51 Fortunately, many of the reforms we have suggested are already under way. In some regions, health care delivery organizations are accepting more financial risk and accountability for their populations. 30 As noted above, efforts are also starting to take shape, albeit slowly, to integrate the delivery of medical, behavioral health, and social services through state initiatives 58 and national demonstrations. 59 Moreover, federal 63 and foundation 6 funders have prioritized investment in research for HNHC populations. The need to improve care for HNHC populations offers a compelling reason to maintain momentum for these policy changes.

CONCLUSION

As other articles in this symposium highlight, there is a pressing need for new strategies to reinvigorate primary care. HNHC populations are among those most likely to benefit from such reforms. Indeed, without better strategies for managing these resource-intensive, time-consuming patients, they may stymie more general efforts to redesign primary care for the twenty-first century. For HNHC populations, perhaps more than for any other group, the foundation of high-quality primary care rests on four simple attributes: being available for patients when needed, caring for them in a holistic manner, offering comprehensive services, and serving as the quarterback of their care. The challenge is how to provide primary care clinicians and their teams with the support and resources needed to deliver on these objectives for their most vulnerable patients.
In-situ observation of ancient co-fusion steelmaking process based on HT-CLSM

Observing and analyzing the microstructure of zone samples had been an essential concept in the study of co-fusion steelmaking. High-temperature confocal laser scanning microscopy (HT-CLSM) provides a new method for in-situ observation and real-time analysis of the co-fusion steelmaking process. In this research, a series of experiments was designed based on Shen Kuo's The Dream Pool Essays (a famous ancient Chinese text) and carried out by HT-CLSM. The results showed that a new interface was formed with the phase transition during the heating process, which had an essential influence on promoting the carbon diffusion rate in co-fusion steelmaking. The cast iron zone underwent mushy solidification because of decarburization. Inclusions in the cast iron zone exhibited collision, aggregation, and other behaviors and moved towards the boundary, which led to purification of the matrix and the aggregation of a large number of inclusions at the boundary.

Scientific Reports | (2020) 10:19842 | https://doi.org/10.1038/s41598-020-76326-5 | www.nature.com/scientificreports/

and connection of different materials [22][23][24][25]. However, the in-situ observation and research of ancient Chinese iron and steel materials have not been reported yet. In this paper, HT-CLSM was introduced into the study of ancient co-fusion steelmaking for the first time. Based on the technology recorded in ancient literature, the reactions in simulation experiments were observed in-situ, and real-time dynamic analysis of these reactions was carried out. With this technology's help, high-temperature reactions such as the fusion of cast iron, solidification due to decarburization, the migration of the solid-liquid interface, and the movement of inclusions could be adequately characterized in-situ.
Based on the in-situ observation results, on the one hand, the microstructure characteristics of the simulated co-fusion samples could be summarized and compared with suspected archaeological co-fusion samples; on the other hand, the results provide a reference for the in-depth restoration of ancient co-fusion technology. Experimental Experimental design basis. The basis of the simulation experiment design was crucial. This experiment was designed based on the record of co-fusion steelmaking technology in Shen Kuo's The Dream Pool Essays 26 , a famous work of ancient Chinese literature, and was adapted to laboratory conditions. Since ancient Chinese craftsmen usually used charcoal as fuel and the furnace structure was relatively small, the upper limit of the simulation experiment temperature was set to 1200 °C 27 . "Bending wrought iron, inserting cast iron" meant that the wrought iron was processed into a suitable shape to ensure the formation of a diffusion interface between cast iron and wrought iron. The purpose of "sealing with clay and refining" was to form a relatively independent atmosphere during smelting to prevent oxidation of the raw materials. The ideal condition for the simulation experiment was to use argon as the protective gas throughout the process. "Forging the materials" was intended to join the raw materials tightly by physical forging at a suitable temperature. The formation of a solid-liquid interface was more conducive to the migration of carbon atoms between the cast iron zone and the wrought iron zone at high temperature. Although the materials appeared tightly joined to the naked eye under ancient technological conditions, there might still be a gap at the interface, so it was reasonable to allow a gap of less than 0.1 mm between the cast iron zone and the wrought iron zone in the sample. Samples preparation.
The physical and chemical changes of the raw materials in the simulation experiment were observed in-situ, using ultra-high-purity white cast iron with trace Si and S and industrial wrought iron as raw materials. The composition and process mainly referred to Han 27 and Wagner 28 . The chemical composition of the raw materials selected in the experiment is shown in Table 1. The cast iron used in the experiment was processed into a cylinder with a diameter of 4 mm and a height of 1 mm, and the wrought iron was processed into a cylinder with a diameter of 7.5 mm and a height of 1.5 mm. A suitable recess was cut along the centre of the upper surface, and the cast iron was embedded into the wrought iron. According to this scheme, the volume of the cast iron was 12.56 mm³ and the volume of the wrought iron was 53.67 mm³, giving a cast-to-wrought iron ratio of 1:4.25. The size and shape are shown in Fig. 1. Experimental method. The HT-CLSM used in the experiment was mainly composed of two parts: the laser confocal imaging system and the heating system. The laser confocal imaging system used a VL2000DX violet laser with a wavelength of 408 nm, and the resolution could reach 450 nm at high temperature. The heating system could be temperature-programmed; the minimum heating rate was 0.1 °C/s and the maximum heating rate was 300 °C/min. In this experiment, images were recorded automatically at a scanning speed of 15 frames per second and a magnification of 100 times. The prepared sample was cleaned, mounted, and polished, then removed from the mounting mould and placed into the heating furnace of the high-temperature confocal scanning microscope. Due to the limitation of ancient furnace temperature conditions, the upper limit of the simulation experiment temperature was controlled at 1200 °C 27 . The sample was heated to 1200 °C (at 115 °C/min), held for 1 h, and cooled at a rate of 130 °C/min.
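As a quick arithmetic check, the quoted volumes follow from the cylinder formula V = πr²h, with the embedded cast-iron volume subtracted from the wrought-iron cylinder (a minimal sketch using only the dimensions given above):

```python
import math

def cylinder_volume(diameter_mm, height_mm):
    """Volume of a cylinder in mm^3 from its diameter and height."""
    r = diameter_mm / 2
    return math.pi * r ** 2 * height_mm

cast = cylinder_volume(4, 1)                # cast iron insert: d = 4 mm, h = 1 mm
wrought = cylinder_volume(7.5, 1.5) - cast  # wrought iron cylinder minus the recess
print(round(cast, 2), round(wrought, 2), round(wrought / cast, 2))  # -> 12.57 53.7 4.27
```

This reproduces the quoted 12.56 mm³, 53.67 mm³ and 1:4.25 to within rounding.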
Moreover, to prevent the adverse effects of sample oxidation, the sample chamber was filled with high-purity Ar as the protective gas. The heating rate mainly refers to data for a steel ingot placed in an open charcoal furnace during a simulation experiment in a sword workshop. The cooling rate mainly refers to the cooling rate of a steel ingot in air. The authors acknowledge that argon as a protective atmosphere did not exist in ancient times; however, the importance of controlling a relatively independent atmosphere is often mentioned in the literature on the ancient co-fusion process. In-situ observation results Phase transformation in the heating process. In theory, several phase transformations were involved under the experimental conditions: the transformation of Fe₃C (cementite), α (ferrite) to γ (austenite), and partly to L (liquid) in the cast iron zone; no transformation to L in the wrought iron zone; and the phase transformations occurring during carburization and decarburization of the two raw materials. As the temperature rose towards 1100 °C, the first phase transformation was from α to γ. The microstructure of the cast iron zone, P (pearlite) + Fe₃C_II (secondary cementite) + Ld' (modified ledeburite) at room temperature, transformed to γ + Fe₃C_II + Ld (ledeburite) with increasing temperature. The Fe-C phase diagram gives a transformation temperature of 727 °C, slightly higher than the observed temperature. The wrought iron zone mainly transformed from α to γ. The microstructures of the sample at 650 °C, 800 °C, 950 °C, and 1100 °C were observed in-situ, and the results are shown in Fig. 2. According to Fig. 2a, the microstructures of the cast iron zone and the wrought iron zone at 650 °C were essentially the same as at room temperature, with no visible phase transformation. In Fig.
2b, there was no apparent phase transformation at 800 °C in the wrought iron zone, which is related to its low carbon content. In the cast iron zone, Ld' → Ld and P → γ, and part of the Fe₃C_II dissolved in γ; compared with Fig. 2a, the carbides were visibly dissolved. On further heating to 950 °C, as shown in Fig. 2c, α → γ in the wrought iron zone, while the cast iron zone was mainly γ + Fe₃C_II + Ld. Up to 1100 °C, the microstructure of the sample was basically unchanged. Based on the Fe-C phase diagram, the eutectic temperature of cast iron is 1148 °C. Because the indicated temperature may deviate from the actual surface temperature of the sample in the simulation experiment, attention was paid when the temperature rose above 1150 °C. When the furnace temperature rose to 1170 °C, the cast iron zone began to fuse, as shown in Fig. 3a. This phenomenon was first seen at the boundary of the cast iron zone. On rising to 1180 °C, as shown in Fig. 3b, the width of L at the boundary increased further, and the carbides in the cast iron zone became active. When the temperature exceeded 1190 °C, as shown in Fig. 3c, the network carbides in the cast iron zone fused, and two phases could be distinguished: the fused carbides and the internal γ. On heating to 1200 °C, the carbides in the cast iron zone fused completely, and the phase transformed to L + γ (Fig. 3d). Solidification process. Unlike solidification with decreasing temperature, the cast iron zone in the sample solidified due to decarburization under the experimental conditions. After smelting at 1200 °C for 15 min, the in-situ observation was as shown in Fig. 4. At 1200 °C, the cast iron zone solidified slowly due to decarburization. The boundary solidified first to form a transition region, and the carbon content in this region was governed by the diffusion law. Combined with the solidification results in Fig.
4a-d, the distance of the solid-liquid interface from the boundary was obtained, and the fitting results are shown in Fig. 5. The results showed a linear relationship between the migration distance of the solid-liquid interface and time. According to Fig. 5, the migration rate of the solid-liquid interface was 0.40 μm/s, and the coefficient of determination of the fit, R², was 0.9955. The migration distance reflected the change of the carbon content at a given point in the cast iron zone, because the solidification mode of the sample was governed by the carbon content. According to the Fe-C phase diagram, when the carbon content of the cast iron zone decreased below 1.8% at 1200 °C, the sample began to solidify. This can be understood from the solution of the diffusion equation, in which the concentration C at a point is expressed through the error function erf(x/(2√(Dt))), whose values can be found in standard tables; here x is the distance of the point from the diffusion interface, t is the reaction time, and C is the concentration at that point, in this experiment the carbon concentration. For a given concentration C, the migration distance ΔL is directly proportional to x². Thus, with the diffusion coefficient D fixed in the simulation experiment, the distance of the solid-liquid interface from the boundary has a linear relationship with time, which was consistent with the experimental results. Aggregation and movement behaviour of inclusions. Unlike the term "inclusions" in the modern steel industry, inclusions in ancient steels differed greatly in size, number, and composition, because control of the various parameters in the ancient steel smelting process could not be compared with that of modern times. In this experiment, even though high-purity Ar gas was maintained during the observation, surface oxidation was unavoidable.
Therefore, the appearance and movement of inclusions caused by oxidation in the simulated sample were essential for analyzing the microstructure of co-fusion artifacts. In-situ observation of the aggregation and movement behavior of inclusions in the co-fusion process helps in understanding the size, shape, and distribution of inclusions in archaeological co-fusion samples at room temperature. Because the cast iron zone fused, the inclusions floated, aggregated, and migrated in the simulation experiment. The results are shown in Fig. 6. According to Fig. 6a-c, the small inclusions in the cast iron zone initially formed large inclusions after collision and aggregation, then moved to the solid-liquid interface under the action of liquid surface tension; the phenomenon of inclusion aggregation began to appear. In Fig. 6d-f, the inclusions aggregated near the center of the cast iron zone and then moved to the solid-liquid interface. With the continuous outward movement of inclusions in the cast iron zone, few inclusions remained in the center and the matrix became pure, whereas inclusions gathered in large quantities at the boundary. Analysis and discussion Analysis of in-situ observation results. The in-situ observation and analysis of the simulated co-fusion samples during heating revealed that when the temperature was lower than the fusing point of cast iron, a solid-gas-solid interface was formed. The carbon atoms in the cast iron zone were affected by temperature and by this interface, and the diffusion rate was slow. As the temperature rose to the liquid-phase transformation of the carbides in cast iron, the boundary of the cast iron zone fused first. A solid-liquid interface formed between the zones, which reduced the free energy of the whole system 30 and increased the diffusion rate of carbon atoms from the cast iron zone to the wrought iron zone.
Therefore, during the heating process a new interface formed between the zones with the phase transformation, which had a significant influence in improving the carbon diffusion rate in co-fusion. Diffusion bonding and the formation of the new solid-liquid interface were essential features of the heating stage of the co-fusion process. Based on convection and diffusion theory, the composition of the transition area undergoes complex changes after diffusion. Under ideal conditions, the reaction-diffusion rate constant at the interface is equivalent to the surface reaction rate constant, expressed as the surface reaction law 31 . Under the same concentration difference and liquid-solid interface conditions, the higher the temperature, the faster the diffusion from the liquid phase to the transition area, and the faster the transition area diffused into the wrought iron zone. After the formation of the new interface, a relatively steep carbon concentration gradient existed between the zones under the prevailing conditions of temperature, interface, and composition. Carbon atoms diffused rapidly from the cast iron zone to the wrought iron zone. The decrease of carbon concentration in the cast iron zone resulted in the rapid solidification of the transition area. For inclusions, observable behaviors of precipitation, collision, and movement occurred in the cast iron zone. The inclusions near the center precipitated again and moved towards the boundary, which purified the matrix. A massive number of inclusions and shrinkage cavities accumulated at the boundary. No visible movement of inclusions was observed in the wrought iron zone. The difference in inclusion size and quantity between the center and the boundary of the cast iron zone, a feature not found in the wrought iron zone, was a vital microstructural feature of co-fusion samples.
Microstructure of ancient co-fusion samples from the results of simulation experiments. The analysis and determination of the microstructural characteristics of ancient co-fusion samples is one of the critical problems in Chinese metallurgical history, and the in-situ observation results might provide new evidence for this problem. Based on the metallographic analysis of the samples at room temperature, shown in Fig. 7, the question is discussed from the aspects of microstructure and inclusions. From the perspective of microstructure, the in-situ observation of the co-fusion sample showed that a new interface formed between the zones, which significantly promoted the carbon diffusion rate. With the diffusion of carbon from cast iron to wrought iron, the cast iron zone solidified due to decarburization. Because the different regions underwent the same process transformation, the final microstructure of the sample was determined more by the initial state. A carbon concentration gradient might persist between different areas after diffusing for a while, and an evident carbon diffusion phenomenon might appear at the interface of the zones. Based on these results, the study of the microstructure of ancient co-fusion samples should focus on the difference in carbon content and the phenomenon of carbon diffusion in different regions. From the perspective of inclusions, there was a significant difference between the cast iron zone and the wrought iron zone in co-fusion. The simulation experiment showed that, owing to the fusion and decarburization of the cast iron zone, the inclusions tended to move towards the boundary after aggregation, while the matrix remained quite pure; this phenomenon was not observed in the wrought iron zone. This suggests that the size and distribution of inclusions in different regions might differ significantly.
First of all, unlike modern cast iron and wrought iron, the composition, size, and quantity of inclusions in ancient irons were quite different 25 . Secondly, the inclusions changed further during co-fusion: the inclusions in the cast iron zone gathered towards the boundary, the matrix became pure, and further forging would extrude the inclusions at the boundary. There was no visible movement of inclusions in the wrought iron zone, but the solubility of elements would differ with the change of carbon content, leading to differences in number, size, and shape. Combined with the micro-analysis of iron artifacts in previous studies, the results of the simulation experiments correspond well with the characteristics of archaeological co-fusion samples. In terms of microstructure, archaeological co-fusion samples contain many characteristics related to cast iron [9][10][11][12]32 . However, the simulation experiments showed characteristics of homogenization. This was due to the long holding time at high temperature in the simulation experiments, whereas the archaeological samples might have had a short heat-preservation time. On the other hand, the diffusion phenomenon between the interfaces was quite evident. In terms of inclusions, they were rare in the matrix of the cast iron zone and dispersed in the wrought iron zone, while numerous inclusions were concentrated at the interface. This phenomenon can be seen in both archaeological artifacts and simulation experiment samples. Conclusions 1. High-temperature confocal laser scanning microscopy can be used for in-situ observation to understand the physical and chemical changes of the co-fusion steelmaking process. It has great potential for real-time analysis of the phase transformations during heating, the formation of new phases during solidification, and the aggregation and migration of inclusions.
Combined with the analysis of the simulated co-fusion samples at room temperature, the micro-characteristics of ancient co-fusion samples can be discussed. 2. In the heating process, a new solid-liquid interface formed between the cast iron zone and the wrought iron zone, which significantly influenced the carbon diffusion rate. With the diffusion of carbon, the new solid-liquid interface gradually migrated towards the center at a specific rate. The final solidification mode was mushy solidification. The inclusions in the cast iron zone floated, aggregated, and migrated to the solid-liquid interface. As a result, the matrix of the cast iron zone was pure, and a large quantity of inclusions gathered at the boundary. The in-situ observation results provide an essential basis for further restoration of ancient co-fusion steelmaking technology. Data availability The data are available in the form of Excel files from qiaoshangxiao115@163.com on e-mail request.
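The migration-rate fit reported earlier (Fig. 5: 0.40 μm/s, R² = 0.9955) is an ordinary least-squares line through (time, migration distance) points. A minimal sketch is below; the data points are hypothetical (the measured values appear only in the figure) and are chosen to mimic a rate of about 0.40 μm/s:

```python
def linear_fit(ts, ys):
    """Least-squares slope, intercept and R^2 for y ~ slope*t + intercept."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
             / sum((t - mt) ** 2 for t in ts))
    intercept = my - slope * mt
    ss_res = sum((y - (slope * t + intercept)) ** 2 for t, y in zip(ts, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

times = [0, 60, 120, 180, 240]          # s
dists = [0.0, 24.5, 47.8, 72.1, 96.0]   # um, hypothetical interface positions
slope, intercept, r2 = linear_fit(times, dists)
print(round(slope, 3))  # -> 0.399, i.e. close to the reported ~0.40 um/s
```

The same calculation on the actual Fig. 5 data would give the paper's 0.40 μm/s and R² = 0.9955.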
2020-11-18T14:07:00.830Z
2020-11-16T00:00:00.000
{ "year": 2020, "sha1": "07a2932ca3aa82d292679deb0962376d3c62b82e", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-76326-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a479eb4cb8d0d739ad14648dcb615dbba4ce21c8", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science", "Medicine" ] }
253350367
pes2o/s2orc
v3-fos-license
6-Nitro-2,3-bis(thiophen-2-yl)quinoxaline The structure of the title nitrobis(thiophen-2-yl)quinoxaline has been determined at 298 K. The title compound, C16H9N3O2S2, was synthesized via a condensation reaction in refluxing acetic acid. One thienyl ring is nearly coplanar with the quinoxaline unit [dihedral angle = 3.29 (9)°], the other makes an angle of 83.96 (4)°. Structure description 6-Nitro-2,3-bis(thiophen-2-yl)quinoxaline crystallizes in space group P2₁/c. All bond lengths and angles are within expected values. Unlike in the related molecule 5-nitro-2,3-bis(thiophen-2-yl)quinoxaline (de Freitas et al., 2020), one thienyl ring and the nitro group in the title compound are nearly coplanar with the quinoxaline moiety. The nitro group makes a dihedral angle of 7.76 (14)° with respect to the mean plane of the quinoxaline unit. A survey of the literature on other 6-nitroquinoxalines reveals that the nitro group is routinely nearly coplanar. The two thienyl rings make dihedral angles of 83.96 (4)° and 3.29 (9)°, for the rings containing S1 and S2 respectively, with the mean plane of the quinoxaline unit. The sulfur atom of the coplanar thienyl ring is closer in proximity to the quinoxaline nitrogen atom, in the trans arrangement of Du & Zhao (2003). The other thienyl ring is nearly perpendicular to the plane of the quinoxaline, barely adopting the aforementioned authors' cis arrangement. There are no intermolecular interactions of consequence. An ORTEP view is shown in Fig. 1 and a view of the unit cell along (010) is shown in Fig. 2. Refinement Crystal data, data collection and structure refinement details are summarized in Table 1. Funding information This research was funded by a CCSU-AAUP research grant. Figure 1: A view of 6-nitro-2,3-bis(thiophen-2-yl)quinoxaline (Farrugia, 2012); displacement ellipsoids are drawn at the 50% probability level. [Refinement statistics: weighting with P = (F_o² + 2F_c²)/3; (Δ/σ)_max = 0.001; Δρ_max = 0.51 e Å⁻³; Δρ_min = −0.33 e Å⁻³.] Special details Geometry.
All e.s.d.s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.s are taken into account individually in the estimation of e.s.d.s in distances, angles and torsion angles; correlations between e.s.d.s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.s is used for estimating e.s.d.s involving l.s. planes. IUCrData (2020). 5, x200203 Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. H atoms were included in calculated positions with C-H distances of 0.93 Å and were refined in the riding-motion approximation with U_iso = 1.2U_eq of the carrier atom.
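The dihedral angles quoted above are angles between mean planes, computed from the planes' normal vectors. A minimal geometric sketch (using planes through three points each rather than full least-squares planes, and with no real atomic coordinates) is:

```python
import math

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three points (e.g. atomic positions)."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [c - a for a, c in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

def dihedral_angle(n1, n2):
    """Acute angle in degrees between two planes, given their unit normals."""
    cosang = abs(sum(a * b for a, b in zip(n1, n2)))
    return math.degrees(math.acos(min(1.0, cosang)))

# Demonstration with two perpendicular planes (z = 0 and x = 0):
n1 = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
n2 = plane_normal((0, 0, 0), (0, 1, 0), (0, 0, 1))
print(round(dihedral_angle(n1, n2), 6))  # -> 90.0
```

For real structures, refinement programs fit least-squares mean planes through all ring atoms before taking the angle between normals; the three-point version above only illustrates the geometry.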
2022-11-06T05:15:29.071Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "4e31abc172c0b2e9a77aabffc7d09833140fb289", "oa_license": "CCBY", "oa_url": "https://iucrdata.iucr.org/x/issues/2020/02/00/ff4034/ff4034.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0520e4841737dce06d038bba30692adaf6468f86", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
245588425
pes2o/s2orc
v3-fos-license
Correlation Analysis of circRNA Circ_0071662 in Diagnosis and Prognosis of Esophageal Squamous Cell Carcinoma Background The role of circRNA circ_0071662 has been studied in bladder cancer. The present study aimed to analyze its involvement in esophageal squamous cell carcinoma (ESCC). Methods Patients with ESCC (n = 66), esophageal ulcer (EU, n = 66), or gastroesophageal reflux disease (GERD, n = 66) and healthy controls (n = 66) were enrolled in this study. Plasma samples were collected from all patients and controls. ESCC and paired non-tumor tissue samples were collected from ESCC patients. Circ_0071662 levels in these samples were determined by RT-qPCR. Diagnostic and prognostic values of circ_0071662 for ESCC were analyzed with ROC curve and survival curve analyses. Results The circ_0071662 level was decreased in ESCC, but not in GERD and EU, compared to the controls, and in ESCC tissues compared to the non-tumor tissues. Plasma circ_0071662 was closely correlated with patients' tumor size but not with other clinical features. Decreased plasma circ_0071662 levels separated ESCC patients from GERD patients, EU patients, and healthy controls. Low plasma circ_0071662 levels were closely correlated with worse survival outcomes of ESCC patients. Conclusion Circ_0071662 is lowly expressed in ESCC and may serve as a potential diagnostic and prognostic biomarker for ESCC. Introduction Esophageal squamous cell carcinoma (ESCC) originates from the flat, thin cells lining the inside of the esophagus. 1 ESCC accounts for more than 90% of esophageal cancer, the sixth most common cancer worldwide. 1 ESCC is most frequently observed in the middle and upper part of the esophagus but can also affect other parts of the esophagus. 2,3 ESCC patients in general have poor survival, although overall survival is significantly affected by stage.
4 It is reported that about 47% of patients with ESCC detected at the local stage can survive longer than 5 years, while only 25% of ESCC patients with regional metastasis and 5% of ESCC patients with distant metastasis can survive 5 years. 5,6 Therefore, the key to the survival of ESCC patients is still early diagnosis. 7 One of the major causes of delayed diagnosis of ESCC is misdiagnosis. 8,9 Several clinical disorders, such as esophageal ulcer (EU) and gastroesophageal reflux disease (GERD), can mimic the symptoms of ESCC, 10,11 leading to common misdiagnosis. Although different clinical disorders may share similar clinical presentations, they may involve different molecular factors. Therefore, certain differentially expressed factors may serve as biomarkers to distinguish ESCC from other clinical disorders. 12,13 CircRNAs encode no protein products, but they are involved in the regulation of protein synthesis and degradation, thereby exerting oncogenic or tumor-suppressive functions. 14 Therefore, circRNAs may be applied as diagnostic biomarkers for ESCC. The role of circRNA circ_0071662 has been studied in bladder cancer. 15 The present study aimed to analyze its involvement in ESCC. Participants and a 5-Year Follow-Up The study was approved by the Ethics Committee of the Third Affiliated Hospital of Chongqing Medical University and included 66 ESCC patients (43 males and 23 females, 55.9±7.5 years), 66 esophageal ulcer (EU) patients (43 males and 23 females, 55.7±7.1 years), 66 gastroesophageal reflux disease (GERD) patients (43 males and 23 females, 55.8±7.4 years) and 66 healthy controls (43 males and 23 females, 55.8±8.0 years) who were admitted to the hospital. All ESCC patients were diagnosed by multiple approaches, including imaging techniques and biopsies. EU and GERD patients were diagnosed according to standard methods. The inclusion criteria were 1) newly diagnosed cases and 2) no therapies had been initiated.
The exclusion criteria were 1) patients with blood relationships, 2) patients complicated with severe infections or other severe diseases, and 3) pregnant women. Healthy controls were enrolled if they completed a systematic physical examination, including height, weight, vision, blood routine, urine routine, stool routine, lung function test, liver and kidney function test, blood pressure test, blood sugar and blood lipid test, and electrocardiography, and had normal physiological functions. All participants signed informed consent. From the day of admission, the 66 ESCC patients were visited monthly via telephone or outpatient visits to monitor their survival. All 66 patients completed the follow-up or died of ESCC during the follow-up; none died of other causes. Preparation of Tissue and Plasma Samples After admission, paired non-tumor and ESCC tumor specimens were collected from all 66 ESCC patients either through biopsy or by dissecting the resected primary tumors. Blood (3 mL) samples were extracted from the elbow vein of both patients (ESCC, EU, and GERD) and healthy controls after fasting overnight and used to isolate plasma samples by conventional methods. RNA Extraction and Purification RNA samples were immediately isolated from the collected samples using the Monarch Total RNA Miniprep Kit (T2010, NEB). In brief, samples were lysed, loaded onto gDNA removal columns, and centrifuged at 12,000 g for 10 min. The flow-through RNA samples were then loaded onto RNA purification columns, treated with DNase I, washed with washing buffer, and eluted with RNase-free water. Finally, the RNA samples were subjected to quality and quantity analysis using a 2100 Bioanalyzer. RT-qPCR RNA samples with a RIN value higher than 8.5 were diluted to about 1500 ng/μL with RNase-free water. After that, 4 μL RNA samples were used to prepare cDNA samples through reverse transcription using a High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems).
Circ_0071662 levels were determined by qPCR with 18S rRNA as the internal control using FastStart Universal SYBR Green Master (Roche). Relative gene expression levels were normalized using the 2^−ΔΔCt method. Statistical Analysis The SPSS software program (15.0) was applied to analyze data and plot images. With ESCC patients as the true positive cases and EU, GERD, or controls as the true negative cases, ROC curve analysis was carried out to analyze the role of plasma circ_0071662 in the diagnosis of ESCC. The 66 ESCC patients were grouped into high and low plasma circ_0071662 level groups (n = 33) with the median plasma circ_0071662 level in ESCC patients as the cutoff value, and the chi-squared test was applied to explore the associations between plasma circ_0071662 and patients' clinical factors. The survival curves of patients in both circ_0071662 level groups were plotted based on the 5-year survival data and compared using the log-rank test. P < 0.05 was considered statistically significant. The results showed that circ_0071662 levels were lower in ESCC tissues than in non-tumor tissues (Figure 1A, p < 0.01), and the plasma circ_0071662 levels were also decreased in ESCC patients, but not in GERD and EU patients, compared with the healthy controls (Figure 1B, p < 0.01). Associations Between Plasma Circ_0071662 and Patients' Clinical Factors The 66 ESCC patients were assigned into high and low plasma circ_0071662 level groups, and the chi-squared test was applied to explore the associations between plasma circ_0071662 and patients' clinical factors. Interestingly, plasma circ_0071662 was closely correlated with patients' tumor size but not other clinical features (Table 1). Therefore, it is reasonable to hypothesize that circ_0071662 is mainly involved in the growth of ESCC tumors.
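The 2^−ΔΔCt normalization used for the qPCR data above can be sketched as follows; the Ct values in the example are hypothetical, not measured values from this study:

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene by the 2^-ddCt method.

    dCt  = Ct(target) - Ct(reference gene), within each sample;
    ddCt = dCt(test sample) - dCt(control sample).
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    return 2 ** -(d_ct_sample - d_ct_control)

# Hypothetical Ct values: circ_0071662 vs 18S rRNA in a tumor and a control sample.
fold = relative_expression(30.0, 12.0, 28.0, 12.0)
print(fold)  # -> 0.25, i.e. four-fold lower expression in the tumor sample
```

A higher Ct means the target crossed the detection threshold later, i.e. less template was present, which is why a larger ΔΔCt maps to a smaller fold change.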
Diagnostic Values of Plasma Circulating Circ_0071662 for ESCC With ESCC patients as the true positive cases and other participants (EU, GERD, or controls) as the true negative cases, ROC curve analysis was carried out to analyze the role of plasma circulating circ_0071662 in the diagnosis of ESCC. Our data analysis revealed that decreased circ_0071662 plasma levels could be used to effectively separate ESCC patients from GERD patients (Figure 2A), EU patients (Figure 2B) and healthy controls (Figure 2C). Therefore, decreased plasma circ_0071662 may be applied in clinical practice to increase the diagnostic accuracy of ESCC. Analysis of the Role of Plasma Circulating Circ_0071662 in Predicting the Survival Outcomes of ESCC Patients The 66 ESCC patients were grouped into high and low plasma circ_0071662 level groups. The survival curves of both circ_0071662 level groups were plotted based on the 5-year survival data and compared using the log-rank test. Low plasma circ_0071662 levels were closely correlated with worse survival outcomes (Figure 3). Discussion The present study mainly analyzed the role of circ_0071662 in ESCC. We showed that circ_0071662 was lowly expressed in ESCC but not in EU and GERD. Moreover, decreased plasma circ_0071662 levels could be used to assist the diagnosis and prognosis of ESCC. A recent study characterized circ_0071662 as a novel tumor suppressor in bladder cancer, in which circ_0071662 was found to be lowly expressed and could sponge miR-146b-3p to suppress its role, thereby decreasing tumor growth and metastasis. 15 To the best of our knowledge, the involvement of circ_0071662 in other cancers is unclear. The present study showed that circ_0071662 is specifically downregulated in ESCC but not in EU and GERD, compared to the healthy controls. Therefore, circ_0071662 may be specifically involved in ESCC.
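The ROC analysis of a continuous marker reduces to ranking positives against negatives: the area under the curve equals the probability that a randomly chosen case is ranked as more suspicious than a randomly chosen control. A minimal sketch for a marker that is decreased in disease, with hypothetical plasma levels (not data from this study):

```python
def auc(pos, neg):
    """AUC for a marker *decreased* in disease: the probability that a random
    positive scores below a random negative; ties count as 0.5."""
    wins = sum((p < n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

escc = [0.4, 0.5, 0.6, 0.7]   # hypothetical relative plasma circRNA levels
ctrl = [0.9, 1.0, 1.1, 0.6]
print(auc(escc, ctrl))  # -> 0.90625
```

An AUC of 0.5 means the marker carries no diagnostic information; values near 1.0 mean the two groups are almost perfectly separated.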
Although we did not analyze the role of circ_0071662 in the development and progression of ESCC, we showed that the plasma circ_0071662 level is closely correlated only with ESCC tumor size, but not with ESCC tumor differentiation and invasion. Therefore, circ_0071662 is likely involved only in the growth, but not the metastasis, of ESCC tumors. However, this speculation is purely based on statistical analysis; future studies with in vitro cell and/or in vivo animal models are needed to confirm our conclusions. In clinical practice, ESCC is frequently misdiagnosed as EU and GERD, which can mimic the symptoms of ESCC, leading to delayed diagnosis and the development of tumor metastasis. The present study showed that decreased plasma circ_0071662 levels could effectively separate ESCC patients from EU and GERD patients. Therefore, measuring plasma circ_0071662 levels prior to treatment may increase the diagnostic accuracy of ESCC. Moreover, plasma circ_0071662 was found to be closely correlated with the worse survival outcomes of ESCC patients. Our data suggest that plasma circ_0071662 could be used in clinical practice to identify patients with a high risk of death, thereby allowing the design of personalized treatment approaches to prolong survival. Although previous studies have reported the role of circRNAs as biomarkers for ESCC, these studies failed to include other diseases that can mimic the symptoms of ESCC. 16,17 Our study is the first to report the application of a circRNA to distinguish ESCC from EU and GERD. This study is limited by the small sample size, and our conclusions should be verified by future studies. In addition, functional assays are needed to explore the function of circ_0071662 in ESCC. Conclusion Circ_0071662 is lowly expressed in ESCC, and decreased plasma circ_0071662 may serve as a potential diagnostic biomarker to separate ESCC patients from EU and GERD patients. In addition, patients with low circ_0071662 levels may have poor prognoses.
Data Sharing Statement

The analyzed data sets generated during the study are available from the corresponding author on reasonable request.

Ethical Approval and Consent to Participate

All patients signed written informed consent. All procedures were approved by the Ethics Committee of the Third Affiliated Hospital of Chongqing Medical University and carried out in keeping with the standards set out in the Declaration of Helsinki and the Laboratory Guidelines of Research in China.
Functional network analysis of gene-phenotype connectivity associated with temozolomide

Rationale: Glioma has a poor survival rate even with aggressive treatment. Temozolomide (TMZ) is the standard chemotherapeutic choice for treating glioma, but TMZ treatment frequently leads to resistance.

Aim: To investigate the underlying mechanisms of TMZ action and inform new therapeutic regimens in glioma.

Methods and results: The biological effects of TMZ mainly depend on the three following DNA repair systems: methylguanine methyltransferase (MGMT), mismatch repair (MMR) and base excision repair (BER). Based on related genes in these three systems, web-based tools containing data compiled from open-source databases, including DrugBank, STRING, WebGestalt and ClueGO, were queried, and five common genes along with the top fifteen pathways, including the glioma pathway, were identified. A genomic analysis of the six genes identified in the glioma pathway by cBioPortal indicated that TMZ might exert biological effects via interaction with the tumor protein p53 (TP53) signaling axis. Finally, a survival analysis of the six genes in glioma cases (low-grade glioma and glioblastoma multiforme) was conducted using OncoLnc, which might provide directions for the future exploration of prognosis in glioma.

Conclusions: This study indicates that a functional network analysis resembles a "BioGPS": a web-based scientific map that can productively and cost-effectively associate TMZ with its primary and secondary biological targets.
INTRODUCTION

Glioma is the most common primary brain tumor in adults and has the highest degree of malignancy [1]. Currently, the standard therapy for glioma is maximal surgical resection followed by fractionated radiotherapy along with recurrent adjuvant treatment using alkylating agents such as nitrosoureas, procarbazine and, more recently, temozolomide (TMZ) [2,3]. TMZ is a monofunctional cytostatic agent that has been synthesized at Aston University since 1984 [4]. TMZ has been reported as a clinical treatment for glioblastoma multiforme (GBM) and metastatic melanoma [5]. Recently, TMZ has also been studied in refractory acute leukemia with promising results [6,7].

The action of TMZ proceeds in two stages: DNA damage and engagement of the DNA repair systems. In general, DNA repair systems are in charge of maintaining the genome integrity of cellular organisms, as they are able to neutralize DNA damage induced by various chemical and physical agents [8]. Therefore, for a better understanding of the molecular mechanisms of the biological effects of TMZ and of target-cell resistance to these agents, it is necessary to consider the underlying pathways of DNA repair. Like many other alkylating agents, the cytotoxic effects of TMZ are believed to arise largely from methylation at the O6 position of guanine [9]. Consequently, the primary mechanism of resistance to TMZ is a function of the activity of the DNA repair enzyme O(6)-alkylguanine-DNA alkyltransferase, also called methylguanine methyltransferase (MGMT) [11]. During the process of DNA repair, MGMT also engages the DNA mismatch repair system and the base excision repair pathway [10,11]. Thus, the biological effects of TMZ depend on at least three DNA repair systems [9]. (1) MGMT is a DNA repair protein encoded by the MGMT gene and is involved in the cellular defense against toxicity and mutagenesis from alkylating agents [12]. (2) The mismatch repair (MMR) system consists of a protein complex that contributes to the repair of biosynthetic errors
generated during DNA replication. The MMR system recognizes and repairs not only base mismatches but also insertion and deletion loops [13]. (3) The base excision repair (BER) system, similar to the MMR system, is a protein complex that controls damage from cellular metabolic lesions (such as oxidation or methylation of DNA bases) and from base modifications induced by physical or chemical agents [14].

Over the past decade, TMZ was shown to significantly improve outcomes in patients with GBM when administered concomitantly with radiotherapy and as a maintenance strategy thereafter [15]. Nevertheless, despite intense therapy, the average survival of GBM patients is only 12-18 months [16,17]. One critical barrier to the effective treatment of malignant glioma is resistance to TMZ, which has attracted significant scientific interest. As demonstrated previously, the biological effects of TMZ mainly depend on three DNA repair systems. However, the primary and secondary targets of TMZ and the interactions between the target genes of these three DNA repair systems remain unclear.
With the advancements in multicenter genomics studies, including microarrays, proteomics and high-throughput screening assays, exploratory network-based studies have been used to analyze massive amounts of data across numerous human diseases. In this study, we first employed DrugBank in a broad study of TMZ and drug-target information. Based on the three DNA repair systems, we identified five common targets using the STRING database. Then, the targets and associated genes were further explored through pathway enrichment analysis using the WebGestalt and ClueGO databases; the glioma pathway was screened, and related genes were identified to further explore genomic alterations using the cBio Cancer Genomics Portal (cBioPortal) database. Overall, our study based on these functional network analyses provides valuable information for exploring the underlying mechanisms of TMZ action against glioma and for identifying potential targets to overcome TMZ resistance in glioma.

Characterization of TMZ bioactivity using DrugBank and visualization of TMZ linkage networks using STRING

DrugBank was first queried using TMZ as an input, which returned entry DB00853, categorizing TMZ among alkylating agents, antineoplastic agents, imidazoles, immunosuppressive agents, noxae, toxic actions and triazenes (Table 1). Furthermore, as a Food and Drug Administration (FDA)-approved drug, the indications of TMZ fall into 3 classes: (1) treatment of adult patients with anaplastic astrocytoma after nitrosourea and procarbazine therapy; (2) concomitant administration with radiation treatment for patients with newly diagnosed GBM; and (3) maintenance therapy for patients with GBM (Table 2). As mentioned previously, the biological effects of TMZ primarily depend on three DNA repair systems: MGMT, MMR and BER. The MMR and BER systems consist of protein complexes; the MMR system includes the following 8 genes: MRC1, ATP9B, MSH6, MSH3, EXO1, MLH1, PMS2 and MSH2, while the BER
system includes the following 14 genes: RAD1, RAD9A, HUS1, NTHL1, PARP1, PARP2, PARP3, PNKP, APEX1, POLL, TDG, FEN1, USP47 and APEX2 (Table 3). Next, we analyzed MGMT-related genes using STRING, and a total of 50 target protein interactions were identified (Supplementary Table 1). No gene common to all 3 DNA repair systems was detected. Therefore, we expanded our search and further analyzed MMR/BER-related protein-protein interactions (PPIs); we ultimately identified five genes in common (MSH2, MSH6, TOP2B, TP53 and XRCC3) that were related to the three DNA repair systems (Figure 1). It should be noted that all five of the common genes have been reported to be directly associated with glioma [18][19][20][21][22], suggesting potential roles of these genes in mediating the action of TMZ against glioma.

Analysis of functional attributes associated with TMZ-mediated changes in gene sets using WebGestalt and ClueGO

To evaluate the functional features of TMZ-mediated gene sets, we performed a Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis using WebGestalt. The top 15 KEGG pathways were identified based on genes associated with the three DNA repair systems (Table 4). These pathways included BER (11 genes), platinum drug resistance (11 genes), pathways in cancer (19 genes), bladder cancer (8 genes), non-small cell lung cancer (8 genes) and MMR (6 genes), among others. Next, to validate the pathways identified by WebGestalt, we additionally performed KEGG pathway enrichment using ClueGO. The glioma pathway was also screened for statistical significance in this analysis (Figure 2A and Table 5). A broad search revealed that 6 genes in the glioma pathway were connected to TMZ-linked genes: CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53 (Figure 2B and Table 5). Notably, among these 6 selected genes (identified by both WebGestalt and ClueGO), TP53 was found to participate in numerous signaling pathways, including apoptosis and several cancer-related pathways (Figure 2C), suggesting a
critical role for TP53 in diverse biological effects mediated by TMZ.

Mining genomic alterations associated with TMZ-related genes in glioma using cBioPortal

To further validate the relationship between TMZ-related genes and the glioma pathway, cBioPortal was used to explore genomic alterations in genes connected with TMZ treatment of glioma. A total of 8 glioma studies were included in cBioPortal [23][24][25][26][27]. Three of these studies were provisional; thus, emphasis was directed to the remaining 5 studies. A query of the 6 selected genes (CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53) identified in the glioma pathway was performed, and these genes were analyzed in the 5 glioma studies. Alterations ranging from 2.1% to 91.5% were detected for the submitted gene sets (Figure 3A). A summary of multiple gene alterations detected across each set of tumor samples showed that the study of Brennan CW exhibited the most pronounced genomic changes among the 5 glioma studies [26]. In this study, 257 cases (91.5%) had alterations in at least one of the 6 genes; the frequency of alterations in each gene is shown in Figure 3B. For CDKN2A (61%), most alterations were classified as deep deletions, with a few cases of truncating mutations. For EGFR (53%), the majority of alterations were amplifications, with a small fraction of missense mutations. For PTEN (31%) and TP53 (22%), the gene changes included deep deletions and truncating, missense and in-frame mutations. For KRAS and HRAS, few alterations were detected, with frequencies of 1.8% and 1.1%, respectively (Figure 3B). An analysis of interactions between genes showed that KRAS and TP53 (Pearson's correlation: 0.36) as well as PTEN and TP53 (Pearson's correlation: 0.32) exhibited co-expression in the GBM samples in the study of Brennan CW (Table 6), while CDKN2A and TP53 (p-value < 0.001) as well as EGFR and TP53 (p-value = 0.001) exhibited mutual exclusivity (Table 7), suggesting a central axis function for TP53 under TMZ control in the glioma pathway.
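The common-gene step described above is, at its core, a set intersection over the STRING neighbourhoods of the three repair systems. A toy sketch with illustrative, abbreviated neighbour sets (not the full 50-interaction STRING output from the study):

```python
# Illustrative subsets of STRING interaction partners for each repair
# system; the real neighbour lists come from the STRING query in the text.
mgmt_neighbours = {"MGMT", "TP53", "MSH2", "MSH6", "TOP2B", "XRCC3", "APEX1"}
mmr_neighbours = {"MLH1", "PMS2", "MSH2", "MSH6", "TP53", "TOP2B", "XRCC3", "EXO1"}
ber_neighbours = {"PARP1", "APEX1", "FEN1", "TP53", "MSH2", "MSH6", "TOP2B", "XRCC3"}

# Genes shared by all three systems; for these toy sets the result
# matches the five common genes reported in the study.
common = mgmt_neighbours & mmr_neighbours & ber_neighbours
```

With real STRING output the same intersection over the full neighbour lists yields the candidate genes to carry forward into pathway enrichment.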
We additionally used cBioPortal to perform an interactive analysis and construct networks of genes altered in cancer. We created a network containing all neighbors of the 6 query genes (Figure 4A). To reduce the complexity of the analysis, we applied the genomic alteration frequency within the selected glioma studies as a filter, such that only neighbors with a high alteration frequency are shown (Figure 4B). First, the 6 selected genes were determined to be associated with CD4 when a filter of ≥16.7% alteration was applied to neighbors. In comparison, 8 genes, including CD4 and PDGFRA, were evident using a filter of ≥13.2% alteration. Ten gene clusters, including CD4, PDGFRA, NF1 and PIK3R1, were demonstrated when the filter was reduced to 11% alteration, while 11 genes, including CD4, PDGFRA, NF1, PIK3R1 and KIT, were revealed with a filter of 10% alteration. The comprehensive and pruned networks revealed the potential interactions as well as the complexity and variability of the interactions between TMZ-linked genes and the GBM samples in the Brennan CW study. Moreover, drugs that target specific genes, including cancer drugs, FDA-approved drugs and others, are also shown in cBioPortal. Figure 4B lists specific cancer drugs acting on EGFR, TP53 and CDKN2A; currently, TMZ is not used to target any of these genes. This network analysis provides a molecular basis for future clinical applications of TMZ targeting selected genes.

Low-grade gliomas (LGGs) in the brain are slower growing than their high-grade counterparts.
LGGs account for 10-20% of all primary brain tumors, and the median survival of LGG patients ranges from 4.7 to 9.8 years [28]. The related literature indicates that TMZ is also indicated for LGG patients with high-risk features [29]. Figure 3A shows that among the LGG studies in cBioPortal, the study of Johnson BE exhibited the most pronounced genomic changes (Figure 3A, column 2) [2,3]. In this study, the gene set was altered in 90.2% of 61 cases. TP53 (90%) exhibited the most prominent changes, which were classified as missense mutations with a few cases of truncating mutations. For CDKN2A, alterations also included missense and truncating mutations. For EGFR, HRAS, KRAS and PTEN, few alterations existed in LGG cases (Figure 5A). Results from the co-expression and mutual exclusivity analyses were not statistically significant (data not shown). Next, a network analysis was performed that included neighbors of the 6 query genes (Figure 5B). Seven genes, including FAT1, were revealed using a filter of ≥13.1% alteration; 10 gene clusters (including FAT1, INPPL1, PIK3CA and TNRC6A) were shown when the filter was reduced to 8.1% alteration.
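The neighbour pruning used in Figures 4B and 5B amounts to thresholding neighbours by their alteration frequency: lowering the cutoff admits progressively more genes into the network. A sketch with hypothetical frequencies (the real values live in the cBioPortal datasets):

```python
# Hypothetical alteration frequencies (%) for neighbour genes; the
# study's actual values come from the cBioPortal GBM/LGG datasets.
neighbour_pct = {
    "CD4": 18.0, "PDGFRA": 13.5, "NF1": 11.5,
    "PIK3R1": 11.2, "KIT": 10.2, "MDM2": 9.0,
}

def prune(neighbours, min_pct):
    """Keep only neighbours whose alteration frequency meets the cutoff."""
    return {gene for gene, pct in neighbours.items() if pct >= min_pct}

strict = prune(neighbour_pct, 16.7)  # only the most frequently altered neighbour
loose = prune(neighbour_pct, 10.0)   # a lower cutoff admits more neighbours
```

The same function applied at successively lower cutoffs reproduces the nested networks described in the text, with each threshold a superset of the stricter one.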
Analysis of survival associated with TMZ-related genes in glioma according to OncoLnc

To explore the association between TMZ-related genes and the survival of patients with glioma, OncoLnc, an integrated data mining system, was used. The 6 selected genes (CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53) identified in the glioma pathway were used to conduct a survival analysis with clinical profiles in glioma. The results indicated that in LGG, a high level of PTEN expression predicted longer survival with statistical significance (log-rank p-value = 0.00521) when patients were classified according to the mean value of the mRNA expression level (Figure 6A-6F). In contrast, in GBM, no gene was statistically significantly associated with patient survival (Figure 7A-7F), which might be attributed to 2 reasons: (1) as shown in Figure 7A-7F, the average survival time for GBM patients was only 500 days, which is consistent with relevant studies; and (2) the OncoLnc analysis was based on a relatively small sample (a total of 152 cases), and the findings could easily be spurious.

DISCUSSION

Since the landmark findings regarding the antineoplastic activity of TMZ reported in 1984 [4], more than 5529 publications on TMZ have appeared on PubMed. A wide range of biological and cellular activities have been identified for TMZ, demonstrating the fascinating nature of this compound across a plethora of related diseases. However, tumor resistance to TMZ is still a critical barrier to the effective treatment of glioma. As such, new analytical means or platforms are required to bridge TMZ with its primary or secondary targets and thereby illustrate the underlying mechanisms of TMZ and its clinical outcomes.
In this study, we performed a functional network analysis using a set of web-based tools. First, we demonstrated the feasibility of analyzing the connectivity between TMZ and cancer using DrugBank, STRING, WebGestalt and ClueGO. As reported in other studies, TMZ acts mainly through 3 DNA repair systems: MGMT, MMR and BER. Based on these 3 DNA repair systems, 5 common genes (MSH2, MSH6, TOP2B, TP53 and XRCC3) were identified using STRING (Figure 1). Notably, TMZ has been reported to induce apoptosis in melanoma cells, and the inactivation of MGMT results in a high level of resistance to TMZ and impairs the expression of MSH2/MSH6 through the overexpression of P53 [30]. Furthermore, other studies have shown that the expression levels of TOP2A/B are significantly higher in human GBM and that TOP2B transcription is corrected in PDGF(+) PTEN(-/-) or PDGF(+) PTEN(-/-) P53(-/-) models by susceptibility to cancer drugs [20]. For XRCC3, a double-strand break repair gene, the Thr241Met polymorphism of XRCC3 has been associated with susceptibility to developing astrocytomas and GBM [31,32]. Therefore, these findings suggested the hypothesis that TMZ might exert anti-tumor effects through MSH2/MSH6/TOP2B/XRCC3 in glioma patients via a regulated interaction with the TP53 signaling axis.

As supporting evidence, the KEGG enrichment analysis conducted using WebGestalt and ClueGO identified the glioma pathway as significantly altered by TMZ-related genes (Table 4). The association between glioma (GBM and LGG) and the beneficial effects exerted by TMZ in cancer was further observed and evaluated using cBioPortal with the 6 genes (CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53) identified in the glioma pathway. Each row of Table 4 lists the following statistics: C, the number of reference genes in the category; O, the number of genes in the gene set that are also in the category; E, the expected number in the category; R, the ratio of enrichment; and p value, the p value from the hypergeometric test.
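The Table 4 statistics can be reproduced from first principles: under the hypergeometric null, a gene set of size n drawn from a reference of size N overlaps a category of size C with expected count E = n·C/N, and the enrichment p-value is the upper tail P(X ≥ O). A sketch with hypothetical sizes (not the study's actual counts):

```python
from math import comb

def enrichment(n_ref, n_cat, n_set, n_hit):
    """Over-representation statistics in WebGestalt's notation:
    C = n_cat (reference genes in the category), O = n_hit (observed
    overlap), E = expected overlap, R = enrichment ratio, and a
    hypergeometric upper-tail p-value P(X >= O)."""
    expected = n_set * n_cat / n_ref
    ratio = n_hit / expected
    p = sum(
        comb(n_cat, k) * comb(n_ref - n_cat, n_set - k)
        for k in range(n_hit, min(n_cat, n_set) + 1)
    ) / comb(n_ref, n_set)
    return expected, ratio, p

# Hypothetical sizes: 20000 reference genes, a 100-gene KEGG pathway,
# a 25-gene TMZ-linked set, 6 of which fall in the pathway.
E, R, p = enrichment(20000, 100, 25, 6)
```

Observing 6 pathway hits where only about 0.125 are expected gives a very small p-value, which is the pattern behind the adjusted p < 0.01 cutoff used in the WebGestalt analysis.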
Figure 2: Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis of temozolomide-associated gene sets performed using ClueGO. (A) The top 15 statistically enriched KEGG pathways and the numbers of genes involved (more details can be found in Table 5). (B) The biological network of temozolomide-linked genes consists of the top 15 statistically enriched KEGG pathways (large circles in different colors), corresponding genes (red) and STRING protein interactions. (C) TP53 (in yellow) is shown as an illustration of complex regulatory networks.

Among the top 15 KEGG pathways, 11 were associated with TP53. Both PTEN and TP53 are tumor suppressor genes [33] that participate in almost all the cancer pathways identified by WebGestalt and ClueGO. In the case of GBM, most of the genetic alterations in CDKN2A, PTEN and TP53 were deletions or mutations (Figure 3B), which result in a reduction of their expression in conjunction with the development of carcinogenesis [34,35]. In contrast, for EGFR, most of the genetic alterations were amplifications (Figure 3B), suggesting overexpression during the acceleration of glioma [36,37]. Moreover, the co-expression analysis illustrated synergistic effects between KRAS and TP53 as well as between PTEN and TP53 (Figure 3C), and the mutual exclusivity analysis revealed a tendency toward mutual inhibition between CDKN2A and TP53 and between EGFR and TP53 (Table 7). Thus, the results are consistent with the activation of TP53 by TMZ as a major driver of anti-tumor effects in GBM. Regarding LGG, TP53 (90%) exhibited the most prominent alterations (Figure 5A). Finally, a survival analysis of the 6 genes was conducted in glioma cases (LGG or GBM) using OncoLnc, and the results indicated that high levels of PTEN predicted a statistically significantly longer survival in LGG. This finding is in accordance with the diagnostic significance of PTEN mutation as a molecular marker for poor prognosis in LGG [38,39]. However, the expression levels of the other genes showed no statistically significant associations with survival in either LGG or GBM. Therefore, large, multicenter clinical trials are urgently required to investigate the association between the expression of TMZ-related genes and survival and to provide molecular biomarkers for prognosis in patients with glioma.

Figure 4: Gene network connected to CDKN2A/EGFR/HRAS/KRAS/PTEN/TP53 mined from the cBioPortal for Cancer Genomics. (A) Six selected genes and temozolomide-linked genes were applied as seed genes (indicated with a thick black border) to automatically harvest all other genes identified as altered in GBM (data taken from the Brennan CW study, Cell, 2013). (B) Neighboring genes connected to the 6 query genes, filtered by alteration frequency (%). Multidimensional genomic information and drug administration for a specific gene are exhibited for the seed genes (CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53). Darker red indicates an increased frequency of alteration (defined as mutation, homozygous deletion or copy number amplification). The filters used involved the highest genomic alteration frequency within the selected GBM study in addition to the query genes.

In summary, the query of publicly available computational databases may significantly advance research by (1) unraveling the critical role of TMZ in glioma and revealing the mechanisms of glioma; by providing a deeper understanding of TMZ, the current analysis can assist in TMZ bio-curation, and new biological experimental designs will significantly accelerate glioma biology research; and (2) facilitating early disease diagnosis and improving the accuracy of disease prognosis. The candidate genes identified by STRING, WebGestalt and the ClueGO database may facilitate the interpretation of genomic results and thus provide information useful for guiding research. However, several challenges remain to be investigated and solved: (1) In addition to the glioma pathway, other cancer pathways were identified by WebGestalt and ClueGO, such as bladder cancer and
non-small-cell lung cancer. Therefore, whether the connectivity shown in this paper between TMZ and glioma can be extended to other solid tumors remains to be investigated. (2) The roles of the drug targets detected by STRING must be further explored in the glioma pathway and in other signaling pathways under TMZ control. (3) Genomic alterations in the LGG samples differed from those in the GBM cases; these alterations may play a role in the transition from low-grade to high-grade glioma. Therefore, these genomic differences between LGG and GBM can be used to direct future research with reasonable experimental feasibility based on a functional network analysis. Overall, this paper provides a simple yet flexible procedure to test and validate reasonable hypotheses regarding genetic alterations in glioma by applying available biological information, such as in BioGPS, to assist researchers in translating basic studies to clinical applications.

Drug-target search

DrugBank is a web-based bioinformatic database containing comprehensive biochemical and pharmacological information about drugs, their pharmacological mechanisms and their targets [40,41]. The tool contains over 4100 drug entries, consisting of over 800 FDA-approved small molecules and biotech drugs as well as over 3200 experimental drugs. In this study, DrugBank was used to probe the category and indications for TMZ and the interaction between TMZ and its targets to provide insights regarding the TMZ target network.
Pathway enrichment analysis and network generation

STRING v10.0 is an online database tool for analyzing PPIs among differentially expressed genes and providing interaction information predicted by comparative genomics [42]. In this study, human PPIs for target genes involved in the three DNA repair systems were constructed using the STRING database. WebGestalt is a comprehensive web-based integrated data mining system that provides maximum flexibility for functional enrichment analyses [43]. Biochemical pathways and functions related to the TMZ gene set were specifically queried with a KEGG pathway enrichment analysis in WebGestalt, and the top 15 pathways with an adjusted p-value < 0.01 were selected. ClueGO is a Cytoscape plugin that visualizes non-redundant biological terms for large clusters of gene sets in a functionally grouped network [44,45]. In our study, the enrichment analysis of gene-GO terms and bio-pathways was statistically validated with the Cytoscape plugin ClueGO + CluePedia.

Cancer genomics data linked to TMZ and glioma survival analysis

cBioPortal is an open-access resource for interactively exploring multidimensional cancer genomics datasets [46,47]. OncoLnc is a tool for studying survival correlations and downloading clinical data combined with expression profiles for mRNAs, miRNAs and long noncoding RNAs (lncRNAs) [48]. In this study, the cBioPortal database was used to analyze the connectivity of TMZ-related genes across all glioma studies; these genes were then classified as altered or not altered in glioma samples. The altered genes were then further studied using a Kaplan-Meier analysis to evaluate glioma survival according to gene expression using OncoLnc.
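As a concrete illustration of how a STRING query like the one above can be scripted, the sketch below only composes the REST request URL; the endpoint and parameter names follow the STRING REST API as best I recall it and should be verified against the current documentation before use. No network call is performed:

```python
from urllib.parse import urlencode

# Endpoint shape per the STRING REST API docs (an assumption: confirm
# the current version, e.g. https://string-db.org/help/api/).
STRING_API = "https://string-db.org/api/tsv/network"

def string_network_url(genes, species=9606):
    """Build a STRING 'network' query URL for a set of human genes.
    Identifiers are carriage-return separated per the API convention."""
    params = {"identifiers": "\r".join(genes), "species": species}
    return STRING_API + "?" + urlencode(params)

url = string_network_url(["MSH2", "MSH6", "TOP2B", "TP53", "XRCC3"])
```

The returned TSV (when the URL is actually fetched) lists the predicted interaction partners that feed the intersection and enrichment steps described in the Results.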
Figure 3: Mining genetic alterations connected with the temozolomide-associated genes CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53 in glioma studies embedded in cBioPortal. (A) Overview of changes in the CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53 genes in the genomics datasets available in 5 different glioma studies. (B) OncoPrint: a visual summary of genomic alterations across a set of glioblastoma multiforme (GBM) samples (data taken from the Brennan CW study, Cell, 2013) based on a query of 6 genes (CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53). Different genomic alterations involving mutations and CNAs (copy number alterations: gene amplifications and homozygous deletions) are summarized, color-coded and displayed as % change in specific affected genes in individual glioma samples. Each row represents a gene, and each column represents a sample. Red bars represent amplifications, blue bars represent homozygous deletions, and green bars represent nonsynonymous mutations. (C) mRNA co-expression of KRAS with TP53 and of PTEN with TP53.

Figure 5: Genetic alterations and a visual display of the gene network connected to CDKN2A/EGFR/HRAS/KRAS/PTEN/TP53 in brain low-grade glioma (LGG) (based on the study of Johnson BE, Science, 2014). (A) OncoPrint: a visual summary of genomic alterations across a set of LGG samples based on a query of 6 genes (CDKN2A, EGFR, HRAS, KRAS, PTEN and TP53). (B) Neighboring genes connected to the 6 query genes, filtered by alteration frequency (%).

Figure 6: Survival analysis according to CDKN2A/EGFR/HRAS/KRAS/PTEN/TP53 mRNA expression in brain low-grade glioma (LGG) (A-F). A total of 510 LGG samples were included in the OncoLnc database and classified according to the mean value of the mRNA expression levels. Blue lines indicate lower levels of mRNA expression, while red lines indicate higher levels of mRNA expression.
Figure 7: Survival analysis according to CDKN2A/EGFR/HRAS/KRAS/PTEN/TP53 mRNA expression in glioblastoma multiforme (GBM) (A-F). A total of 152 GBM samples were included in the OncoLnc database and classified according to the mean value of the mRNA expression level. Blue lines indicate lower mRNA expression levels, while red lines indicate higher mRNA expression levels.
Outcomes and prognostic factors of repeat pulmonary metastasectomy

Abstract

OBJECTIVES Information on prognostic factors after repeat pulmonary metastasectomy (PM) is limited, and outcomes after a third PM are not well documented.

METHODS A single-institute retrospective study was conducted. Between 2000 and 2020, 68 patients underwent repeat PM for pulmonary metastases from various cancers. Outcomes and prognostic factors for the second PM and outcomes after the third PM were analysed.

RESULTS This study included 39 men and 29 women. The mean age at the second PM was 53.2 years. The primary tumours were soft tissue sarcoma in 24 patients, colorectal cancer in 19 and osteosarcoma in 10. The interval between the first PM procedure and the detection of pulmonary metastasis after the first PM was ≤12 months in 37 patients and >12 months in 31 patients. At the second PM, 20 patients underwent lobectomy or bilobectomy, and 48 underwent sublobar resection. Complete resection was achieved in 60 patients, and 52 patients experienced recurrence after the second PM. The 5-year relapse-free survival and overall survival rates after the second PM were 27% and 48%, respectively. Multivariable analysis revealed that an interval of ≤12 months between the first PM and the subsequent detection of pulmonary metastasis was a poor prognostic factor for both relapse-free survival and overall survival after the second PM. Seventeen patients underwent a third PM, 3 of whom achieved 3-year disease-free survival.

CONCLUSIONS Patients with a period of >12 months between the first PM and the subsequent detection of pulmonary metastases showed favourable outcomes and are thus considered good candidates for a second PM. A third PM may be beneficial for selected patients.
INTRODUCTION

Despite the lack of evidence from randomized trials, pulmonary metastasectomy (PM) is regarded as a viable treatment option for patients with pulmonary metastasis from various types of cancers [1,2]. Since first mentioned by Thomford et al. [3], several modifications have been made to the principal criteria for the indication of PM, and Kondo et al. [4] summarized these criteria for the current era. Since 2000, multidetector-row computed tomography (CT) has become widely used. Because thin-section CT provides a more accurate count of the number of pulmonary metastases than conventional CT [5], the indication for PM has been judged more precisely since the introduction of multidetector-row CT.

A certain proportion of patients who undergo PM experience pulmonary recurrence after the first PM procedure and meet the criteria for repeat PM. The PM procedure is performed a second time in these patients, and favourable outcomes have subsequently been reported in various types of cancers [1,2,6-10]. However, while there have been many reports on prognostic factors associated with survival after a first PM, information on prognostic factors after repeat PM is limited [2].

A relatively small percentage of patients who underwent a second PM and showed re-recurrence met the criteria for a third PM procedure. In the report of the International Registry of Lung Metastases, which included 5206 cases of PM, 15% of patients underwent repeat PM, and only 5% underwent PM ≥3 times [11]. Krüger et al. reported that 35 of 621 patients (6%) who underwent a first PM subsequently underwent a third PM [12]. Recently, Mills et al.
[13] reported the short-term outcomes of a third PM, concluding that a third PM can be performed safely and feasibly in select patients. However, there has been little information on the mid- to long-term outcomes after a third PM. In this clinical context, information on the prognostic factors for repeat PM and the mid- to long-term outcomes after a third PM is needed. Therefore, we analysed the outcomes and prognostic factors for the second PM, as well as the outcomes and clinical course after the third PM, based on 2 decades of experience at our institution.

PATIENTS AND METHODS

Between 2000 and 2020, a total of 599 patients underwent PM for pulmonary metastases originating from malignancies other than primary lung cancer at the Osaka International Cancer Institute. After excluding 50 patients with insufficient data or a follow-up period of <3 months, 549 patients were analysed in this retrospective study. Among these patients, 68 underwent repeat PM, and this cohort was subjected to further analysis. It is noteworthy that planned staged bilateral pulmonary resection was regarded as a single PM procedure in the present study and was not counted as a repeat PM. Patients who underwent surgical biopsy were also excluded from this study. Preoperative diagnoses of pulmonary nodules were made on the basis of chest CT findings. In all patients, the primary tumours were pathologically diagnosed prior to pulmonary resection, and the primary tumour received treatment that included surgery or heavy ion radiotherapy. In all cases, lung specimens were histologically evaluated, and pulmonary metastases were diagnosed by pathologists. Clinical information was collected from the medical records at our hospital.
In our institution, the mode of treatment is determined by a multidisciplinary cancer board, which includes thoracic surgeons, medical oncologists, radiation oncologists and primary tumour specialists such as colorectal surgeons, orthopaedic surgeons and head and neck surgeons. Patients generally underwent PM after meeting the following criteria [4]: (i) complete resection of the pulmonary metastasis (or metastases) was considered achievable; (ii) the metastatic lesions were limited to the lungs, or extrapulmonary distant metastases were already controlled if present; (iii) the patient's primary tumour was already controlled or controllable; (iv) lymph node metastasis from the pulmonary lesion was determined to be absent on preoperative evaluation; and (v) the general condition of the patient was good, and the patient's respiratory function was sufficient to tolerate pulmonary resection. Repeat PM was performed if the patient met the criteria for the first PM. The type of resection and surgical approach were selected according to the size and location of the recurrent pulmonary metastases. In our institution, intraoperative lavage cytology was routinely performed to analyse the surgical margins, as previously described [14].

The indications for perioperative chemotherapy and the timing of chemotherapy were determined by the surgeons or physicians in charge after considering the extent of the disease and the general condition of the patient.

Follow-up was generally based on chest X-ray or chest CT, physical examination and blood chemistry evaluations, which were performed every 3-6 months after the first PM.
In the survival analysis of patients with pulmonary metastases after the first PM, comparing the survival of patients who underwent a second PM with that of patients who did not, overall survival (OS) was defined as the time interval between the detection of pulmonary recurrence after the first PM and death from any cause. The significance of differences between the 2 groups was calculated using the generalized Wilcoxon test. For the further survival analysis of patients who underwent a second PM, OS was defined as the time interval between the second PM and death from any cause, and relapse-free survival (RFS) was defined as the time interval between the second PM and the first recurrence of cancer or death from any cause. The follow-up period was defined as the interval between the date of pulmonary resection and the date of death or latest follow-up. The data cut-off date was September 10, 2022. The median follow-up times after the first and second PM in the present study were 46 (range: 3-273) and 36 (range: 3-220) months, respectively.

Statistical analyses were performed using the JMP Pro 14.2.0 software program (SAS Institute, Cary, NC, USA). The data are expressed as mean ± standard deviation or median values. RFS and OS after pulmonary resection were analysed using the Kaplan-Meier method. The Cox proportional hazards model was used to assess the effects of the covariates on OS and RFS. Statistical significance was set at P < 0.05. Factors with P values <0.05 in the univariate analysis were used for the subsequent multivariable analysis.

The study protocol was approved by the Institutional Review Board of the Ethics Committee of the Osaka International Cancer Institute (control number 22113).
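To illustrate the product-limit method referred to above, the Kaplan-Meier survival estimate can be sketched in a few lines of plain Python. The follow-up data below are hypothetical and purely illustrative; they are not taken from the study cohort, and the function is a minimal sketch rather than a replacement for a statistics package.

```python
from typing import List, Tuple

def kaplan_meier(times: List[float], events: List[int]) -> List[Tuple[float, float]]:
    """Kaplan-Meier product-limit estimate for right-censored survival data.

    times  -- follow-up time for each patient (e.g. months)
    events -- 1 if death was observed at that time, 0 if censored
    Returns the (time, survival probability) steps of the curve.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)   # patients still under observation
    surv = 1.0             # running product of conditional survival
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = exits = 0
        # Group every patient whose follow-up ends at time t
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            exits += 1
            i += 1
        if deaths:  # censored-only times shrink the risk set but add no step
            surv *= (at_risk - deaths) / at_risk
            curve.append((t, surv))
        at_risk -= exits
    return curve

# Hypothetical follow-up data in months (event flag 1 = died, 0 = censored)
times = [3, 12, 12, 24, 36, 48, 60, 60]
events = [1, 1, 0, 1, 0, 1, 0, 0]
print([(t, round(s, 3)) for t, s in kaplan_meier(times, events)])
```

A figure such as the '5-year OS' quoted in the text corresponds to reading such a step curve at 60 months; the Cox model then relates covariates such as the disease-free interval to the hazard underlying this curve.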
RESULTS

The primary tumours of the 549 patients who underwent a first PM are shown in Supplementary Material, Table S1. The most common were colorectal cancer (CRC) in 192 (35%), head and neck cancer in 74 (13%) and soft tissue sarcoma (STS) in 74 (13%) patients. Of these, 313 patients (57%) experienced recurrence. Among these, 68 patients underwent a second PM (the 'with second PM' group) and 68 patients experienced lung-limited recurrence and did not undergo a second PM (the 'without second PM' group). The remaining 177 patients experienced recurrence other than lung recurrence, or recurrence at both the lung and other sites, and did not undergo a second PM. The treatment modalities in the 68 patients who experienced lung-limited recurrence and did not undergo a second PM were as follows: chemotherapy (n = 42), other systemic therapy (n = 3), radiotherapy (n = 6), radiofrequency ablation (n = 5), chemoradiotherapy (n = 1), best supportive care (n = 8) and unknown (n = 3). The OS of patients with and without a second PM after the detection of pulmonary recurrence following the first PM is shown in Supplementary Material, Fig. S1. The OS of patients with a second PM was significantly better than that of patients without a second PM (5-year OS: 53% vs 41%, P = 0.01).

A further analysis of the 68 patients who underwent a second PM was conducted, and the characteristics of these patients are shown in Table 1. The mean age at the time of primary tumour treatment was 48.3 years. The primary tumours were STS in 24 (35%) patients, CRC in 19 (28%) and osteosarcoma in 10 (15%). Sixty-seven patients (98%) underwent treatment including surgery for the primary tumour. The interval between treatment for the primary tumour and the first PM was none (synchronous) in 5 (7%), 1-24 months in 33 (49%) and >24 months in 30 (44%) patients. The interval between the first PM procedure and the subsequent detection of pulmonary metastasis was ≤12 months in 37 patients (54%) and >12 months in 31 patients (46%).
The surgical factors for the first and second PM are listed in Table 2. The rate of solitary metastasis was higher at the second PM (53% at the first PM vs 75% at the second PM). At the second PM, 46 patients (68%) underwent surgery on the ipsilateral side of the first PM, that is, they underwent redo thoracotomy on the same side. The rate of anatomical resection (i.e. segmentectomy or lobectomy) was higher at the second PM (26% at the first PM vs 44% at the second PM). Furthermore, at the second PM the rate of thoracotomy was higher, the operation time was longer, the blood loss volume was greater and the rate of postoperative complications was higher. Complete resection was achieved in all 68 patients at the first PM and in 60 patients (88%) at the second PM. In terms of the surgical margin of the lung parenchyma, intraoperative lavage cytology was performed and a negative margin was confirmed in 62 patients who underwent a second PM. The following postoperative complications occurred after the second PM: prolonged air leak (n = 2), atrial fibrillation (n = 2), empyema (n = 2) and others (n = 4).

After the second PM, 52 patients (76%) experienced relapse. At the time of data cut-off, 29 patients were alive and 39 patients had died. The 5-year RFS and OS rates after the second PM were 27% and 48%, respectively (Fig. 1a/b).

The factors influencing RFS after the second PM are shown in Table 3. Multivariable analysis showed that age (≥60 years), an interval between the first PM and the subsequent detection of pulmonary metastasis of ≤12 months and incomplete resection at the second PM were poor prognostic factors for RFS. The results of the analysis of the factors influencing OS after the second PM are shown in Table 4. Multivariable analysis showed that an interval between the first PM and the subsequent detection of pulmonary metastasis of ≤12 months was a poor prognostic factor for OS. The clinical course after the second PM is shown in Fig.
2. Among the 52 patients who experienced relapse after the second PM, the sites of relapse were pulmonary metastasis alone in 30, distant metastasis outside the chest cavity in 6, pulmonary metastasis and pleural dissemination in 4, pleural dissemination alone in 3, mediastinal lymph node metastasis in 3, pulmonary metastasis and distant metastasis outside the chest cavity in 3, local relapse of the primary tumour in 2 and pulmonary metastasis with surgical margin relapse of the pulmonary metastasis in 1 patient.

Seventeen patients underwent a third PM, 5 patients a fourth PM, 2 patients a fifth PM and 1 patient a sixth PM. The characteristics of the third PM are presented in Table 5. One patient (6%) experienced a postoperative complication (atrial fibrillation). After the third PM, 13 (76%) patients experienced recurrence. At the time of writing this report, 6 patients were alive without disease, 1 patient was alive with disease and 10 patients had died. The 3-year RFS and OS rates after the third PM for patients who underwent the third PM (n = 17) were 22% and 53%, respectively (Fig. 3a/b). The details of the 6 patients who eventually had no evidence of disease are shown in Supplementary Material, Table S2. Of note, 3 out of 17 patients (18%) who underwent the third PM achieved 3-year disease-free survival.

DISCUSSION

In this study, the 5-year RFS and OS rates after the second PM were 27% and 48%, respectively. An interval between the first PM and the subsequent detection of pulmonary metastasis of ≤12 months was identified as a poor prognostic factor for both RFS and OS after the second PM. Seventeen patients underwent a third PM, of whom 3 (18%) achieved 3-year disease-free survival.
Repeat PM has been conducted in various types of cancers, with a particularly large number of reports concerning sarcoma and CRC. Reported rates of patients who undergo a second PM among those who have already undergone a first PM are 26-43% in sarcoma and 15-19% in CRC [15-18]. Many reports have demonstrated favourable outcomes after the second PM.

(Table footnotes a,b: in 6 patients who underwent two-stage pulmonary resection for bilateral metastases, data from the preceding pulmonary resection procedure were used. PM: pulmonary metastasectomy; SD: standard deviation.)

In terms of postoperative complications, Forster et al. [19] reported that the postoperative cardiopulmonary complication rate and the median length of stay were not significantly different between the first and second PM. In terms of long-term outcomes, the survival of patients who underwent a second PM is reportedly equivalent to or better than that of patients who underwent the first PM alone [20]. Furthermore, the survival after the second PM (calculated from the second PM) is reportedly almost the same as or better than that after the first PM (calculated from the first PM) [17,21]. Some reports showed repeat resection to be a good prognostic factor in patients who underwent a first PM [18,22]. In the present study, the 5-year RFS and OS rates after the second PM were 27% and 48%, respectively. Sixteen patients did not experience any recurrence after the second PM. Of these, 13 patients received no additional treatment, and 8 achieved 5-year disease-free survival. Based on these observations, it is considered that an almost 'cured' status can be achieved at a certain rate with repeated PM.
There is no firm evidence of the effectiveness of repeat PM other than observational studies. Fiorentino and Treasure [23] addressed this issue, maintaining that reselection of the most favourable patients for repeat PM is the likely reason for any differences in survival between the initial and subsequent PM procedures. In addition to PM, stereotactic body radiation therapy and radiofrequency ablation are currently available local treatment options for pulmonary metastasis. However, while these treatments provide a favourable local control rate, comparison of PM with these treatments is difficult because their indications are generally limited to patients who cannot tolerate surgery [24,25]. The present study demonstrated that survival after pulmonary recurrence following the first PM was significantly better in patients who underwent a second PM than in those who did not (5-year OS: 53% vs 41%, P = 0.01). However, it should be noted that this difference does not directly mean that repeat PM is a better mode of treatment than other treatments.

In this situation, retrospective studies comparing repeat PM and nonsurgical treatment using well-matched controls are currently the best evidence available. Chudgar et al. conducted a weight-based propensity score-matched analysis of 341 STS patients who experienced pulmonary recurrence after the first PM. Even after controlling for the characteristics of the primary tumour and metastatic disease, the survival of patients who underwent repeat PM was still significantly better than that of those who underwent nonsurgical treatment [15]. Hishida et al.
[18] analysed data from 216 patients who experienced limited lung recurrence after initial resection of pulmonary metastases from CRC. In their study, 132 (61%) patients underwent repeat lung resection, and their 5-year OS rate was 75.3%, while that of the patients who did not undergo repeat lung resection was 23.3%. Furthermore, in multivariable analyses of factors predicting survival in patients with lung-limited recurrence after a first PM for metastasis from CRC, repeat lung resection was identified as an independent predictor of better survival. Based on the findings of previous studies and our own experience, repeat PM is currently regarded as a first-choice treatment for patients who develop pulmonary recurrence after PM and who meet the widely used criteria for PM [4].

In the present study, it was demonstrated that an interval between the first PM procedure and the subsequent detection of pulmonary metastasis of ≤12 months was a poor prognostic factor for both RFS and OS after the second PM. Although substantial information has been gathered regarding prognostic factors after the first PM, data on prognostic factors after the second PM are limited [2]. Kandioler et al. [26] conducted a study on prognostic factors after the second PM, reporting that a disease-free interval (DFI) >1 year was significantly associated with a survival advantage beyond the last operation. Reports on prognostic factors after the second PM published after 2016 are shown in Supplementary Material, Table S3 [15,18,20,27,28]. Four reports included CRC patients, and 1 included STS patients. Among them, Ihn et al. [27] analysed the outcomes of 39 patients with CRC who underwent a second PM and showed that a recurrent DFI of <12 months was a poor prognostic factor for OS. Given these present and previous findings, patients with a DFI >12 months are expected to demonstrate favourable long-term outcomes and are thus considered good candidates for a second PM.
To date, there have been few reports of a third PM. In the present study, the site of relapse was pulmonary metastasis alone in 30 out of 52 patients (58%) who experienced relapse after the second PM, suggesting that some patients suffer from pulmonary metastasis alone with no other site of relapse, so that even a third PM can be performed in select patients. Reportedly, only 5% of patients who undergo PM undergo the procedure ≥3 times [11]. Recently, Mills et al. [13] analysed the short-term outcomes of 117 patients who underwent a third PM (60 patients with sarcoma and 37 with CRC). In their report, the estimated blood loss did not differ markedly between the first and second PM procedures; however, it significantly increased during the third procedure. The rate of wound complications at the third PM was also significantly higher than that at the second PM, and the likelihood of prolonged air leakage increased incrementally with each subsequent operation. In the present study, the operation time tended to be longer, blood loss tended to be higher and the rate of postoperative complications tended to be higher at the second PM than at the first PM. In terms of the third PM, the operation time and blood loss were not markedly different from those at the second PM, and the rate of postoperative complications was lower than that at the second PM. This observation in our study may be attributed to differences in patient characteristics; that is, patients who underwent a third PM were younger than those who underwent a second PM and thus had fewer comorbidities.

To our knowledge, there has been little information on the mid- to long-term outcomes after a third PM. Krüger et al.
[12] reported the outcomes of repeat PM from a multicentre trial. In their report, a total of 621 patients underwent a first PM, and of these, 64 patients underwent a second PM, 35 a third PM, 12 a fourth PM and 6 a fifth PM. They reported favourable long-term outcomes of repeat PM, with the following 5-year survival rates after each procedure: first PM, 63.3%; second PM, 50.9%; third PM, 74.4%; fourth PM, 83.3%; and fifth PM, 60.0%. In this study, 17 patients underwent a third PM procedure. As shown in Supplementary Material, Table S2, 6 of these patients eventually had no evidence of disease, and 3 patients (18%) achieved 3-year disease-free survival. Although the number of patients was very small, the third PM seems beneficial in certain select patients, based on our experience. Based on Krüger's report and our own experience, a third PM can be regarded as a viable treatment option for recurrent pulmonary metastasis.

Limitations

Several limitations of the present study warrant mention. First, the study analysed patients who were treated over 2 decades, during which there were advances in radiological and therapeutic modalities. In particular, the outcomes of PM were largely affected by changes in the modalities for assessing extrapulmonary metastases. Second, the follow-up period was relatively short (median, 36 months). Third, this was a retrospective single-centre analysis and the number of patients was limited. This study is associated with the inherent limitations of retrospective studies, and it would be ideal to conduct a multivariable analysis in a large cohort. To clarify the prognostic factors more precisely, further multicentre studies with larger patient numbers are needed.
CONCLUSION

Patients with a period of >12 months between the first PM and the subsequent detection of pulmonary metastases showed favourable outcomes and are thus considered good candidates for a second PM. A third PM may be beneficial for selected patients.

Figure 1: A survival analysis of 68 patients who underwent pulmonary metastasectomy a second time. PM: pulmonary metastasectomy. (a) Relapse-free survival after second pulmonary metastasectomy. (b) Overall survival after second pulmonary metastasectomy.

Figure 2: The clinical course after the first pulmonary metastasectomy procedure. AWD: alive with disease; DOAD: died of another disease; DOD: died of disease; NED: no evidence of disease; PM: pulmonary metastasectomy; Pts: patients.

Figure 3: A survival analysis of 17 patients who underwent pulmonary metastasectomy a third time. PM: pulmonary metastasectomy. (a) Relapse-free survival after third pulmonary metastasectomy. (b) Overall survival after third pulmonary metastasectomy.

Table 2: Surgical factors at first and second pulmonary metastasectomy.

Table 3: Analyses of factors influencing the relapse-free survival after second pulmonary metastasectomy. CI: confidence interval; PM: pulmonary metastasectomy.

Table 4: Analyses of factors influencing the overall survival after second pulmonary metastasectomy.

Table 5: Characteristics at third pulmonary metastasectomy.
Maternal malnutrition and anaemia in India: dysregulations leading to the 'thin-fat' phenotype in newborns

Maternal and child malnutrition and anaemia remain the leading factors for health loss in India. Low birth weight (LBW) offspring of women suffering from chronic malnutrition and anaemia often exhibit insulin resistance and infantile stunting and wasting, together with increased risk of developing cardiometabolic disorders in adulthood. The resulting self-perpetuating and highly multifactorial disease burden cannot be remedied through uniform dietary recommendations alone. To inform approaches likely to alleviate this disease burden, we implemented a systems-analytical approach that had already proven its efficacy in multiple published studies. We utilised previously published qualitative and quantitative analytical results of rural and urban field studies addressing maternal and infantile metabolic and nutritional parameters to precisely define the range of pathological phenotypes encountered and their individual biological characteristics. These characteristics were then integrated, via extensive literature searches, into metabolic and physiological mechanisms to identify the maternal and foetal metabolic dysregulations most likely to underpin the 'thin-fat' phenotype in LBW infants and its associated pathological consequences. Our analyses reveal hitherto poorly understood maternal nutrition-dependent mechanisms most likely to promote and sustain the self-perpetuating high disease burden, especially in the Indian population. This work suggests that it most probably is the metabolic consequence of 'ill-nutrition' – the recent and rapid dietary shifts to high salt, high saturated fats and high sugar but low micronutrient diets – over an adaptation to 'thrifty metabolism' which must be addressed in interventions aiming to significantly alleviate the leading risk factors for health deterioration in India.
Introduction

India is home to almost one-fifth of the world's population. People living in each of its twenty-nine states and seven union territories differ in ethnic origins, cultures, religions and socio-economic means, and are exposed to a wide variety of often difficult climatic and ecological conditions as well as to numerous other factors affecting their health (1,2). A recent survey (3) shows that the overall disease burden per person varies considerably between states, the burden rate due to the major diseases ranging five to ten times amongst states. However, contrary to the all too often repeated view presenting India as the 'diabetes capital of the world' (4,5), it is maternal and child malnutrition and anaemia that are the leading risk factors for the burden of health problems in India (3). The primary consequences of these are insulin resistance and infantile stunting and wasting; diabetes and obesity appear as low-prevalence secondary consequences and certainly not as primary causes of the disease burden (3).

Abbreviations: 5-mTHF, 5-methyltetrahydrofolate; BAT, brown adipocyte tissue; EAA, essential amino acids; FA, fatty acid; GSH, glutathione; Hcy, homocysteine; LBW, low birth weight; PE, phosphatidylethanolamine; SAM, S-adenosyl methionine; TG, triacylglycerol; WAT, white adipocyte tissue.

Dietary behaviours, including the foods preferentially consumed and meal frequency and timing, are heavily influenced by ecology, demography, regions, religions, traditions, seasons, cultural specificities, economic burden and psychosocial beliefs (6). Such beliefs around food choices are extremely deep-rooted and are mostly practised by women, especially during pregnancy and lactation (7). These practices determine what they eat, how much, why and when. The consequences of such beliefs are seen in the health of women of childbearing age, of newborn babies and of infants and adolescents (8).
Across India, the dietary intakes of children and adults in rural and urban areas show gross inadequacy of all nutrients and poor protein quality (9). Maternal and child malnutrition is characterised by low energy intake (eating less often and in small portions) and low dietary diversification. The usual diets are low in proteins, vitamins and micronutrients but rich in carbohydrates and saturated fats. Concurrently, hygiene conditions can vary from very poor to excellent, not only between rural areas but within urban centres as well (10).

A suboptimal prenatal environment, in particular global nutrient restriction during the periods of placental and embryonic development, is increasingly being recognised as programming physiology, enhancing the predisposition for metabolic diseases in adult life (11,12). Low birth weight (LBW) offspring of women suffering from malnutrition, clinical anaemia and chronic micronutrient shortages, including vitamins, are characterised by elevated subcutaneous adiposity but very low visceral adiposity (the thin-fat phenotype) (13,14). They also exhibit insulin resistance and infantile stunting and wasting, together with an increased risk of developing cardiometabolic disorders in adulthood, hence promoting a self-perpetuating, highly multifactorial disease burden which cannot be remedied through uniform dietary recommendations (15).

To propose coherent modes of intervention likely to alleviate this multifactorial disease burden, it appears necessary to first understand the physiological roots of the pathological phenotypes encountered within the affected populations and communities. To this effect, we implemented a previously described (16,17) systems-analytical approach (Computer-Assisted Deductive Integration [CADI]) which had already proven its efficacy in multiple biological contexts (18-21).
We utilised the results of previously published field studies (22-27), undertaken in rural as well as urban populations, addressing qualitative phenotypic biomarkers together with the corresponding maternal and infantile metabolic and nutritional parameters (glucose tolerance, circulating levels of individual amino acids, haematocrit, haemoglobin levels, morphological indices and inflammatory parameters) to precisely define the range of pathological phenotypes encountered and their individual biological characteristics. These characteristics were then integrated, via extensive literature searches, into metabolic and physiological mechanisms to identify the dysregulations most likely to underpin the 'thin-fat' phenotype and its associated self-perpetuating high disease burden. Our studies reveal hitherto poorly understood maternal nutrition-dependent mechanisms most likely to promote and sustain this self-perpetuating situation, suggesting clear avenues for interventions likely to significantly alleviate the leading risk factors for health deterioration in India.

However, in this context, since B12-dependent physiological processes such as odd carbon chain-length fatty acid (FA) metabolism, serine-glycine interconversion (see later) and nucleic acid synthesis are clearly functional, low circulating vitamin B12 levels, while certainly indicative of deficient vitamin intake (28), may be representative of high cellular uptake for metabolic purposes rather than of low availability (29). Furthermore, it seems highly unlikely that a deficiency in vitamin intake could affect B12 only. It appears more likely that all essential vitamins would be similarly affected, in particular vitamins A, B1, B2, B6, B8, B9 and C (30).
Circulating levels of non-essential and essential amino acids (NEAA and EAA, respectively) were generally low, with the notable exception, most particularly in pregnant women, of aspartic acid, which was extremely elevated, followed by elevated serine, threonine and histidine (in descending order; Table 1).

Micronutrient deficits and their effects

Micronutrients, and in particular zinc and selenium, act as key regulators of metabolic and immune functions (31,32). Zinc deficiency in human subjects is now known to be an important malnutrition problem worldwide. It is more prevalent in areas of high cereal and low animal food consumption, not because the diet is necessarily low in zinc and selenium but because phytic acid is the main known inhibitor of zinc absorption (33), while selenocysteine, the organic form of selenium most easily absorbed by human subjects, dominates in products of animal origin (32). Compared to adults, infants, children, adolescents and pregnant and lactating women have increased requirements for zinc and selenium and are at increased risk of deficiencies. Zinc deficiency results in growth failure, while the epidermal, gastrointestinal, central nervous, immune, skeletal and reproductive systems are the organs clinically most affected by zinc and selenium deficiencies (34,35). Hence, in a context characterised by significant maternal malnutrition, micronutrient deficiency during the periconceptional period, and in particular zinc and selenium deficiencies, is likely to result in widespread, low-level but persistent maternal as well as foetal metabolic dysregulations with deleterious consequences for placentation and embryogenesis, negatively affecting foetal development as a whole (36). Indeed, an adequate supply of these trace elements is essential for healthy foetoplacental development.
For instance, zinc plays major functional roles in zinc-dependent enzymes, zinc-binding factors and zinc transporters required in a variety of complex mechanisms during cell replication, maturation and adhesion, such as DNA and RNA metabolism, signal recognition and transduction, gene expression and hormone regulation (37). Among the proteins encoded in the human genome which require zinc for their physiological function are 397 hydrolases, followed by 302 ligases, 167 transferases, 43 oxidoreductases and 24 lyases/isomerases. Proteinases include carboxypeptidases, aminopeptidases, matrix metalloproteinases and peptide hormone processing enzymes/convertases, indicating a wide role of zinc in proteostasis. Hydrolases also include zinc-dependent phosphodiesterases, phospholipases, alkaline and acid phosphatases and pyrophosphatases, with roles in regulating second messenger metabolism and signal transduction pathways. Zinc is used in DNA and protein (histone) modification in demethylases and deacetylases, in DNA and RNA metabolism and in DNA repair enzymes. Another significant group of zinc enzymes is involved in regulation: transferases such as geranyl and farnesyl transferase and palmitoyl transferases, and ligases such as E3 ubiquitin-protein ligases, SUMO conjugating enzymes and the corresponding hydrolases (38,39).

In parallel, selenium is vital for efficient antioxidant defence in both mother and foetus. Weak placental antioxidant defence due to low maternal plasma selenium concentration increases the risk of small-for-gestational-age infants (40). Many of the beneficial effects of selenium are attributable to its presence as selenocysteine in the selenoproteins, a small but vital group of proteins (41,42). Selenoproteins W and N (SELENOW and SELENON) are both required for muscle growth, differentiation and regeneration, as well as satellite cell maintenance in skeletal muscle (43-46).
Selenoprotein S (SELENOS) is involved in the degradation process of misfolded endoplasmic reticulum (ER) luminal proteins (47) , while selenoprotein T (SELENOT) is involved in the control of glucose tolerance by contributing to prolonged adenylate cyclase-activating polypeptide 1 (ADCYAP1/ PACAP)-induced insulin secretion (48) while also contributing to increased quantitative insulin sensitivity (49) . Growth retardation, poor appetite and mental lethargy are some of the manifestations of chronically zinc-deficient human subjects (50) . Furthermore, low serum zinc has been reported as a major predictor of anaemia mediating the effects of low selenium upon oxidative stress-dependent haemoglobin denaturation and erythrocytes osmotic fragility (51) , whereas vitamin B12 and folate deficiencies were found not to be associated with anaemia (52) . Table 2 lists the selected key enzymes that are either zinc-/selenium-dependent or the activities of which are controlled by zinc/selenium and are of prime relevance in the context of maternal malnutrition. Growth retardation, poor appetite and mental lethargy are some of the manifestations of chronically zinc-deficient human subjects (50) . Furthermore, low serum zinc has been reported as a major predictor of anaemia mediating the effects of low selenium upon oxidative stress-dependent haemoglobin denaturation and erythrocytes osmotic fragility (51) , whereas vitamin B12 and folate deficiencies were found not to be associated with anaemia (52) . The foods with the highest zinc contents include meat, shellfish, eggs, nuts and seeds such as hemp, flax, pumpkin or squash. Legumes, such as chickpeas, The data presented were collected during previously published field studies (22)(23)(24)(25)(26)(27) . V1 = enrolled at 1st trimester (n 75); V2 = 2nd trimester + 30 newly enrolled during their 2nd trimester of pregnancy (n 72 + 30); 34w = 34th week of pregnancy (n 30 remaining out of 92). a Essential amino acids. 
lentils and beans, all contain substantial amounts of zinc, while whole grains like wheat, quinoa, rice and oats contain some zinc (USDA Food Composition Databases). The foods with the highest selenium content include Brazil nuts, fish, meat, dairy products, eggs and bananas (USDA Food Composition Databases). However, low-quality and less varied plant-based diets in low-income communities have a high content of phytic acid [myo-inositol hexaphosphate (InsP6)] and associated magnesium, potassium and calcium salts, termed 'phytate', which together constitute potent inhibitors of iron and zinc absorption and bioavailability (79)(80)(81). Hence, the origin of high to severe anaemia in Indian women and children stands to be a direct consequence of malnutrition compounded by low micronutrient intake, and in particular low availability of zinc and selenium, resulting in suboptimal haemoglobin synthesis associated with oxidative stress-dependent haemoglobin denaturation and osmotic fragility of erythrocytes (see earlier). Furthermore, maternal micronutrient deficiency in association with chronic malnutrition will most probably provoke significant maternal metabolic skewing.

Maternal metabolic skewing and its consequences

Transmethylation cycle (one-carbon metabolism) alterations. Maternal dietary methyl donor intake (methionine, folate and choline) and cofactors (zinc and vitamins B2, B6 and B12) play crucial roles in one-carbon metabolism and DNA methylation in the foetus and placenta, impacting foetal growth and lifelong health outcomes (82). However, in a context characterised by significant maternal malnutrition, not only will such dietary intakes be highly restricted, but the deficiencies in cofactor intake apparently promote down-regulation of the THF-transmethylation cycle.
The resulting elevation in Hcy and 5mTHF concurrently with low GSH circulating levels observed in the populations studied probably does not arise from low vitamin B12 availability but from attenuation of zinc-dependent methionine synthase and betaine-homocysteine S-methyltransferase enzymatic activity. Here, high Hcy circulating levels lead to GSH depletion through uncoupling of the activities of NOX (NADPH-dependent) from those of XO [S-adenosylhomocysteine (SAH)-dependent] and NOS (BH4-dependent), both of which are negatively affected by dysregulation of the transmethylation pathway (83). This dysregulation, resulting in S-adenosyl methionine (SAM) deficiency, besides reducing methylation potential, also has an effect upon choline biosynthesis from phosphatidylethanolamine (PE), each step of which requires methyl groups donated by SAM (84) (Fig. 1). In pregnancy, increased maternal Hcy levels are associated with increased risk of adverse pregnancy outcomes such as intrauterine growth restriction leading to small size for gestational age at birth and LBW (85).

Table 2. List of the key enzymes that are either zinc-/selenium-dependent or the activities of which are controlled by zinc/selenium and are of prime relevance in the context of maternal malnutrition.

Zn-dependent enzymes:
- Aminolevulinic acid dehydrase (ALAD): catalyses the second step in the synthesis of the haem portion of haemoglobin, thus playing a key role in haematopoiesis (53)(54)(55).
- Methionine synthase (MTR, also B12-dependent) and betaine-homocysteine S-methyltransferase (BHMT): both MTR and BHMT play key roles in the transmethylation-tetrahydrofolate (THF) cycle, the attenuation of which leads to Hcy and 5-mTHF accumulation (36,56,57).
- Glyoxalase I (GLOI): involved in metabolic detoxification, is active in erythrocytes, and requires GSH as a cofactor (58,59).
- Placental alkaline phosphatase (ALPP): plays key roles in placental development and nutrient transfer.

Se-dependent enzymes:
- Thyroxine deiodinase type III (DIO3): highly expressed in placenta, regulates circulating foetal thyroid hormone concentrations throughout gestation; essential for the regulation of thyroid hormone inactivation during development (76,78).

In selenium deficiency, there is a strict hierarchy of selenium supply to specific tissues and also to different selenoenzymes within a tissue. Concentrations of selenium and selenoenzymes are greatly decreased in liver, kidney and muscle, whereas those in the brain and endocrine organs such as the thyroid gland are less affected. Within different organs, specific selenoproteins are retained at the expense of others, presumably to preserve the most important aspects of metabolism in selenium deficiency. For example, in the selenium-deficient rat, DIO1 is better retained than cytoplasmic glutathione peroxidase in thyroid, liver and kidney, presumably in order to preserve thyroid function and iodothyronine de-iodination and to limit changes in plasma T4, T3 and TSH (76).
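For readers who want to query these micronutrient dependencies programmatically, the Table 2 rows can be captured in a small lookup structure. This is an illustrative sketch only: the grouping and roles follow the table, but the data structure and function are ours.

```python
# Illustrative lookup of the micronutrient-dependent enzymes listed in Table 2.
# Entries paraphrase the table rows; the structure itself is an assumption.
ENZYME_COFACTORS = {
    "ALAD":  {"cofactor": "Zn", "role": "second step of haem synthesis (haematopoiesis)"},
    "MTR":   {"cofactor": "Zn", "role": "transmethylation-THF cycle (also B12-dependent)"},
    "BHMT":  {"cofactor": "Zn", "role": "transmethylation-THF cycle"},
    "GLOI":  {"cofactor": "Zn", "role": "metabolic detoxification in erythrocytes (GSH-requiring)"},
    "ALPP":  {"cofactor": "Zn", "role": "placental development and nutrient transfer"},
    "DIO3":  {"cofactor": "Se", "role": "foetal thyroid hormone inactivation in placenta"},
}

def enzymes_dependent_on(cofactor):
    """Return the Table 2 enzymes that depend on the given micronutrient."""
    return sorted(name for name, info in ENZYME_COFACTORS.items()
                  if info["cofactor"] == cofactor)
```

A query such as `enzymes_dependent_on("Se")` then returns the selenoenzyme entries, mirroring how the table separates zinc- and selenium-dependent activities.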
Dietary protein restriction in animals and marginal protein intake in human subjects cause characteristic changes in one-carbon metabolism that are further exacerbated by micronutrient deficiency, negatively impacting the health of the mother, impairing growth and reprogramming metabolism of the foetus, and causing long-term morbidity in the offspring (82,86). Amino acid metabolism alterations. Four amino acids (aspartic acid, serine, threonine and histidine) show elevated serum levels in malnourished pregnant women, and in particular aspartic acid is extremely elevated (see Table 1). Under protein deficiency, the NEAA aspartate becomes a significant metabolic hub, a major product of the oxidative glutaminolysis pathway and a required substrate for other anabolic pathways, including the synthesis of purines and pyrimidines (87). Besides playing a key role in the urea cycle as well as in pyrimidine synthesis, aspartate carries reducing equivalents in the malate-aspartate shuttle, which utilises the ready interconversion of aspartate and oxaloacetate in order to maintain mitochondrial oxidative phosphorylation (Fig. 2). Hence, extremely elevated serum aspartate levels could be a consequence of high biosynthesis and interconversion rates. Aspartate can be synthesised by the transamination of oxaloacetate using either alanine or glutamine, yielding aspartate and an α-keto acid (88)(89)(90). Interestingly, in malnourished pregnant women alanine, and even more so glutamine, show low circulating levels, possibly consistent with hypothetically increased aspartate de novo synthesis and interconversion rates. However, such mechanisms would be at the expense of amino acid supply and cannot be sustained indefinitely. Furthermore, glutamate dehydrogenase, which, in this scheme, catalyses the oxidative deamination of glutamate to α-ketoglutarate and ammonia, is zinc-dependent and its activity would be down-regulated by zinc deficiency.
This could be alleviated by increased FA oxidative catabolism, provided that sufficient L-carnitine is available. Since diets low in meat and dairy products lead to low L-carnitine uptake, this could be compensated for by de novo L-carnitine biosynthesis, which takes place mainly in skeletal muscle, kidney and liver, using L-lysine, an EAA, as primary substrate (91) (Fig. 3). In malnourished pregnant women, serum lysine and methionine levels are lower than those of most other EAAs, indicative of active de novo L-carnitine biosynthesis. However, this methylation-dependent mechanism implicates SAM as methyl group donor (92), hence placing further demands on the transmethylation cycle. In malnourished pregnant women, serum serine levels are also elevated, while glycine levels are not, suggesting down-modulated activity in serine hydroxymethyltransferase-mediated interconversion to glycine (synthesis of 5,10-methylene tetrahydrofolate from tetrahydrofolate) for the cytoplasmic synthesis of thymidylate, purines and methionine regeneration (94)(95)(96). Furthermore, high circulating levels of serine might also be indicative of down-regulated PE biosynthesis (97) which, together with SAM deficiency, could lead to low phosphatidylcholine synthesis. This may take particular importance in a context where the availability of choline-rich food items such as fish, crustaceans, meat and eggs is highly limited, since, in pregnancy, choline deficiency could worsen placental dysfunctions while promoting slow foetal growth as gestation progresses. In human subjects, choline and phosphatidylcholine are synthesised de novo via the PE N-methyltransferase (PEMT) pathway (98) but biosynthesis is not enough to meet physiological requirements (99). In the hepatic PEMT pathway, 3-phosphoglycerate (3PG) receives two acyl groups from acyl-CoA forming a phosphatidic acid. It reacts with cytidine triphosphate to form cytidine diphosphate-diacylglycerol.
Its hydroxyl group reacts with serine to form phosphatidylserine, which decarboxylates to ethanolamine and PE forms. A PEMT enzyme moves three methyl groups from three SAM donors to the ethanolamine group of the PE to form choline in the form of a phosphatidylcholine. Three SAHs are formed as a by-product (99). Most of the physiological requirements for phosphatidylcholine are met by channelling dietary choline through the CDP pathway, which utilises adenosine triphosphate (ATP), cytidine triphosphate (CTP) and diacylglycerol to generate phosphatidylcholine (Fig. 4). In a context dominated by maternal malnutrition, a low-protein diet will lead to inhibition of mammalian target of rapamycin complex 1 (mTORC1), thereby promoting autophagy as a mechanism maintaining EAA availability for protein synthesis and protective mechanisms (see later) while supplying ketogenic and glucogenic precursors, such as glutamine and alanine, for ATP-generating pathways (101)(102)(103). These effects would be amplified during pregnancy, triggering the placental mammalian amino acid response pathway and thereby programming the growth capacity of offspring not only in utero but also long after gestational protein restriction (104,105). Elevated plasma threonine, a member of the EAA group, could reflect zinc and/or vitamin B6 deficiencies, since the initial step in threonine catabolism to Krebs cycle precursors requires vitamin B6, the activation of which, via pyridoxal kinase, is zinc-dependent (106,107). However, this would also suggest sufficient dietary supply in EAAs which, in the context addressed here, is highly improbable. Hence, it appears more likely that elevated plasma threonine, arising from maternal autophagy, could supply the developing foetus with an immunostimulant which promotes thymus growth while concurrently promoting maternal innate immune defence functions (108).
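The PEMT route described above converts PE to phosphatidylcholine (PC) at a cost of three SAM-derived methyl groups per molecule, whereas the CDP-choline route, fed by dietary choline, consumes none. A minimal accounting sketch can make this methylation burden explicit (illustrative only; the function names are ours, the stoichiometry follows the text):

```python
# Simplified accounting of the SAM cost of de novo phosphatidylcholine (PC)
# synthesis via the PEMT pathway: each PC made from PE consumes three SAM
# methyl donors and releases three SAH by-products, as described above.
SAM_PER_PC_VIA_PEMT = 3  # three sequential N-methylations of the ethanolamine group

def pemt_methylation_cost(n_pc_molecules: int) -> dict:
    """Return SAM consumed and SAH produced for n PC molecules made via PEMT."""
    return {
        "SAM_consumed": n_pc_molecules * SAM_PER_PC_VIA_PEMT,
        "SAH_produced": n_pc_molecules * SAM_PER_PC_VIA_PEMT,
    }

def cdp_pathway_cost(n_pc_molecules: int) -> dict:
    """Dietary choline routed through the CDP pathway bypasses SAM entirely."""
    return {"SAM_consumed": 0, "SAH_produced": 0}
```

This contrast is why low dietary choline, in a SAM-deficient context, shifts phosphatidylcholine synthesis onto an already strained transmethylation cycle.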
Elevated plasma histidine, another member of the EAA group, might also play significant protective roles benefiting both the mother and the foetus, particularly in a context dominated by chronic anaemia. Histidine is essential in globin synthesis and erythropoiesis and has also been implicated in the enhancement of iron absorption from human diets. Histidine-deficient diets predispose healthy subjects to anaemia and accentuate anaemia in chronic uraemic patients (109). Furthermore, histidine plays key roles in the detoxification of cytotoxic oxidative stress metabolites such as reactive carbonyls (110). FA metabolism alterations. Under conditions characterised by a deficit in methyl donors and increased homocysteine levels, β-oxidation becomes deficient and hypertriglyceridaemia ensues (111), as reflected by high circulating TGs in anaemic Indian women (see earlier). Low amino acid availability, and in particular lysine, may further contribute to high circulating TG levels. Low lysine levels would lead to low L-carnitine de novo synthesis, while diets low in meat and dairy products would lead to low L-carnitine uptake (112). This, in turn, would further impede mitochondrial β-oxidation of FAs without affecting cytoplasmic FA synthesis from excess carbohydrate intake (113). Under these conditions, mitochondrial β-oxidation of medium-chain FA, including odd medium-chain FA, the catabolism of which requires the activity of B12-dependent methylmalonyl-CoA mutase, an enzyme indispensable in human metabolism (114), would remain functional. However, both microsomal α-oxidation, which requires Fe2+, vitamin C/GSH and thiamine (vitamin B1) as cofactors (115,116), as well as ω-oxidation, which requires haem iron protein such as microsomal or mitochondrial cytochrome P-450 (117) together with zinc-dependent alcohol dehydrogenase, are also likely to be impeded.
Hence, peroxisome-mediated β-oxidation of dietary long- and very long-chain FA and α-oxidation of dietary branched-chain FA stand to be favoured. However, this process would also increase oxidative stress, since the first step in peroxisome-mediated β-oxidation results in the generation of H2O2 and subsequent increase in Fe2+-dependent catalase activity (118). Here, the activity of the main ROS-controlling enzymes (SODs, GLO1, GPXs and TXNRDs) will be attenuated through zinc and selenium deficiencies, while GSH will be subjected to Hcy-mediated depletion. These phenomena stand to exert negative impacts upon placental functions. Functional placental alterations. During pregnancy, the characteristics of maternal blood biochemistry will necessarily constitute the nutritional supply provided to the developing foetus. Maternal hypertriglyceridaemia during pregnancy is correlated with foetoplacental endothelial dysfunction (119). This can be expected to result in constitutive mild placental (and consequently foetal) hypoxia and subsequent ER stress, which would then affect metabolic control via ATF4 and ATF6β (120). This would stand to further worsen the direct effects of maternal anaemia. Additionally, most of the serological maternal characteristics will also be transferred to the foetus via the foetoplacental endothelial system. Hence, in a context dominated by maternal malnutrition, the developing foetus will be constitutively supplied with low vitamins and micronutrients, an unbalanced amino acid supply, and high TGs and Hcy.

Fig. 1. The transmethylation (one-carbon metabolism) cycle (38,39). DHFR, dihydrofolate reductase; THF, tetrahydrofolate; MTHF, methylenetetrahydrofolate; ATP, adenosine triphosphate; ADP, adenosine diphosphate; SAM, S-adenosyl methionine; SAH, S-adenosyl-Hcy; CSE, cystathionase; GSH, glutathione; H2S, hydrogen sulphide. Figure adapted from Azzini et al. (85).
Furthermore, due to maternal selenium deficiency, the foetus will also experience a drastically reduced supply of thyroid hormones and in particular low bioactive T3 supply. Furthermore, maternal deficiencies in protein intake will trigger the placental mammalian amino acid response pathway, thereby programming the growth capacity of offspring not only in utero but also long after gestational protein restriction (104,105).

Foetal developmental dysregulations leading to the LBW 'thin-fat' phenotype

The cord blood of LBW infants is characterised by low adiponectin levels which correlate with hyperinsulinaemia and differential distribution of fat depots, giving rise to the newborn's thin-fat phenotype characterised by insulin resistance and increased subcutaneous fat but decreased intra-abdominal fat. Given the marked differences in metabolic and pathophysiological characteristics which differentiate subcutaneous and visceral fat depots (121,122), this situation is radically different from that observed in normal-weight obese individuals of Asian and Indian descent (123,124), characterised by high visceral adiposity and disproportionately lower subcutaneous adiposity (124,125). The fat overflow hypothesis invoked to explain this phenotype (126) does not correlate with the anatomical and pathological consequences observed in association with the 'thin-fat' phenotype of the LBW infants studied here. Indeed, the lipid overflow/ectopic fat model states that excess visceral fat accumulation, while causally related to the features of insulin resistance, might also be a marker of a dysfunctional adipose tissue being unable to appropriately store the excess calories. According to this model, the body's ability to cope with the surplus of energy (resulting from excess caloric consumption, a sedentary lifestyle or a combination of both factors) might ultimately lead to metabolic syndrome presentation.

Fig. 2. Interplay between amino acid metabolism and redox homoeostasis. The synthesis and catabolism of amino acids is interwoven into redox homoeostasis. The malate-aspartate shuttle, besides transporting NADH between the cytosol and the mitochondrial matrix, also moves the amino acids glutamate and aspartate between the two compartments and is functionally connected to the TCA cycle. When aspartate is removed from this cycle to synthesise asparagine, arginine or nucleosides, this would disrupt the cycle, requiring additional carbon input. Coupling oxaloacetate (OAA) production in oxidative TCA cycle activity with glutamate production by mitochondrial glutaminase maintains flux of α-ketoglutarate into the TCA cycle while removing OAA to permit continued activity, producing reducing potential to generate ATP. In this way, two metabolites central to homoeostasis, ATP and aspartate, are synthesised in parallel. These links go further, given that aspartate, glutamate, α-ketoglutarate and malate are functionally coupled through the malate-aspartate shuttle, which is important for moving reducing potential between the matrix and the cytosol. NADH oxidation reactions and NAD+ reduction reactions, which affect the connectivity of this network, are shown in green and red, respectively. Amino acids are represented in blue, while proteins (transporters and electron carriers) are in orange. αKG, α-ketoglutarate; 1,3BPG, 1,3-bisphosphoglycerate; 3PG, 3-phosphoglycerate; 3PP, 3-phosphopyruvate; AcCoA, acetyl CoA; Ala, alanine; Asn, asparagine; Asp, aspartate; Cit, citrate; G3P, glyceraldehyde 3-phosphate; Gln, glutamine; Glu, glutamate; Gly, glycine; Lac, lactate; Mal, malate; NAD+, nicotinamide adenine dinucleotide; NADH, reduced NAD+; OAA, oxaloacetate; P5C, pyrroline 5-carboxylate; Pro, proline; Pyr, pyruvate; Ser, serine. Figure adapted from Vettore et al. (87).
There is evidence suggesting that if the extra energy is channelled into insulin-sensitive subcutaneous adipose tissue, the individual, although in positive energy balance, will be protected against the development of metabolic syndrome. However, in cases in which adipose tissue is absent, deficient or insulin-resistant with a limited ability to store the energy excess, the triacylglycerol surplus will be deposited at undesirable sites such as the liver, the heart, the skeletal muscle and in visceral adipose tissue, a phenomenon described as ectopic fat deposition. The resulting metabolic consequences include visceral obesity, insulin resistance, atherogenic dyslipidaemia and a pro-thrombotic, inflammatory profile (127). This clearly cannot be invoked to explain the thin-fat phenotype addressed here, characterised by elevated subcutaneous adiposity but very low visceral adiposity, exhibiting infantile insulin resistance, stunting and wasting, together with increased risk of developing cardiometabolic disorders in adulthood.

Roles of adipokines in the inception of the LBW 'thin-fat' phenotype

Adiponectin is an adipocyte-derived plasma protein with insulin-sensitising and anti-atherosclerotic properties. There is no correlation between cord adiponectin levels and maternal body mass index, cord leptin or insulin levels, and there is no correlation between cord and maternal adiponectin levels. However, high cord blood adiponectin levels, compared with serum levels in children and adults, positively correlate with foetal birth weights. Taking fat mass-related parameters such as the birth weight/birth length ratio into consideration, plasma adiponectin concentrations exhibit a significant inverse correlation with insulin concentrations. The high adiponectin levels in newborns may be due to lack of negative feedback on adiponectin production resulting from lack of adipocyte hypertrophy, low percentage of body fat or a different distribution of fat depots in the newborns as compared to children and adults (128)(129)(130)(131). This indicates that adiponectin in cord blood is derived from foetal and not from placental or maternal tissues. During pregnancy, leptin and adiponectin seem to act in an autocrine/paracrine fashion on the placenta and adipose tissue, playing a role in the maternal-foetal interface and contributing to glucose metabolism and foetal development (132). Hence, the low cord blood adiponectin levels observed in LBW births and their correlation with hyperinsulinaemia and differential distribution of fat depots giving rise to the newborn's thin-fat phenotype clearly suggest that the metabolic skewing resulting from maternal malnutrition and anaemia induces significant changes in foetal metabolism.

Fig. 3. Carnitine biosynthesis. In mammals, certain proteins such as calmodulin, myosin, actin, cytochrome c and histones contain N'-trimethyl-lysine (TML) residues. N-methylation of these lysine residues occurs as a post-translational event. This reaction is catalysed by specific methyltransferases, which use S-adenosyl methionine as a methyl donor. Lysosomal hydrolysis of these proteins results in the release of TML, which is the first metabolite of carnitine biosynthesis. TML is first hydroxylated on the three-position by TML dioxygenase (TMLD) to yield 3-hydroxy-TML (HTML). Aldolytic cleavage of HTML yields 4-trimethylaminobutyraldehyde (TMABA) and glycine, a reaction catalysed by HTML aldolase (HTMLA). Dehydrogenation of TMABA by TMABA dehydrogenase (TMABA-DH) results in the formation of 4-N-trimethylaminobutyrate (butyrobetaine). In the last step, butyrobetaine is hydroxylated on the three-position by γ-butyrobetaine dioxygenase (BBD) to yield carnitine. Figure adapted from Vaz and Wanders (93).
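The carnitine biosynthesis sequence described above is strictly linear (TML → HTML → TMABA → butyrobetaine → carnitine), which makes it easy to summarise programmatically. The following sketch is illustrative only: enzyme abbreviations follow the figure legend, while the data structure and the `trace` helper are ours.

```python
# Illustrative sketch of the carnitine biosynthesis pathway described above.
# Enzyme abbreviations follow the figure legend; the representation is ours.
CARNITINE_PATHWAY = [
    # (substrate, enzyme, product)
    ("N-trimethyl-lysine (TML)", "TML dioxygenase (TMLD)",
     "3-hydroxy-TML (HTML)"),
    ("3-hydroxy-TML (HTML)", "HTML aldolase (HTMLA)",
     "4-trimethylaminobutyraldehyde (TMABA)"),  # also releases glycine
    ("4-trimethylaminobutyraldehyde (TMABA)", "TMABA dehydrogenase (TMABA-DH)",
     "butyrobetaine"),
    ("butyrobetaine", "gamma-butyrobetaine dioxygenase (BBD)",
     "carnitine"),
]

def trace(pathway, start):
    """Follow a linear pathway from `start`, returning the ordered metabolites."""
    metabolites = [start]
    for substrate, _enzyme, product in pathway:
        if substrate == metabolites[-1]:
            metabolites.append(product)
    return metabolites
```

Tracing from TML yields the five metabolites of the legend in order, ending at carnitine; any enzymatic block (for example, insufficient lysine-derived TML upstream) truncates the chain at the corresponding step.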
Cumulative effects of maternal malnutrition and placental alterations in the development of the LBW 'thin-fat' phenotype

Maternal nutrition, particularly micronutrients, vitamins and omega-3 FAs, plays a role in modulating the activity of peroxisome proliferator-activated receptors (PPARs) during placentation and angiogenesis, which affects placental and foetal growth (133). In placental angiogenesis, PPARγ signalling causes increased vascular endothelial growth factor receptor 2 (VEGFR2) expression. VEGF binding to VEGFR2 then mediates angiogenic signalling involving increases in NOS activity (134,135). Hcy suppresses PPARγ signalling and expression (136) while impeding NO production. Hence, during pregnancy, the multiple methylation network dysregulations resulting from micronutrient and vitamin deficiencies are likely to primarily result in placental vascularisation defects, while endothelial ER stress resulting from chronically elevated Hcy levels (137) is in turn likely to result in elevated placental leptin production. These factors, together with the low circulating levels of vitamins, micronutrients and amino acids and high TGs, are now likely to have serious consequences upon foetal development, predisposing the newborn to insulin resistance, dyslipidaemia (138) and preferential subcutaneous adipocyte patterning, whereas imbalance in the availability of amino acids and low T3 hormone production, resulting from selenium deficiency, will promote growth retardation (73).

Differential adipocyte patterning and dyslipidaemia mechanisms in the development of the LBW 'thin-fat' phenotype

In mammals, individual white and brown adipose tissue (WAT and BAT, respectively) depots appear at different times in development and have unique functional characteristics.
The distinction between subcutaneous and visceral fat may be oversimplified because evidence suggests that metabolic properties vary between some visceral fat depots, while heterogeneity exists even within a single fat depot (139). Furthermore, metabolic and environmental challenges highlight the extraordinary plasticity of the mammalian adipose organ. Two distinct subtypes of preadipocytes have been characterised in human fat (Myf5+ and Myf5−), the proportions of which vary among depot locations (140,141). Despite the heterogeneity in the adipocyte precursor cell compartment, it appears that the Myf5+ lineage may selectively differentiate in the BAT, subcutaneous and retroperitoneal WAT (sWAT and rWAT, respectively), while Myf5− lineages selectively give rise to most adipocytes in the inguinal and visceral WAT (ingWAT and vWAT, respectively) (142). Up-regulation of the PI3K-Akt-mTORC1 pathway (PTEN silencing) dramatically redistributes body fat such that interscapular WAT (iWAT), sWAT and rWAT (the Myf5+ lineage depots) expand, while the ingWAT and vWAT (the Myf5− lineage depots) disappear (142). In other words, the adipocytes of the Myf5+ lineage expand (causing lipohypertrophy of BAT, sWAT and rWAT) at the expense of the Myf5− lineage (i.e. inguinal and visceral WAT). Hence, the thin-fat phenotype of LBW infants, characterised by increased subcutaneous fat but decreased intra-abdominal fat, is clearly indicative of metabolic skewing towards Akt-mediated mTORC1 up-regulation, probably as a result of elevated Hcy and insulin supply, concurrently with foetal hypoxia and oxidative stress resulting from placental dysfunction cumulatively with significant maternal anaemia, during in utero development (143).
As a direct consequence, adipogenesis (lineage commitment, clonal expansion and terminal differentiation of preadipocytes) and lipogenesis in adipose tissue will be promoted, while β-oxidation and ketogenesis will be attenuated, leading to dyslipidaemia and insulin resistance (144), a situation which shall remain dominant after birth. Should the affected individual then be exposed to chronic malnutrition, these developmental phenomena will have predisposing effects towards stunting/wasting and the development of cardiovascular diseases later in life.

Post-birth nutrition and stunting/wasting in 'thin-fat' phenotype individuals

Following weaning, LBW infants are subjected to the same restrictive dietary conditions experienced by their parents, namely malnutrition characterised by diets low in proteins, vitamins and micronutrients but rich in carbohydrates and saturated fats, leading to metabolic dysfunctions, including significant anaemia, which will be considerably worsened by increased consumption of low-quality processed products rich in saturated fats, salt and sugars and low in vitamins and micronutrients (ill-nutrition, or so-called junk food). In this context, the deficiencies in micronutrients and metabolic cofactors, and in particular selenium and vitamins, appear to play a key role. Recurrent nightly leg muscle cramps, muscle weakness and fatigue are indicative of significant L-carnitine deficiency (145) and subsequent dysregulation of FA metabolism (see earlier), non-ketotic hypoglycaemia and muscle wasting (146). The latter effect stands to be further reinforced by selenium deficiency. The links between selenium bioavailability, which in muscles plays a key role in oxidative stress defence and calcium transport control, and the development of nutritional muscular dystrophies affecting cardiac and skeletal muscles have long been established both in human subjects and livestock.
Skeletal muscle degeneration leads to muscle weakness or stiffness, postural instability or walking disability, while cardiac muscle degeneration is associated with respiratory distress, cardiogenic shock, enlarged heart, cardiac arrhythmias, congestive heart failure and ultimately sudden death (147). Concurrently, selenium deficiency will also result in deficient thyroid hormone supply (73) which, in conjunction with malnutrition, will promote stunting. It is important to note that the results of malnutrition, such as deficiencies in amino acids, vitamins and micronutrients together with an oversupply of saturated fats, do not lead to the full inhibition or full activation of metabolic reactions or pathways. These deficiencies and/or oversupplies are not absolute but only relative and act as 'rheostats', slowing down or facilitating particular metabolic reactions or pathways. These will in turn have discrete metabolic skewing effects, the cumulative result of which will swing development in a particular direction while opening the door to predisposing effects towards context-dependent pathologies over longer periods.

Conclusion

According to the earlier analysis, significant maternal malnutrition leads to deficiencies in amino acids, vitamins and micronutrients together with an oversupply of saturated fats. These deficiencies result in alterations affecting key maternal metabolic processes, and in particular the amino acid interconversion, transmethylation, FA oxidation and redox control pathways. These alterations, together with the deficiencies in micronutrients, result in high maternal anaemia, exacerbated during pregnancy, and placental dysfunctions. The ensuing alterations in oxygen and nutrient supply to the embryo give rise to in utero foetal metabolic alterations. This results in LBW children characterised by high subcutaneous but low abdominal adipocyte deposits and already existing insulin resistance (Fig. 5).
Being raised in the same environment as their parents promotes significant anaemia, accompanied by stunting and wasting in late childhood, repeating the same cycle as in their parents. Should these individuals shift, during mid to late childhood, to a food environment which primarily consists of processed foods low in proteins, vitamins and micronutrients but rich in saturated fats, salt and sugar (so-called junk food), their condition stands to be worsened by the appearance of significantly increased insulin resistance and the pathogenesis of cardiovascular and metabolic disorders. These individuals, after reaching sexual maturity, are now likely to perpetuate this deleterious cycle via their own children. However, attempts to remedy this situation must take into consideration the history of Indian diets. This history demonstrates gradual transitions over the centuries from a low-energy diet of large quantities of indigestible fibre carbohydrate, small amounts of digestible carbohydrate, moderate fat and moderate protein, to an increasing intake of low-fibre and refined carbohydrates associated with increased fat and decreasing intake of animal proteins, interspersed with variable periods of starvation. There were fourteen recorded famines in India between the 11th and 17th centuries, while those that took place over the course of the 18th, 19th and early 20th centuries resulted in more than 60 million deaths (148). Currently, food intake patterns show that most Indians are vegetarians consuming poor and monotonous cereal-based diets and that food items rich in micronutrients (pulses, other vegetables, fruits, nuts, oilseeds and animal foods) are generally consumed less frequently (149).
However, from 1947 onwards there has been an increase in the frequency of intake and quantities of low-fibre and refined carbohydrates, with protein intake improving only marginally, while intakes of industrially processed foods containing high salt, high saturated fats and high sugar but low micronutrients kept increasing (148,150,151). Hence, Indian populations are most probably genetically as well as epigenetically adapted to forms of 'chronic malnutrition' characterised by low energy, low refined carbohydrates, low protein and moderate fat intake with seasonal variation in micronutrient intake (thrifty metabolism). Hence, it is most probably the metabolic consequences of the recent and rapid dietary shifts, overlaid on an adaptation to 'thrifty metabolism', which must be addressed. The task is made more challenging by the complexity of the political, economic, climatic, social and cultural factors intertwined here. The resolution of malnutrition-associated problems will require a product well suited to the cultures addressed and developed in light of the actual needs as depicted by the biological evidence. To alleviate the consequences of malnutrition, improvements in affordability, accessibility, delivery to end-users, knowledge and awareness of social and cultural constraints and, most importantly, in the convergence between demand and supply must also be addressed. The solution therefore cannot merely consist in providing a 'one size fits all' supplement but in the use of a well-designed supplement which suits the nutritional requirements, along with a social and behaviour change approach, training the beneficiaries in the basics of nutrition based on what is locally available and accessible.

Fig. 5. The mechanisms whereby maternal chronic malnutrition compounded by ill-nutrition leads to the 'thin-fat' phenotype in newborns.
Maternal diet chronically low in proteins, micronutrients and carbohydrates but high in fat, compounded by deficient hygiene (left panel), leads to multiple maternal metabolic alterations resulting in high maternal anaemia, exacerbated during pregnancy, together with placental dysfunction (middle panel). Subsequently, these maternal metabolic and placental dysfunctions induce in utero foetal development alterations resulting in low-birth-weight children, characterised by high subcutaneous but low abdominal adipocyte deposits and already existing insulin resistance ('thin-fat' phenotype, right panel). Being raised in the same environment as their parents promotes significant anaemia, accompanied by stunting and wasting in late childhood, repeating the same cycle as their parents. Should these children then be exposed, during mid-to-late childhood, to a food environment primarily consisting of highly processed products low in vitamins and micronutrients but rich in salt, sugar and fat (ill-nutrition, so-called junk food), their condition stands to be worsened by the appearance of significantly increased insulin resistance and heightened susceptibility to the pathogenesis of cardiovascular (CVD) and metabolic disorders. These individuals, after reaching sexual maturity, are likely to perpetuate this deleterious cycle via their own children. This work was supported by a grant (No. JCB/2019/000013) from the Science and Engineering Research Board, Government of India. P.P., F.I. and S.G. conceived and coordinated the study. P.P. collected and formatted the data, F.I. performed the systems analyses, and S.G., P.P. and F.I. wrote the manuscript. Ethical standards disclosure is not applicable; the analysis was performed using data from previously published studies.
ANALYZING THE POSTPONEMENT OF TIME PRODUCTION SYSTEMS IN MAKE-TO-STOCK AND SEASONAL DEMAND The supply chain management, postponement and demand management functions are of strategic importance to the economic success of organizations because they influence the production process; viewed in isolation and only empirically, their behaviour may be hard to understand. The aim of this paper is to analyze the influence of postponement in an enterprise production system with make-to-stock and seasonal demand. The research method used was a case study; the instruments of data collection were semi-structured interviews, documentary analysis and site visits. This research is restricted to an analysis of the influence that different levels of postponement and the company's position in the supply chain have on the practice of demand management in the graphic production segment (the spiral notebook product), with a geographical focus on the region of the state of São Paulo, in which managers and directors were interviewed. To support the case-study analysis and the final considerations, the following topics are discussed: supply chain management, postponement, demand management and the make-to-stock production system. Demand management can be understood as a practice that allows one to manage and coordinate the supply chain in reverse, i.e. from the consumer to the supplier, in which consumer-triggered actions for the supply of products can make the process more efficient. The purpose of supply chain management is to allow the addition of value and exceed the expectations of consumers, for which it is necessary to develop win-win relationships with suppliers and customers. The postponement strategy must fit the characteristics of the turbulent environment within the markets, along with demands that require a variety of customized products and services at reasonable costs, aiming to support decision making.
The postponement of time can be a way to soften the increase in inventory of finished product in the company, which may have a high value; it makes it necessary to reduce lead time and may also require suppliers to change their production strategy from make-to-stock to make-to-order. The make-to-stock production system is of interest to organizations operating in markets with high demand variability, i.e. seasonal variation, as a way of trying to protect their production and be more responsive to market needs.
INTRODUCTION Organizations today must be concerned not only with their production but also with the demand for their products in the consumer market and with their supply chain, since without this tripod they cannot create a differential in relation to their competitors. The differentials that the consumer is always looking for and demanding are price, quality and availability. How can production costs be reduced without affecting the quality and availability of the product on the market? Through the inventory postponement strategy. This strategy, coupled with demand management, may allow a reduction in product cost, since it reduces the risk of loss of finished product, which has a high added value, and increases flexibility in adapting to market needs. Independent Journal of Management & Production (IJM&P) v. 2, n. 2, July-December 2011, ISSN: 2236-269X, DOI: 10.14807/ijmp.v2i2.23. According to Edalatkhah (2006), in the new economy, supply chains are needed to address various markets around the world, setting up delivery of customized products and planning for change, with speed and precision never before considered possible. Managers need to work with various partners to monitor the activities being performed together, in order to solve problems and delays that may occur. According to The Global Supply Chain Forum, "the supply chain management is the integration of key processes, from consumers to producers of raw materials.
SCM involves many areas such as demand forecasting, procurement, manufacturing, distribution, inventory and transport, interacting across strategic, tactical and operational perspectives (MCAD; MCCORMACK, 2001). According to Tan (2002), SCM involves the integration of business processes through the supply chain, including the coordination of activities and processes not only within a single organization, but of all that make up the supply chain. According to Nascimento Neto, Oliveira and Ghinato (2002), CPFR is a tool to facilitate collaborative planning among the participating companies through the reduction of inventory levels combined with improved service levels, in order to address issues such as: the influence of promotions on sales forecasting and inventory management; influences of changes in the pattern of demand; supply inventory to ensure availability of products on the shelf; enabling greater coordination between enterprises in the chain; and allowing greater synchronization between the various industrial manufacturing and forecasting processes. CPFR can be defined as a set of rules and procedures established by the Voluntary Interindustry Commerce Standards (VICS) committee, founded in 1986 with the aim of increasing the efficiency of supply chains, specifically in the retail sector; these standards aim to facilitate physical and information flows (NASCIMENTO NETO; OLIVEIRA; GHINATO, 2002).
For Rodrigues and Oliveira (2009) and Widiarta and Berghen (2004), there are several assumptions that must be considered in the modeling of demand management:
a) There is no possibility of coordinated replenishment. It is assumed that all suppliers are ordered from independently;
b) It is assumed that demand is stochastic and independent, with a known probability distribution;
c) The parameters are stationary. The parameters in the systems are updated only occasionally, and the general trend in demand for the product is roughly constant;
d) There are multiple items with limited storage capacity. The storage limitation is represented by the number of available pallets and shelves that can be used for a particular item;
e) The demand is seasonal in some cases. The monthly demand for a finished product has a large percentage of zero values (often 30 percent or more), randomly mixed with values greater than zero;
f) The replenishment lead time is always constant during a predetermined period. Therefore, if two or more replenishment orders are simultaneously outstanding, they should be processed in the same order in which they are placed; in other words, orders may not cross;
g) There is no quantity discount with regard to the quantity ordered by the company; and
h) There is only one supply point. The products are provided at the same place and share a common inventory facility.
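Assumptions b), e) and f) above (stochastic demand, monthly series with 30 percent or more zero values, seasonality) can be made concrete with a small simulation. The sketch below is illustrative only: the normal distribution, the sinusoidal seasonal shape and every parameter value are assumptions chosen to mimic the pattern the authors describe, not data from the case company.

```python
import math
import random

def simulate_monthly_demand(months=36, zero_share=0.3, base=1000,
                            seasonal_amp=0.5, seed=7):
    """Toy intermittent, seasonal monthly demand series: each month is zero
    with probability `zero_share`; otherwise demand is drawn around a
    seasonal mean (simple yearly cycle)."""
    rng = random.Random(seed)
    series = []
    for m in range(months):
        if rng.random() < zero_share:
            series.append(0)  # assumption e): frequent zero-demand months
        else:
            mean = base * (1 + seasonal_amp * math.sin(2 * math.pi * (m % 12) / 12))
            series.append(max(0, int(rng.gauss(mean, base * 0.2))))
    return series

demand = simulate_monthly_demand()
zeros = sum(1 for d in demand if d == 0)
print(f"{zeros}/{len(demand)} zero-demand months")
```

Running such a simulation against a chosen reorder policy is one way to test assumption f) (constant lead time, no order crossing) before committing to a forecasting model.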
For Lambert, Cooper and Pagh (1998), demand management is one of the eight supply chain processes and requires interfaces with the other seven, as shown in Figure 1. We describe the processes that make up the strategic and operational management of demand, including sub-processes and activities. Moreover, one can identify interfaces with the other management processes, the supply chain and other companies. Van Hoek and Dierdonck (2000), Verol (2006) and Zhang and Tan (2010) also discuss the postponement strategy. According to Wallin, Rungtusanatham and Rabinovich (2006), Bailey and Rabinovich (2006) and Drohomeretski, Cardoso and Costa (2008), the strategy of time postponement assumes that the product will only be requested from the supplier when a client request arises, which enables the reduction of inventory levels and product obsolescence. Through semi-structured interviews with the industrial director, the managers of the industrial units and the manager of PCP (production planning and control), we could examine the reports of production orders, inventory and production lead time. The on-site visits were conducted with the oversight of some employees. The company currently works with five product categories, each with peculiarities relating to inventory management, classified as follows:
a) Appointment books: products that have an expiration date, or are dated, and are seasonal; their peak runs from April to December, with production growing gradually;
b) School: being sold for back-to-school, they are seasonal, with a peak from July to December;
c) Office: aimed at small businesses, professionals and offices. As a way of illustrating the manufacture of college notebooks, a sequence of pictures is shown, starting with the processing of the paper rolls that arrive at the factory, each weighing about a ton.
These rolls are placed on ruling machines ('pautadeiras'), which begin the manufacturing of the notebooks. The process is fully automated. The first step is the ruling of the sheets, the blue lines that usually guide linear writing; after being ruled, the paper is cut into sheets, which will form the core of the notebook. The sheets are merged and run on a conveyor belt, where they receive the dividers and tabs that go inside the notebook. Next, the cores are cut and drilled and wait to receive the cover: the adhesive pallet, plastic cover and back cover. At this moment the notebook is spiral-bound; it then goes into its box and, once boxed, follows to its destination. The company has the capacity to produce about 400,000 notebooks per month. To meet the back-to-school season of the school year in Brazil, which runs from January to March, production begins in September. At the end of the back-to-school period, all production is focused on meeting the demand of the Northern Hemisphere. The product models that compose the product lines for the year 2011 number approximately 1,500, divided according to Table 1, which also shows their representativeness in the numbers of items. Figure 4 presents, in graphic form, the distribution of product models and their representativeness. The models that comprise the product lines are not managed in a single way: first, because the company has developed a planning approach per product line and, second, because it respects the seasonality and the criticality of each product category. In the school category, of the 700 models produced, about 300 are exported, serving both external customers and the holding's companies (so-called intercompany sales). The delimitation of the study is the company's school product line. Being a line with marked seasonality whose adopted production system is make-to-stock, we review the manufacturing process of the spiral notebook product model.
The production strategy employed is make-to-stock, since the spiral notebook is subject to the seasonality of the back-to-school period at the beginning of the school year, and production capacity cannot immediately meet all the orders generated in that period. Based on seasonality, production capacity and information systems, management, with the support of the marketing, sales, production, finance and supplies areas, formulates the demand forecasts. Two ways of planning production are adopted: the first is through planning of materials, inventory, production routings and production times. The spiral notebook, having high raw-material turnover, usually has relatively few weeks of raw material in stock; the FIFO concept is used for raw paper, corrugated cardboard, paint and varnish, and LIFO for other materials (wire, plastic accessories), both in inventory management and in accounting. Because the volume of material is very large and the warehousing area is not proportional to it, stock coverage varies between three and six weeks. For the finished product, manufactured to stock and stored at the distribution center, the FIFO concept should be adopted; finished products are packed in boxes, palletized and then put on shelves, which should be labelled for identification. The PCP team's main function is to analyze the quantity of raw materials held in stock, the productive capacity, and what should be produced and when, triggering raw-material purchases. A dedicated sales-forecasting team observes the market, makes the sales forecast for the next month and, from this information, determines how much should be produced.
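The FIFO/LIFO distinction described above can be sketched as two queue disciplines over purchase lots. The lot sizes and unit costs below are invented for illustration; this is a minimal sketch of the accounting difference, not the company's actual system.

```python
from collections import deque

def consume(lots, qty, discipline="FIFO"):
    """Consume `qty` units from a list of (units, unit_cost) lots and
    return the cost of goods consumed under FIFO or LIFO."""
    queue = deque(lots)
    cost = 0.0
    while qty > 0:
        # FIFO takes the oldest lot; LIFO takes the newest
        units, unit_cost = queue.popleft() if discipline == "FIFO" else queue.pop()
        take = min(units, qty)
        cost += take * unit_cost
        qty -= take
        if units > take:  # put the partially used lot back
            leftover = (units - take, unit_cost)
            if discipline == "FIFO":
                queue.appendleft(leftover)
            else:
                queue.append(leftover)
    return cost

# hypothetical paper purchases: three lots at rising prices
lots = [(100, 2.0), (100, 2.5), (100, 3.0)]
print(consume(lots, 150, "FIFO"))  # oldest, cheapest lots first -> 325.0
print(consume(lots, 150, "LIFO"))  # newest, dearest lots first -> 425.0
```

With rising input prices, FIFO reports a lower cost of goods consumed and a higher-valued remaining stock, which is one reason the choice of discipline matters for accounting as well as for physical stock rotation.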
From the general sales plan, the PCP makes the production plan, capacity analysis, analysis of critical resources, materials analysis and critical analysis of the best routing, setting a schedule with shorter horizons, i.e. a monthly schedule, as a way of measuring the volume produced, which signals the volume of raw materials to purchase and the machine downtime due to lack of material, setup and maintenance. This check is done weekly. The monthly schedule is divided into four weeks and passed to the factory as weekly plans, in which the material to be used is studied, manufacturing routings are defined and possible slack is identified. The monthly schedule also plans setup times for equipment and assemblies, as a way to make better use of downtime. It can be observed that the postponement of inventory and production occurs across the board: the product will only begin to be produced when the average stock of covers and back covers allows. Although such product postponement exists, it was observed that the models are designed up to 6 months in advance, so that purchase, payment and delivery of raw materials can be planned. In the postponement process there is therefore a concern to schedule raw-material deliveries very thoroughly throughout the production period, since delivery time may vary with the production period, as shown in Table 2 and Figure 5. Regarding postponement in a customer-focused process, it became clear that no such procedure exists, because the spiral notebook has a relatively high demand of about 35 million units and clients and consumers are not willing to await the product's arrival, wanting to buy and consume immediately upon purchase.
FINAL REMARKS The method of deciding the level of postponement of the stocks of raw materials and finished products can be improved, since the decisions are linked to the executives' feeling about the attitude of the consumer market.
It was noted that CPFR may be a tool being used wrongly, since there is no scientifically reliable method to validate the information given by the customer and the tier-1 supplier. The formulation of production needs may be influenced by the interests of suppliers and departments, with postponement badly estimated, since control over information is not so rigorous. The PCP team could be more comprehensive and collaborative with the departments, thus influencing the definition of the ABC classification of products, creating benchmarks and measuring productivity and inventory postponement. As a way to improve demand forecasting and supply chain management, Vendor Managed Inventory (VMI) could be adopted, allowing the practice of shipping raw materials and finished products on the date and in the amount required at the client level, with joint management of reserves. With the use of CPFR, VMI and QR as complements to form and time postponement, the company will better manage the risks of uncertainty and enable the application of statistical tools. As a complement to CPFR and VMI, quick response (QR) could be adopted, encouraging customers/consumers to exercise effective supply chain management and, consequently, improving order management, spare inventory, handling, transportation and the exchange of information, thus reducing costs and improving postponement levels.
Figure 1: Supply Chain Management: Implementation Issues and Research Opportunities. Source: Adapted from Lambert, Cooper and Pagh (1998). This activity manages the integration between supplier, business and consumer, and is responsible for the proper planning of all the demands generated, external or internal, with the aim of balancing what the supplier can deliver, what production can do and what the market needs (FAVARETTO, 2001). According to Croxton et al. (2001), the process of demand management has strategic and operational elements, as shown in Figure 2. In the strategic process, the team provides a structure for managing the process. The operational process is the ongoing updating of demand management. The implementation of the strategic process is a necessary first step for integrating the company with other members of the supply chain. Figure 2: The Supply Chain Management processes. Source: Adapted from Croxton et al. (2001). Daru and Lacerda (2005) describe manufacturing for stock as a common practice in which one can forecast demand and take advantage of seasonal peaks, using resources better and loading them in a more balanced way. The concept of postponement is that risk and uncertainty are the costs of the differentiation (form, place or time) of products that occurs during manufacturing, storage and delivery activities, being based on the characteristics of the product/process in the supply chain: (a) product design: the specific content of the postponed (delayed) operation; (b) process: the time when the activities are delayed in the process; and (c) place: the location where the delay happens. Ng and Chung (2008) comment that, for the strategic placement of the decoupling point in the supply chain, the postponement strategy can be used. The purpose of postponement is to increase the efficiency of the supply chain, moving product differentiation (at the decoupling point) closer to the end user, because the risk and
uncertainty are the costs of goods differentiation, and differentiation could occur in the product itself and/or in the geographical dispersion of inventories. Yang and Burns (2003) describe three different supply chain strategies which are closely related to the concept of postponement. The dotted line in Figure 3 reflects how the delay is associated with the customer decoupling point, at which the end customer influences the supply chain, distinguishing forecast-driven activities. Form postponement consists of manufacturing a product on a standard basis in sufficient quantities to achieve economies of scale, while the finishing characteristics, such as colour, packaging, etc., are postponed until customer orders are received; it is classified into four levels: tagging or labelling, packaging, assembly and manufacturing (FERREIRA; BATTLE, 2007). Mendes et al. (2008), based on Zinn (1990), describe the existing subdivisions of postponement, of which a brief definition follows.
a) Postponement of labeling: The products are stored without any kind of classification. Labels and tags are applied when a request is made, and the client specifies the brand that will identify the final product;
b) Postponement of manufacture: The last manufacturing steps are completed only after confirmation of the customer's request. Semi-finished products, or even inputs, are stored until the differentiation of the commodity occurs at a time or location nearest to the demand;
c) Postponement of product: The products can be designed according to a modular logic, or with standardized components, to facilitate further differentiation; and
d) Postponement of process: Production and distribution can be designed in ways that allow product differentiation downstream and upstream in the supply chain.
Yang, Burns and Backhouse (2003) and Engelseth (2007) discuss place postponement, which involves delaying freight movement down the chain until orders are received, thus keeping the goods centrally rather than in one specific place. Figure 3: Postponement strategies and different supply chains. Source: Adapted from Yang and Burns (2003). They do not have a seasonal pattern, being produced almost all year round; d) Home: products that have a very distinct feature, for domestic use. They do not have a seasonal pattern, being produced almost all year. It is a very specific niche, exploited in the United States, that is being developed in Brazil, where the company is developing products and doing some benchmarking to detect their level of acceptance in the Brazilian market; and e) Party cards and envelopes: this new product line was incorporated into the production process through the acquisition of another company
located in São Paulo. According to the board, this product line came to complete the product mix, in alignment with the strategies adopted by the U.S. group. This product has a seasonal pattern, with relatively small gaps between commemorative periods. Figure 4: Representativeness (%) in the number of models and in exports. Table 1: Representativeness of the models and exported products. Table 2: Variation in days for delivery of raw materials. Figure 5: Graph of variation in days for delivery of raw materials.
On the fundamental group of complete manifolds with almost Euclidean volume growth It is proved that the fundamental group of a complete Riemannian manifold with nonnegative Ricci curvature and certain volume growth conditions is trivial or finite. Introduction Throughout the paper M denotes a complete noncompact Riemannian n-manifold with nonnegative Ricci curvature. Let V_p(r) be the volume of the metric ball B_p(r) centered at p with radius r in M. By Bishop-Gromov volume comparison, V_p(r)/(ω_n r^n) is a decreasing function of r, where ω_n is the volume of the unit ball in R^n. So the limit exists as r goes to infinity. Denote the volume growth of M by α_M = lim_{r→∞} V_p(r)/(ω_n r^n). The invariant α_M is independent of p and is thus a global geometric invariant. Moreover, the volume comparison also implies that 0 ≤ α_M ≤ 1, and α_M = 1 if and only if M is isometric to R^n. We say that M has Euclidean volume growth (or large volume growth) if α_M > 0. The volume comparison theorem implies that V_p(r) ≥ α_M ω_n r^n for all r > 0, so the Euclidean volume growth condition implies that V_p(r) grows at the Euclidean rate r^n. The main result of this note is Theorem 1.1. Given n, there is a constant C(n) < 2^n such that if an open n-manifold M satisfies 1) condition (1.1), for some p ∈ M and all r > 0, then M is simply connected; 2) condition (1.2), for some p ∈ M, then the fundamental group π_1(M) is finite. We should note that even though M has Euclidean volume growth, one cannot deduce that M is simply connected; so condition (1.1) holding for all r > 0 is important. Set ǫ(n) = n − log_2((C(n)+2^n)/2). We see immediately that 2) of Theorem 1.1 implies the following Corollary 1.2. Given n, there is a constant ǫ(n) such that if an open n-manifold M satisfies condition (1.3), then π_1(M) is finite. This shows a gap phenomenon for a well-known result of Peter Li [3] and Anderson [1], which states that π_1(M) is finite provided M has Euclidean volume growth. On the other hand, Anderson has proved (see Theorem 1.1 in [1]) that under condition (1.3), every finitely generated subgroup of π_1(M) is actually of polynomial growth of order ≤ ǫ < 1.
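The constraint C(n) < 2^n is natural here: in R^n one has V_p(r) = ω_n r^n, so the doubling ratio V_p(2r)/V_p(r) equals exactly 2^n, and by Bishop-Gromov a manifold with nonnegative Ricci curvature can never exceed this. A quick numerical check of the Euclidean case (a sketch for orientation only, not part of the proof):

```python
import math

def unit_ball_volume(n):
    """omega_n: volume of the unit ball in R^n, pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def euclidean_doubling_ratio(n, r):
    """V_p(2r)/V_p(r) in Euclidean R^n; the r^n factors cancel to give 2^n."""
    vol = lambda rho: unit_ball_volume(n) * rho ** n
    return vol(2 * r) / vol(r)

for n in (2, 3, 7):
    print(n, euclidean_doubling_ratio(n, 1.5), 2 ** n)
```

The ratio is independent of r, which is why a hypothesis of the form "doubling ratio above a threshold C(n) < 2^n" is a quantitative way of saying the volume growth is almost Euclidean.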
In [6] Bingye Wu proved that under condition (1.3), π_1(M) is finitely generated. But every infinite finitely generated group has polynomial growth of order at least 1 (I thank the referee for pointing out this fact; see Section 3). So Corollary 1.2 is also a consequence of Anderson's and Wu's results. Acknowledgment: I would like to thank the referee for his (her) invaluable suggestions. The referee's explanation clarified some of my understanding of Anderson's paper [1]. 2. A related volume ratio In this section we prove an estimate on the volume ratio V_p(2r)/V_p(r) related to certain generating elements of π_1(M) (Lemma 2.4 below). The main ingredients are Sormani's uniform cut lemma [5] and some ideas due to Shen [4]. 2.1. A uniform estimate. Let g ∈ π_1(M, p) and let π : M̃ → M be the universal cover. Following [5], we say that γ is a minimal representative geodesic loop (based at p) of g if γ = π∘γ̃, where γ̃ is a minimal geodesic connecting p̃ and gp̃. So L(γ) = d_M̃(p̃, gp̃). Given a group G, we say that {g_1, g_2, g_3, ...} is an ordered set of independent generators of G if every g_i cannot be expressed in terms of the previous generators and their inverses, g_1, g_1^{-1}, ..., g_{i-1}, g_{i-1}^{-1}. In [5] Sormani proved the following two lemmas. Lemma 2.1. (halfway lemma) There exists an ordered set of independent generators {g_1, g_2, ...} of π_1(M, p) with minimal representative geodesic loops γ_k of length d_k which are halfway minimizing, i.e. d(γ_k(0), γ_k(d_k/2)) = d_k/2. In particular, we have a sequence of such generators if π_1(M, p) is infinitely generated. Lemma 2.2. (uniform cut lemma) There is a constant S_n depending on n such that if x ∈ ∂B_{p_0}(rD), where r ≥ 1 + 2S_n, then d(x, γ(D)) ≥ (r − 1)D + 4S_n D. The main idea of the proof of the uniform cut lemma is to lift the geodesic loop to the universal covering space and carefully study the related excess function. It contains a nice application of Abresch-Gromoll's estimate on the excess function [2].
The above two lemmas allowed her to show that the Milnor conjecture holds for manifolds with so-called small linear diameter growth. Let γ be a minimal representative geodesic loop based at p with L(γ) = d, satisfying Lemma 2.1. The estimate below is important for our purpose. We also note that d_M(γ(d/2), σ(d)) ≥ d/2, so similarly one has h_2 ≥ S(n)d. It follows that h = min(h_1, h_2) ≥ S(n)d. 2.2. A volume ratio. Continuing with the notation p, d of Lemma 2.3, we shall prove Lemma 2.4. We have the following volume ratio estimate. Before giving the proof of Lemma 2.4, following [4] we introduce some necessary notation. Let Σ_p be a closed subset of the unit tangent sphere S_pM ⊂ T_pM. Let B_{Σ_p}(r) be the set of points x ∈ B_p(r) such that there exists a minimal geodesic γ from p to x with γ'(0) ∈ Σ_p. We write V_{Σ_p}(r) for the volume of B_{Σ_p}(r). We denote by Σ_p(r) the set of unit vectors v ∈ S_pM such that γ(t) = exp_p(tv) is minimal on [0, r). 3. A proof of Theorem 1.1 Suppose V_p(2r)/V_p(r) > C(n) for all r > 0; then there is no nontrivial generator satisfying Lemma 2.4, so M is simply connected. Thus the first part of Theorem 1.1 is proved. The proof of the second part of Theorem 1.1 is divided into two steps. Firstly, π_1(M, p) is finitely generated. We argue by contradiction: assume π_1(M, p) is infinitely generated; then by Lemma 2.2 there is a sequence of such generators, with the corresponding estimate holding for all k ≥ 1. This contradicts condition (1.2). Secondly, condition (1.2) implies that V_p(r) ≥ C · r^{n−ǫ} for some ǫ < 1 and sufficiently large r. So by Anderson's result [1], π_1(M) has polynomial growth of order ≤ ǫ < 1. An algebraic fact: If Γ is an infinite group with generators S = {g_1, ..., g_k}, then ♯U(r) ≥ r for all r ∈ N, where U(r) is the set of elements with word length ≤ r with respect to S. In particular, Γ has polynomial growth of order at least 1. (This proof was provided by the referee.) To see this we argue by contradiction.
Let r be the smallest integer such that ♯U(r) < r; then r − 1 ≤ ♯U(r − 1) ≤ ♯U(r) < r. This shows U(r) = U(r − 1); in other words, any word of length r can be expressed as a word of length ≤ r − 1. It follows that Γ = U(r − 1), which is finite, a contradiction. The second part of Theorem 1.1 follows from the above immediately.
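The referee's counting argument can be checked mechanically on a concrete infinite group. For Γ = Z with generating set S = {1}, the ball U(r) is {−r, ..., r}, so ♯U(r) = 2r + 1 ≥ r. The brute-force enumeration below (a sketch; elements of a free abelian group are encoded as integer tuples) confirms the strict growth ♯U(r) > ♯U(r − 1) that the contradiction argument exploits.

```python
def ball(generators, r):
    """Elements (as integer tuples) expressible as words of length <= r
    in the given generators and their inverses, in a free abelian group."""
    gens = list(generators) + [tuple(-x for x in g) for g in generators]
    elements = {tuple(0 for _ in generators[0])}  # identity: word length 0
    frontier = set(elements)
    for _ in range(r):
        # breadth-first layer: new elements one generator-step away
        frontier = {tuple(a + b for a, b in zip(e, g))
                    for e in frontier for g in gens} - elements
        elements |= frontier
    return elements

# Z with generator 1, encoded as 1-tuples: ball sizes are 2r + 1
sizes = [len(ball([(1,)], r)) for r in range(6)]
print(sizes)  # [1, 3, 5, 7, 9, 11]
```

The same function applied to Z^2 with generators (1, 0) and (0, 1) returns the l1-balls, whose sizes 2r^2 + 2r + 1 grow strictly as well, matching the fact that an infinite finitely generated group has ♯U(r) ≥ r.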
Discounting Females, Denying Sex, and Disregarding Dangers from Self-ID: A Reply to Mezey, Upadhyay, and Sherrick In this reply, I address responses to my article "Scrutinizing the Equality Act", where I argued that we should provide protections to LGBT+ individuals without prioritizing gender self-identification over biological sex. The bill would eliminate sex-based rights and the protected nature of women's provisions as a result of in-the-moment self-ID. Here, I reiterate my key points to correct misunderstandings and to refocus on the complex, contested issues raised in my article about balancing competing rights. I conclude by encouraging feminists to engage with the Equality Act's deficiencies. females to be largely free of the threat of men's violence and sexual objectification. Of course, some men will break rules about sex-separation; crucially, however, we can object to their presence and remove them if they do so. By eliminating females' right to exclude predatory males based on sex, the EA will eliminate these remedies to female disadvantages, which promote safety and equal social participation. Disregarding Self-ID, Ignoring Male Violence, and Inverting the Burden of Proof Mezey claims that I "did not make a sufficient case that the Act's interpretation [read: redefinition] of sex would harm women." Acknowledging that I did not suggest transwomen pose any particular danger, Mezey perceives my argument to be that 'the Act would allow, even encourage cisgender men to present as women and invade women's sex-segregated spaces to prey on them,' but '[t]here are ample data' from 'jurisdictions with anti-discrimination laws that cast doubt on these assertions.' Further, she suggests, my raising these concerns 'perhaps unwittingly…perpetuate[s] discrimination against the transgender community'. I address each point in turn. First, Mezey's claim that I fear that males will increasingly 'present as women' to access women's spaces is incorrect.
As I discussed in my article, predatory males do not have to present as women to self-ID into women's spaces; say-so is sufficient. Some transwomen have beards, wear suits, and present in stereotypically masculine ways; some of these transwomen are indistinguishable from predatory men, and women cannot tell the difference. But the point is, even if we could somehow differentiate transwomen from predatory males, under the EA it would not matter, because saying 'I identify as a woman' is sufficient to gain access to women's spaces. We are not a society that operates on trust and self-ID, which is why obviously adult customers have to show government ID to buy a beer, differently abled persons have to procure and display a permit to use parking spaces set aside for them, and, when access relates to concerns about safety, we employ onerous requirements for proof of who we say we are (e.g., for air travel, for driving licenses). I remain mystified that in this society my critics think allowing any male to self-ID into female spaces is not only acceptable policy, but that it is transphobic to raise concerns about it. To not think that predatory, voyeuristic males will use self-ID to prey upon women and girls is to disregard females, because of course they will; they already have. And, as I have noted, self-ID would also undermine the safety of transwomen. Second, I dispute Mezey's claim that I did not provide sufficient evidence of harm (a discussion that was extended with more examples in the online supplemental information). I presented evidence that gender self-ID has caused physical (sexual assault) and psychological (felt insecurity) harm to females, including violations sufficient to stimulate reversal of gender self-ID policies. However, I must ask: what constitutes sufficient evidence of harm? Male predatory violence against women is well established; indeed, that is a primary rationale for the sex separation of spaces.
What proportion of incarcerated females need say that it would undermine their psychological well-being, including felt security, to have to share a prison cell with a male? How many females need be sexually harassed or assaulted by males in prison for the harm to be sufficient? How many women need express that they want a female-only rape-crisis shelter in order for at least some to be deemed justifiably sex-separated? How many predatory males need use self-ID to prey on women and girls in formerly sex-separated spaces for this to be considered too great a cost, given alternatives (such as gendered spaces)? Once again, quoting Kathleen Stock (2018), I object to letting females "be the automatic collateral in sweeping social changes such as those proposed". Mezey states that '[t]here are ample data from jurisdictions with anti-discrimination laws that cast doubt on [my] assertions [read: concerns].' However, Mezey did not cite a single peer-reviewed study in support of this 'ample data' claim because robust scientific evidence does not exist. 2 This follows from the persistent failure to consider the policy impact on females. Government-funded studies have assessed difficulties that transwomen experience in sex-separated spaces, for example, and how this affects their safety and psychological well-being. Yet, no studies have been conducted to examine how women in prisons and shelters might be affected by gender self-ID. These studies are needed because the voices of these women are rarely heard. These women, who are disproportionately poor and minority and who have experienced high rates of male sexual violence, matter. Additionally, I object to Mezey's putting the burden of proof on females to prove that eliminating their sex-based provisions will cause sufficient harm.
Insofar as provisions are set aside for a protected group (females) to exclude another group (males), the burden of proof appropriately lies with those seeking exceptions to demonstrate that said exceptions do not increase harm. Instead, Mezey appears to be claiming that women should have to prove, in advance, the harms that may be caused by gender self-ID. This is inequitable and illogical, in my view. Finally, to Mezey's worry that my discussing the fact that some males will use gender self-ID to prey on females may 'perpetuate discrimination against transgender people', I can only say that this need not follow from recognition of facts about male predation (see my discussion of this point: Burt 2020, p. 392). Given that concerns about male predation are the primary justification for including transwomen in females' protected spaces, we agree that male predators pose a threat to safety. We disagree that only transwomen can raise concerns about male predatory behavior. Mezey also interprets me as holding the belief that females' "oppression is greater" than transwomen's oppression. This is not my argument. I noted that the incidents of harassment and violence against females are much more numerous than those directed against transwomen, but I acknowledged that this is a function of females' outnumbering transwomen by ~100-fold. Getting mired in an adjudication about which group is "more oppressed" is a futile distraction. Even if transwomen were "more oppressed" than females, it would remain unnecessary, and therefore morally objectionable, to sacrifice the safety of females in order to protect transwomen when alternatives exist that protect both groups. I outlined ways to extend such protections, recognizing that both groups are vulnerable and should be protected. 4 These proposals have been largely ignored by my critics. One objection raised is that gendered spaces would still leave transwomen with the pain of exclusion from women's spaces. First, this does not necessarily follow, as gendered or unisex spaces would be open to anyone.
Second, as I discussed, this potential pain of exclusion is counterbalanced by the harm, both psychological (loss of felt security and privacy) and physical (violence), from gender self-ID. In the end, given the reality of male sexual violence, I argue it is both ill-advised and morally indefensible to reject a compromise policy prioritizing safety at a cost to inclusivity in favor of a self-ID policy prioritizing inclusivity at a cost to safety.

Sex Denialism

Professor Upadhyay's (2021) response contains a rich description of the diversity of gender across cultures and contexts. Their critique is rooted in their claim that my article 'espouses transphobic discourses [that] invariably reproduce colonial and white supremacist frameworks of patriarchy and gender violence.' Deeming my approach 'transphobic' and 'cisnormative' 'white feminism', they argue I endorse a heteropatriarchal, racist, colonial view of gender as a binary. I do not. Nowhere have I suggested that gender is a binary. In fact, across several pages in my article (pp. 379-384), I make the opposite argument: that gender is a social construction imposed on male and female bodies that should be challenged. In my reading, there is much upon which Upadhyay and I agree: traditional notions of 'womanhood' and 'manhood' are artificial and restrictive, they perpetuate racist and heterosexist ideas that reinforce power structures, and there is no right way to be a woman or a man. Our disagreement is rooted in divergent views about the material reality of sex and the distinction between sex and gender (a topic discussed at length in my article but ignored in their response). I believe sexual dimorphism is real and sex matters in some contexts; they disagree. In Upadhyay's account, sex is replaced with socially constructed gender, and the reality of sexual dimorphism is transmuted into a belief in sexual dimorphism, which serves to perpetuate 'cis-white' heteropatriarchy.
We agree that hegemonic ideals of 'womanhood', which I explicitly reject, perpetuate inequalities. However, one can-and I do-reject hegemonic norms of femininity and masculinity as defining of 'womanhood' and 'manhood' without denying the material reality of sex and, by implication, the existence of sexism (see Jones 2020). Stripped of all its gendered trappings, male-female differences and females' reproductive burden remain. Upadhyay charges me with hypocrisy for expressing concerns for how a policy of gender self-ID would affect incarcerated females because incarcerated women are disproportionately racial/ethnic minorities. I do not follow their logic. Recognizing that disadvantaged, minority females are overrepresented in sex-separated spaces, such as prisons and shelters, and that, therefore, minority females will be made disproportionately vulnerable to predatory males by gender self-ID policies is not hypocritical, in my view. Finally, Upadhyay notes they "hopes that in the future, cis-white feminists reflect on their own colonial epistemological frameworks before they analyze trans peoples and issues and reinforcing [sic] transphobia". Why Upadhyay deems it appropriate to label me 'cis' is unclear, but it is clear that Upadhyay is suggesting that the replacement of sex with gender identity under the law is a 'trans issue' about 'trans people'. Manifestly, it is about sex and gender; however, since Upadhyay has dismissed the reality of sex as something socially constructed from gender, sex-based rights don't exist. Therefore, there is no conflict of rights between sex and gender identity. This is a matter about which we will have to agree to disagree. However, that self-ID undermines the safety of women (however defined) is a topic I hope we can agree is pressing and deserves more discussion. 
Spectral Sex, Identity Validation, and Unrealistic Solutions

In her response, Sherrick interprets my objection to the Equality Act's form as due to an 'expressed concern that in allowing gender to supersede sex, the Equality Act will endanger ciswomen'. She avers that I argued 'only ciswomen should be allowed to enter [women's] spaces.' 5 Then, emphasizing trans people's high 'victimization and suicide rates', Sherrick argues that gender should supersede sex 'to ensure trans people are protected and that their identities are validated'.

Correcting the Record

As I explained in my article, my concern is not merely with the safety of 'ciswomen' (incidentally, a term I do not use in my article). As I have noted, protected spaces cannot exist without gatekeeping. Unchallengeable gender self-ID policies allow male predators to opt in and thereby undermine the safety of everyone in those spaces (females and transwomen). If predation weren't sexed, this situation would be different; however, predation is sexed. Relevant facts Sherrick and others continue to neglect include that males account for >90% of sexual assaults and have, as a group, significantly greater size and strength, which allows them to overpower most females if they so choose. Females, as a group, are physically vulnerable to males, which is a primary rationale for sex-separated spaces and provisions. Sherrick ignores my policy suggestions to protect transwomen, including gendered spaces. Notably, I did not argue that all spaces should remain sex-separated; rather, I proposed the coexistence of gendered and sex-separated spaces, with the specification that for some provisions where sex can be salient (e.g., rape crisis or domestic violence shelters), at least one should be female only. Like transwomen, females have unique needs and experiences, and both groups would benefit from provisions that are set aside for them.
Still Two Sexes

Arguing that the 'science behind gender and sex' is 'neither simple nor binary', Sherrick objects to my recognizing that humans are a sexually dimorphic species. In support of her claims, she cites a widely promulgated (and panned) article by a freelance journalist (Ainsworth 2015), which depicts discrete differences in sexual development as existing on a continuum. As I discussed in my article, sex is not a continuum. People with DSD conditions are sexed. The spectral view of sex implies that people with DSD conditions are 'less female' or 'less male', an implication that is invariably neglected.

Safety First, Identity Validation (Maybe) Later

Sherrick claims I 'invalidate trans identities' when I point out that the so-called 'Wrong Body Model' (i.e., that a person might be a female trapped in a male body) is scientifically unsubstantiated. 6 The idea that objecting to the 'Wrong Body Model' is invalidating or transphobic is belied by the fact that prominent trans scholar-activists concur. Bettcher (2013), and others, conceive of this model as 'naturalizing sex and gender in a troubling way,' even as 'frightening' (p. 234). Arguably, sex denialism invalidates trans identities, because without sex, transgender (as sex-gender mismatch) does not exist. Transwomen are by definition not (biological) females, which is why females cannot be transwomen. Many, perhaps most, trans people acknowledge the reality of sexual dimorphism (see Hayton 2018). Throughout her response, Sherrick prioritizes the validation of trans people's identities in the crafting of public policies. As I explained in the paper, I disagree. Here the issue is how to balance protections and rights for two groups: females and transwomen. Separate female spaces exist for reasons I discuss in the paper, including but not limited to physical safety from males. Given that my critics are not arguing for unisex spaces, it appears accepted that separate female provisions are necessary and/or justified.
The crucial question, then, is: what is the justification for allowing any male to self-ID into female spaces rather than creating new gender-neutral spaces alongside sex-separated ones? Echoing Mezey before her, Sherrick's answer is identity validation. In their view, women (however defined) should allow any male to self-ID into their (formerly) protected spaces on a say-so in order to validate the identities of transwomen. The fact that males constitute >93% of those who rape females is apparently of no consequence because, as Sherrick argues, some females rape too. That female-female sexual assault occurs at a rate >20-fold lower than that of male-female sexual assault is also evidently of no concern. I don't pretend to understand why (and she doesn't explain). There is no doubt that many transwomen suffer. But many people suffer, and social policy should be crafted in a manner that ameliorates suffering while reducing harm. The Equality Act fails in this regard. A solution that would require a female with PTSD from male sexual assault (and evidence suggests that 30% of females suffer from PTSD more than 9 months after sexual assault) to share a room in a women's shelter with a bepenised male against her wishes (and this has already happened) is dismissive of her experiences; it invalidates her pain, felt security, and well-being, or at least treats them as of lesser import. There is a term for subordinating female needs, well-being, pain, and accommodations, and the term is not 'feminist' or 'equality'. It bears repeating that access to women's provisions is not the solution to the problems that transwomen face. Jenness et al. (2019) documented the appallingly high rates of sexual assault against transwomen in prison; yet, when asked if they would choose to be housed in the women's estate if given the option, more than 60% said they would opt to remain in the men's prison.
Unrealistic Solutions

Which brings me to Sherrick's 'solutions', which include laudable policy suggestions that I endorse. As solutions to the immediate problem of predatory males, however, they fall well short of efficacious. One solution involves limiting access to women's spaces to people without histories of sexual assault. However, given that more than 90% of sexual assaults are not reported, the majority of hidden offenders would go unrestricted. Women's shelter spaces are already woefully overburdened and underfunded. Another involves 'training police officers how to intervene and deescalate sexual violence situations'; I am not sure how this is supposed to work. To my knowledge, few to zero sexual assaults occur in the presence of police officers who could intervene. The reason these solutions fail is quite simple: the goal of balancing "validating trans identities" with "keeping women's spaces safe" is inequitable. I'm unconvinced that public policies should be in the business of promoting identity validation, but if they are, validation should be secondary to physical and psychological safety.

Conclusion

My article addressed sensitive topics without perfect solutions, requiring compromise. I remain resolute that feminist criminologists have a responsibility to consider how sweeping legislation that would prioritize unchallengeable gender self-ID over biological sex would affect females. This does not imply a prioritization of females or a lack of concern, much less hatred, for transgender people. I thank those who engaged with the substance of my article with logic and reason rather than misrepresentation or name-calling, whether or not they agree.

Author Bio

Callie Burt is an associate professor in the Andrew Young School of Policy Studies at Georgia State University.
Much of her research focuses on the developmental effects of social inequalities, especially the effects of social risk and protective factors in adolescence, from a biopsychosocial perspective. She has a longstanding interest in sex differences and how these differences are shaped by gender as a social force. More information on her background and her work can be found at www.callieburt.org.

Footnotes

1. The imperfect, albeit useful, analogy would be a law defining age by self-declaration, where the statement "I am 21 years old" would make one 21 years old under the law and thus bestow on said person all the rights of 21-year-olds, such as eligibility to purchase alcohol. This represents not a blurring but a legal redefinition of age into something else.

2. Although Mezey's initial claim about 'ample evidence' was made without citations, later in the paper three citations were added to the same claim. All were non-empirical, non-scholarly pieces by or quoting organizations with particular political aims (namely, treating gender self-ID as sex), including a CNN article that did not include the terms 'homicide' or 'violence' and an HRC report consisting primarily of anecdotal 'stories'.

3. The Court did not extend its holding to other statutes, such as Title IX, or to sex distinctions allowed in Title VII, noting, for example, "we do not purport to address bathrooms, locker rooms, or anything else of the kind."

4. For example, I noted: "Both females and trans people should be protected from mistreatment and discrimination. How we do this in a manner that does not elevate one group's disadvantages or challenges as more important or worthy of addressing than other is complicated" (Burt 2020, p. 389).

5. Sherrick states in footnote 1 that my "use of 'female spaces' excludes GNCT individuals". It does not; 'female spaces' exclude males; GNCT females are included.

6.
My quote, in context, is as follows: "The [gender-as-innate-sex] account is incoherent both because it does not define what 'gender identity' is and because explaining how one might have a gender identity-sex mismatch requires a biologically implausible sexed mind-body dualism. Equally important, the idea that gender identity is an innate, fixed property of human beings that is independent of biological sex, that a person might be 'a female trapped in a male body', is not supported by scientific evidence. If a brain is in a male body, it is a male brain (and vice versa)." (Burt 2020, pp. 386-7).
Non-Traditional Systemic Risk Contagion within the Chinese Banking Industry

Systemic risk contagion is a key issue for the banking sector in maintaining financial system stability. This study is among the first few to use three different distance-to-risk measures to empirically assess the domestic interbank linkages and systemic contagion risk of the Chinese banking industry, by applying a bivariate dynamic conditional correlation GARCH (DCC-GARCH) model to data collected from eight prominent Chinese banks for the period 2006–2018. The results show a relatively high correlation among almost all the banks, suggesting an interconnectedness among the banks. We found evidence that the banking system is exposed to significant domestic contagion risks arising from systemic defaults. Given that Chinese markets deliver weak signals of forthcoming stress in banking sectors, new policy intervention is crucial to resolve the hidden stress in the system. The results have important policy implications and will provide scholars and policymakers further insight into the risk contagion originating from interbank networks.

Introduction

Since its path-breaking reform initiatives launched over four decades ago, China's economic development has been miraculous. According to the World Bank, China's GDP grew from USD 149.5 billion in 1978 to USD 14.3 trillion by 2019, with real GDP growth averaging nearly 10% a year despite the recent slowdown. The GDP of China accounts for 11.8% of the world economy. Since 2010, China has surpassed Japan to become the world's second-largest economy by nominal GDP, and it has overtaken the United States as the world's largest economy in terms of purchasing power parity (PPP) since 2014. Equally remarkable has been the incredible expansion of China's banking system, which accounts for 11.7% of the top 1000 banks worldwide [1].
The Chinese banking system is critical to the functioning of the Chinese economy and plays a pivotal role in monitoring the practices of state-owned enterprises to ensure that these enterprises comply with sound commercial principles. Hence, maintaining the stability and soundness of the banking system hinges on the regular and timely assessment and measurement of bank risk. Its importance is also highlighted by the global financial crisis in 2008 and subsequent policy measures to reform global banking regulations in response to the perceived lessons of this crisis [2]. In its 13th five-year plan (2016-2020), China shifted its focus away from unfettered growth rates towards initiatives to improve the quality of China's economic growth, particularly in the financial sector [3]. In 2019, the economic growth of China was projected to be 6.2%, supported by a strong and stable traditional financial sector whose role will only be amplified by the new 'One Belt One Road' initiative [4,5]. This tremendous current and future growth in the Chinese financial and banking sector requires a better understanding of the banking sector's systemic risk from both a local and a global standpoint [6,7], given the systemic risk spillover between China and other countries [8]; the growing Chinese economy cannot be sustained on fragile and backward banking infrastructure.

Distance to Default

The DD measure is a market-based measure of default risk derived from the Merton [21] model. Under the model, a larger DD value indicates a lower default risk and is thus a better sign for a financial institution. The Merton model has been modified and extended in subsequent empirical research to cover a wider range of financial activities. It measures both liquidity risk and solvency risk at the entity level [22], which is an important advantage over other models.
Regulators have therefore shown a keen interest in implementing the outcomes from the model [23]. However, as the model relies on substantial theoretical simulation to generate the risk measures, it can sometimes depend more on the interpretation of the theory than on reality. Based on the Merton [24] model, the daily DD at time t can be calculated as

DD_t = [ln(A_t / L_t) + (R_f − σ_A²/2) T] / (σ_A √T),    (1)

where A_t and L_t are the asset and liability values at time t, R_f is the risk-free rate, σ_A is the volatility of the asset value, and T is the time horizon. We followed [25] to compute assets, liabilities, the risk-free rate, and asset volatility. Previous researchers in the contagion research field have also preferred this procedure [18,19]. This procedure gives us a sample of 3130 daily observations for our period. We can calculate the daily return of these values using Equation (2), following previous researchers in our field studying the contagion effect inside the Chinese financial sector [26,27], thus effectively ensuring data normality and stationarity, as discussed later. In this paper, we refer to the DD return values as DD values. The descriptive statistics of these values are given in Table 2, with means of approximately −0.01 to 0.1. The DD values for all the sample banks show significant similarity in their descriptive statistics, including the Dickey-Fuller p-values of the stationarity tests for time series analysis. We can deduce that the sample is acceptable for time series analysis. A timeline of these DD return values is shown in Figure 1, where we can observe significant fluctuations at the beginning of 2008 for most banks except ABC and SGP. Throughout the data, SGP has shown significant stability compared to the others.
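To make the DD computation concrete, the calculation described above can be sketched in a few lines of Python. This is a minimal illustration assuming the standard Merton formulation; the function name, the example inputs, and the one-year default horizon are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def distance_to_default(A, L, rf, sigma_A, T=1.0):
    """Merton-style distance to default.

    A, L    : asset and liability values at time t
    rf      : risk-free rate
    sigma_A : volatility of the asset value
    T       : time horizon in years (illustrative default of one year)
    """
    return (np.log(A / L) + (rf - 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))

# A more leveraged bank (assets closer to liabilities) sits nearer to default:
dd_safe = distance_to_default(A=120.0, L=80.0, rf=0.03, sigma_A=0.2)
dd_risky = distance_to_default(A=90.0, L=80.0, rf=0.03, sigma_A=0.2)
```

Consistent with the interpretation above, the less leveraged bank yields the larger DD, i.e., the lower default risk.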
Distance to Insolvency

Volatility is a variable that is extensively used for measuring default risk. According to the Merton [24] model, an entity will be in default if its asset value falls below a default threshold level. Consequently, the proximity of an entity to the default threshold is a function of the anticipated difference between asset values and volatility as well as debt commitments. Higher expected volatility for a given capital structure and asset value implies a greater probability that future asset values will fail to meet debt commitments [28]. The extended version of Merton's model incorporates investment decisions while not considering long-term borrowing. In contrast, the Leland model [29] comprises long-term strategic bankruptcy and debt financing. Building on the structural models of credit risk proposed by [24,29], [30] proposed a robust and intuitive approach for obtaining the financial soundness of individual entities by using data on equity volatility, termed distance to insolvency (DI).
DI is defined as the ratio of the percent difference between the asset value of an entity and its liabilities at time t (known as leverage) to the annualized percent standard deviation of innovations in the entity's asset value (known as asset volatility). According to [30], DI recapitulates the distortions to the incentives of equity owners that reasonably occur when an entity becomes financially distressed. As the DI computation requires only equity volatility data, it can be computed for a wider cross-sectional and temporal data set than other measuring approaches. As an extension of DD, DI inherits DD's limitations but improves on its contribution to risk management. Similarly, a larger DI value indicates a lower default risk and is thus a better sign for a financial institution. The DI at time t can be defined as

DI_t = [(A_t − L_t) / A_t] / σ_A,    (3)

where A_t and L_t are the asset and liability values at time t, and σ_A is the asset volatility. Although in a default scenario the L_t of an entity will exceed the current value of A_t, A_t ≥ L_t holds under normal conditions; thus, firm leverage can be defined as the percentage difference between A_t and L_t. Following the procedure of the previous section, we can calculate the daily return on these values using Equation (4). In this paper, we refer to the DI return values as DI values. These DI values are shown in Table 3, followed by Figure 2 with the timeline of the sample DIs. This procedure gives us a sample of 3129 daily observations with properties similar to DD, including stationarity. Figure 2 illustrates that DIs are significantly less volatile than DDs. This phenomenon was most likely caused by ongoing balance sheet stress arising from poor asset quality and the increased provisions required by the regulators [31,32]. Maturity mismatches have exposed banks to interest rate and liquidity risk.
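The verbal definition of DI above (leverage divided by asset volatility) and the subsequent daily-return transform can be sketched as follows. The function names are illustrative, and treating the daily return as a log change is an assumption consistent with the stationarity the text reports, not a quotation of the paper's equations.

```python
import numpy as np

def distance_to_insolvency(A, L, sigma_A):
    # Leverage: percent difference between asset value and liabilities,
    # divided by asset volatility, per the verbal definition of DI.
    leverage = (A - L) / A
    return leverage / sigma_A

def daily_log_returns(x):
    # Daily return of a distance series (assumed log-change form).
    x = np.asarray(x, dtype=float)
    return np.log(x[1:] / x[:-1])
```

For example, a bank with assets of 100, liabilities of 80, and 20% asset volatility has DI = 0.2 / 0.2 = 1.0.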
Distance to Capital

The DD measure enjoys acceptance among market-based measures due to its predictive ability to discriminate rating downgrades for banks [33,34]. However, the DD concept acts as an absolute default risk measure when applied to banks, which has some limitations [19]. First, the risk inherent in a bank's leverage differs substantially from that of a non-financial entity, as the former is more leveraged for a given level of credit risk. Second, the DD measure treats the total equity capital of a bank as a buffer, though bank regulators typically take necessary actions before a bank loses its total equity capital [35,36]. For instance, the BASEL Committee on Banking Supervision recommends that banks possess excess capital over the regulatory minimum because of risk factors [37]. The distance-to-capital (DC) measure is an alternative to the DD measure, originating from the structural model of corporate debt proposed by [24,38]. The DC measure considers a different default point (i.e., a dissimilar distance measure of risk): rather than taking the face value of a bank's liabilities (L) as the pertinent barrier, it uses the capital thresholds (as outlined by the Prompt Corrective Action [PCA] framework) that permit early intervention by bank regulators [39]. Similar to DD, it also uses theory to simulate the risk prediction, and a larger DC value indicates a lower risk. It can be stated as Equation (5), following [19], where CAR denotes the capital adequacy ratio at a given time t. Previous researchers in our field have also followed the same procedure [40]. Following the procedure of the previous sections, we can calculate the daily return on these values using Equation (6). In this paper, we refer to the DC return values as DC values. These DC values are shown in Table 4 and presented in Figure 3. This procedure gives us a sample of 3129 daily observations with properties similar to DD, including stationarity; most cases show similar means, standard errors, and Dickey-Fuller p-values. Figure 3 illustrates that DCs are significantly less volatile than DDs and DIs.
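Because Equation (5) is not reproduced here, the following sketch assumes one common distance-to-capital formulation, in which the default barrier is raised from L to λL with λ = 1/(1 − capital threshold), so that the barrier is crossed as soon as (A − L)/A falls below the regulatory threshold. The 8% threshold and the function name are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def distance_to_capital(A, L, rf, sigma_A, car_threshold=0.08, T=1.0):
    # Assumed barrier adjustment: a bank breaches the capital threshold
    # when (A - L) / A < car_threshold, i.e. when A < L / (1 - car_threshold),
    # so the Merton-style barrier becomes lam * L with lam > 1.
    lam = 1.0 / (1.0 - car_threshold)
    return (np.log(A / (lam * L)) + (rf - 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))
```

Since λ > 1 raises the barrier, DC always sits below the corresponding DD (recovered here by setting car_threshold=0), reflecting the earlier regulatory intervention point discussed above.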
Most of the curves in this figure peak at the same time because DC is risk-adjusted using the BASEL requirements; such changes in value are common to all the sample banks, reflecting regulatory shocks.
Methodology
Model Specifications
We employ the DCC-GARCH model, proposed by [41,42], to examine the contagion risks among Chinese banks. The model calculates the correlation coefficients of the standardized residuals, continually adjusts the correlation for time-varying volatility, and allows simultaneous modeling of the variances and conditional correlations of several series. Despite the limits of DCC, such as the lack of regularity conditions and of asymptotic properties of consistency with asymptotic normality [43][44][45], it remains a popular representation of dynamic conditional correlations because of the dynamic structure of its correlation [46] and its inherent ability to handle large computational data sets [47]. Furthermore, DCC-GARCH provides a superior measurement of the correlation that accounts for heteroskedasticity straightforwardly. The bivariate DCC-GARCH model is derived as follows, where z_{i,t} is the standardized residual, R_{i,t} the mean, and h_{i,j} the conditional variance.
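The two-stage logic of the model — univariate GARCH(1,1) filters producing standardized residuals, then a correlation recursion — can be illustrated with a minimal sketch. This is not the authors' estimation code: parameters here are fixed by hand, whereas in practice they are estimated by quasi-maximum likelihood.

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """Stage 1: univariate GARCH(1,1) variance filter.

    Returns the standardized residuals z_t = r_t / sqrt(h_t) with
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.
    """
    h = np.empty_like(r)
    h[0] = np.var(r)                 # initialize at the sample variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return r / np.sqrt(h)

def dcc_correlation(z1, z2, a, b):
    """Stage 2: DCC(1,1) recursion for the time-varying correlation.

    Q_t = (1 - a - b) * Qbar + a * z_{t-1} z_{t-1}' + b * Q_{t-1},
    rho_t = q12_t / sqrt(q11_t * q22_t).
    """
    Z = np.column_stack([z1, z2])
    Qbar = (Z.T @ Z) / len(Z)        # unconditional correlation target
    Q = Qbar.copy()
    rho = np.empty(len(Z))
    rho[0] = Qbar[0, 1] / np.sqrt(Qbar[0, 0] * Qbar[1, 1])
    for t in range(1, len(Z)):
        zz = np.outer(Z[t - 1], Z[t - 1])
        Q = (1 - a - b) * Qbar + a * zz + b * Q
        rho[t] = Q[0, 1] / np.sqrt(Q[0, 0] * Q[1, 1])
    return rho
```

With a = b = 0 the recursion collapses to the constant conditional correlation case (P_t = P), which is exactly the special case contrasted with DCC in the text.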
The conditional variance-covariance matrix can be specified as H_t = D_t P_t D_t, where H_t is the 2 × 2 conditional covariance matrix, P_t the conditional correlation matrix, and D_t the diagonal matrix of time-varying standard deviations. The correlation dynamics are driven by Q_t, where Q_t = (q_{ij,t}) is a 2 × 2 symmetric positive definite matrix defined as in Equation (13), and the dynamic conditional correlation between assets is ρ_{ij,t} = q_{ij,t} / √(q_{ii,t} q_{jj,t}). The diagonal bivariate GARCH model assumes ρ_{ij,t} = 0 for all i and j; in contrast, the constant conditional correlation model assumes P_{ij} = ρ_{ij} and P_t = P. Table 5 reports the estimation results based on the bivariate DCC-GARCH (1,1) model for the period 2006-2018 using the DD data. As shown in Panel A of Table 5, the highest correlation exists between ABC and BOC (0.92), whereas HUX and ABC are the least correlated pair. Overall, the DCC results in Table 5 for DD show highly significant and positive correlations between the Chinese banks. To visualize the results reported in Table 5, the pair-wise DD correlation patterns over the study period for the eight Chinese banks are presented in Figure 4. As apparent in Figure 4, regardless of the fluctuations, the results follow a roughly flat pattern around 0.5 that is similar for all pairs. This indicates that the correlation between Chinese banks is significantly more stable than in the rest of the world. We can also clearly observe that the larger banks are more stable than their smaller counterparts in the pair-wise comparison; for example, BOC and CCB show more stable correlations than ABC and SGP. Based on these results, it is safe to infer that DCC correlations and systemic risk contagion are extremely high among Chinese banks regardless of the period. Table 6 reports the estimation results of the DI using the bivariate DCC-GARCH (1,1) model. As can be seen in Panel A of Table 6, the intercept terms in the mean and variance equations are moderately significant for half of the banks.
However, the parameter estimates for the correlation equation in Panel C are very high, positive, and significant, even at the one percent level. CMS-CCB stands out as the most correlated pair in the sample, with a value of 0.79, whereas the weakest correlation in the sample is 0.3, between CCB-ABC. Similar to the results for DD in Table 5, the DCC results for DI further confirm that Chinese banks are highly prone to systemic risk contagion among themselves.
Distance-to-Insolvency Contagion
The pair-wise DI correlation patterns over the study period for the eight Chinese banks are presented in Figure 5. The plots depict a pattern very similar to that for DD in Figure 4. Most of the plots are stable over the entire period, as expected; however, the number of extreme deviations from the mean increases compared to the DD. Overall, our results for DI confirm the presence of contagion risk among Chinese banks, suggesting that the Chinese banking system is highly vulnerable to risk spillover effects among its banks. The bank-size effect mirrors the finding from the DD.
Note: *, **, and *** indicate significance at the 10, 5, and 1 percent levels, respectively.
Distance-to-Capital Contagion
The results of the DCC-GARCH (1,1) model for DC are tabulated in Table 7. As can be seen in Panels A and B of Table 7, the coefficients show significant similarity to the DI results. In addition, the results of the correlation equation for DC are consistent with the previous findings on DD and DI: all the coefficients are positive and significant for all bank pairs at the one percent level. Compared to the results for DD and DI, the results for DC are even stronger, with all correlations above 0.5. The BOC-ICC pair reports the highest DC correlation (0.87) in our sample. The patterns of pair-wise DC correlations for the sample Chinese banks over the period 2006-2018 are depicted in Figure 6. For all the plots involving ABC, correlations are comparatively high and stable.
Although there are some rare spikes closer to 0 in some of the plots during the early years, most of the time they are stable at or above around 0.5, indicating low sensitivity to contagion risk. Overall, the plots in Figure 6 confirm the high exposure of Chinese banks to the spillover effect of systemic risk, in line with the DD and DI results.
Conclusions
This paper presents a comprehensive analysis of the systemic risk contagion of Chinese banks. The DD, DI, and DC results indicate the general soundness of the banks, while showing a continuous deterioration for all banks post-2008 and recovery only after 2010. The results from the bivariate DCC-GARCH (1,1) model suggest that the correlation parameters are statistically significant at the one percent level for all risk measures. The patterns of pair-wise correlations for the distance measures show relatively high and stable correlations among most of the banks during the period. Overall, the results imply remarkable interconnectedness within the Chinese banking sector, which is largely consistent with the existing studies [11,15,48]. Although these Chinese banks remained largely isolated from the global financial crisis, risks exist within the banking system due to a high level of non-performing assets, extensive credit dispersed to non-banking financial companies, muted corporate demand for credit, and corporate governance issues. Further, there is evidence that some banks were susceptible to the global financial crisis, as a trough is observed for DD, DI, and DC during the period from post-2008 to 2010. The world economy and financial sectors have experienced significant changes during the current global pandemic [49,50]. A strong and resilient banking system is the foundation for sustainable economic growth, as banks are the hubs for credit intermediation and a well-acknowledged connection for service activities [51].
The results presented in this study suggest that the banking system is exposed to significant domestic contagion risks arising from systemic defaults, as supported by other authors in the field [52]. This is because the Chinese markets provide weak signals of forthcoming stress in banking systems. Thus, new policy interventions are needed to overcome the hidden stress in the system. In terms of structure and activities, the Chinese banking system has changed over the past decade. Therefore, it is recommended that regulatory oversight of interbank exposures and interbank market structures be prioritized. As a policy implication, regulators should recognize that the Chinese banking industry is more interconnected than peer banking communities, so a shock in one bank will transmit quickly and severely to other banks. Given this, regulators need to monitor all banks closely rather than focusing on one or two underperforming ones. Furthermore, they can also regulate capital and investment flows between banks to mitigate spillover risk within the banking system. Funding: The fourth author wishes to acknowledge the financial support of the Sumitomo Foundation. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
Treatment of distal tibia end fractures by minimally invasive plate osteosynthesis
DOI: 10.4328/ACAM.20115 Received: 2020-01-20 Accepted: 2020-01-29 Published Online: 2020-02-04 Printed: 2020-04-01 Ann Clin Anal Med 2020;11(Suppl 1): S33-36
Corresponding Author: Quang-Tri Le, Director of 7A Military Hospital, Head of Department of Orthopedics, Head of Department of High-Tech Diagnostics, 466 Nguyen Trai Street, Ward 8, District 5, Ho Chi Minh City, 72706, Viet Nam E-mail: lqtri@ntt.edu.vn; tsbstri@yahoo.com P: (+084)0839241868, (+084)0913126229 Corresponding Author ORCID ID: https://orcid.org/0000-0002-2777-0828
Abstract
Aim: Distal tibia fractures commonly result from everyday traffic, workplace, and sports accidents. There are various treatment methods for this injury, such as casting, external fixation, and AO screw and plate fixation. However, for distal tibia fractures near the ankle joint with damaged surrounding soft tissue, minimally invasive plate osteosynthesis using a locking compression plate is a suitable fixation method. This study aimed to evaluate bone union, functional rehabilitation, and complications of the treatment of distal tibia fractures by minimally invasive plate osteosynthesis at 7A Military Hospital.
Material and Methods: This prospective cohort study was performed on 38 patients with distal tibia fractures treated from October 2014 to October 2019 in our hospital. Patients were treated with internal fixation by minimally invasive plate osteosynthesis with a locking compression plate; they were monitored for bone union and functional rehabilitation, and complications were evaluated.
Results: The average follow-up was 12.5 months after the operation, and outcomes were assessed with the American Orthopedic Foot and Ankle Score: 24 cases (63.2%) were rated "very good", 10 cases (26.3%) "good", and 4 cases (10.5%) "fair".
Discussion: Minimally invasive plate osteosynthesis provided generally satisfactory postoperative and rehabilitation outcomes. The results were comparable to those of other authors. Further research with a larger sample size is needed for better evaluation and an optimal protocol.
Conclusion: Minimally invasive plate osteosynthesis was a suitable method for the treatment of distal tibia fractures, as the indirect reduction technique and small incisions reduce damage to soft tissues during the operation.
Introduction
Distal tibia fractures are common in orthopedics, accounting for 1-5% of lower limb and 7-10% of lower leg fractures [1,2]. Because of the limited soft tissue and poor blood supply in this area, treating distal tibia fractures is challenging and readily leads to complications such as open fractures, infections, and slow bone union. Careful consideration of fracture pattern, bone quality, and soft tissue injury is needed when choosing a treatment to ensure optimal results. A variety of treatments can be used, such as casting, open reduction with screw-plate fixation, and external fixation. Minimally invasive plate osteosynthesis (MIPO) was developed to eliminate the extensive surgery needed to expose the fracture site. Only two small incisions are made near the two ends of the fracture site. A plate is then inserted through an epi-periosteal tunnel linking the two incisions and is screwed at both ends to stabilize the fracture. MIPO significantly reduces operation and hospitalization time compared with the open surgical approach. It also preserves soft tissues and feeding blood vessels, reduces infection and non-union risks, and enables early ankle joint movement after surgery. MIPO treatment of distal tibia fractures with positive outcomes has been reported worldwide [2-6].
For further study of this technique, we carried out research on the treatment of distal tibia end fractures using minimally invasive plate osteosynthesis at 7A Military Hospital (Ho Chi Minh City, Viet Nam) to evaluate bone union and functional rehabilitation and to assess the complications of this method.
Experimental participants
This study was performed on 38 patients with distal tibia fractures (based on the A/O classification) treated with MIPO using a locking compression plate from October 2014 to October 2019. The inclusion criteria were: age over 18 years with a closed distal tibia fracture according to the A/O classification, no contraindication to surgery or anesthesia, and follow-up of at least six months. The exclusion criteria were pathologic fractures or multiple injuries.
Research methods
This was a prospective cohort study with longitudinal description. The patients were given explanations about the surgery and underwent pre-operative examinations and tests. A C-arm machine, an orthopedic operating table, and instruments were prepared. Spinal or endotracheal anesthesia was applied. A surgical tourniquet was applied at the thigh base for a maximum of 90 minutes at a pressure of 350-400 mmHg. The surgical procedure was performed as follows [3,6]:
1. The patient lay supine on the operating table.
2. If the patient had an accompanying fibula fracture, the fibula was treated first with screw-plate or K-wire fixation.
3. The locking compression plate had to fit the distal tibia fracture and was prepared beforehand.
4. The skin-projected position of the plate was located, and the positions of the proximal and distal ends of the plate were identified for convenient screw installation.
5.
Skin incisions were made at the distal and proximal ends of the plate along the anteromedial line of the tibia; the incisions were 2-3 cm apart and dissected through all subcutaneous layers.
6. The subcutaneous tissues were dissected internally and the plate was inserted below the skin layer.
7. The bone axis and the large fragment were manually reduced (with the assistance of towel clips) in the sagittal plane.
8. The sagittal and coronal planes of the fracture were checked with the C-arm machine to ensure proper reduction of the bone fragments.
9. One distal screw was inserted and the distal deviation was fine-adjusted.
10. One proximal screw was inserted.
11. The sagittal and coronal deviations were fine-adjusted and the remaining screws were inserted.
12. The skin was sutured.
Post-operative care was as follows: antibiotics, analgesics, and anti-inflammatory agents were administered. The leg was elevated to prevent swelling, and the surgical wound was dressed. The patients began early post-operative ankle joint exercises. Patients could be discharged 3-5 days after the operation once the surgical wound was clean and dry, swelling had subsided, and active ankle and knee joint movements were possible. Post-operative physical therapy was as follows: the patients performed light angular exercises of the ankle joint 24 hours after surgery. After two weeks, as swelling subsided, the cast could be removed, and ankle angular and rotational exercises were started. After one month, the patients began ankle exercises with gradually increasing resistance. After three months, the patients began walking, with weight bearing allowed on the treated limb. After six months, the patients could walk normally. Treatment outcomes were assessed on the basis of post-operative radiography, bone union, rehabilitation, and complications.
Patient general information
The patients were from 18 to 58 years of age; the average age was 36.4 years. There were more males (28 patients, 73.7%) than females (10 patients, 26.3%); the male/female ratio was 2.8/1. The leading cause of fracture was traffic accidents (20 cases, 52.6%), followed by workplace accidents (15 cases, 39.5%) and sports accidents (3 cases, 7.9%). The average hospital stay was 4.9 days, and the average post-operative follow-up was 12.5 months.
Types of distal tibia fractures based on the A/O classification
The distal tibia fracture types were classified according to the A/O classification [7,8]. A1 was the most frequent type, with 14 cases (36.8%); A2, A3, and C1 types occurred in 11 cases (28.9%), 10 cases (26.4%), and 3 cases (7.9%), respectively.
Accompanying fibula fractures
In this study, 20/38 cases (52.6%) had accompanying fibula fractures. Among them, 16 cases (80%, n=20) had fibula fractures with significant displacement and unstable ankle joints; these were treated with screw-plate fixation of the fibula. The four remaining cases (20%, n=20) had only small displacement and needed no fibula fixation, requiring only additional foot casting.
Post-operative radiography results
In the post-operative radiography results [9], the majority were "Very good" (22 cases, 57.9%); "Good" and "Fair" results occurred in 11 cases (28.9%) and 5 cases (13.2%), respectively. There was no "Poor" result.
Bone union results based on the A/O classification
All 38 patients achieved bone union within a minimum of 16 and a maximum of 23 weeks. The time to bone union by fracture type is listed in Table 1. A1 fractures united significantly faster than A2, A3, and C1 fractures (p < 0.001, ANOVA test).
Functional rehabilitation outcomes
Rehabilitation based on patients' subjective remarks: 34 cases had no ankle pain, and 4 had slight ankle pain at an acceptable level.
For daily activities, 37 cases reported no difficulty in regular walking; 1 case reported difficulty going up and down stairs. Rehabilitation based on objective criteria: the dorsiflexion, plantarflexion, eversion, and inversion ankle ranges of motion were 18.4°, 47.8°, 16.9°, and 17.4°, respectively. Angular motion usually recovered faster than the other kinds of motion. Based on the American Orthopedic Foot and Ankle Score [2,3,9], "very good" and "good" outcomes occurred in 89.5% of cases and "fair" in 10.5% of cases.
Complications
Infection occurred in one patient and slow union in one patient. No patient had joint stiffness or non-union.
Discussion
Minimally invasive surgery is commonly applied in many fields [10,11], including the treatment of bone injuries, as it minimizes soft tissue damage, improves healing time, reduces decubitus complications, and enables weight bearing when possible [12,13]. In this study, minimally invasive plate osteosynthesis was used to treat distal tibia fractures in 38 patients at the 7A Military Hospital, and the observed results were promising. Bone union was achieved in all cases, and radiographs generally showed positive outcomes. Post-operative complications were very few, with one infection and one slow union among the 38 cases; few or no complications were also reported in other literature [1,13-15], and the complications were treated successfully. The average time to bone union in this study (18.7 weeks) is comparable with the study of Sanjay and Sanjay (2018) (about 18 weeks); it is faster than the reports by Chandrakant and Martand (2018) [16] and Daolagupu et al. (2017) [17] (about 19 and 21.7 weeks, respectively) and slower than the study by Serban et al. (1997) [1] (about 17 weeks). Our result can be considered comparable with those of the mentioned authors, but further research with larger samples is necessary.
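The kind of group comparison reported above for time to union across A/O fracture types (p < 0.001, ANOVA) can be reproduced with a standard one-way ANOVA. The union times below are purely illustrative placeholders, not the study's data; only the group sizes (14/11/10/3) and the 16-23 week range come from the text.

```python
# One-way ANOVA sketch for time-to-union (weeks) across A/O fracture types.
# The values are ILLUSTRATIVE, not the study's measurements.
from scipy import stats

a1 = [16, 16, 17, 17, 17, 18, 18, 18, 18, 19, 19, 19, 20, 20]   # n = 14
a2 = [18, 19, 19, 20, 20, 20, 21, 21, 21, 22, 22]               # n = 11
a3 = [19, 19, 20, 20, 21, 21, 21, 22, 22, 23]                   # n = 10
c1 = [21, 22, 23]                                               # n = 3

f_stat, p_value = stats.f_oneway(a1, a2, a3, c1)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```

With group means separated by a couple of weeks and within-group spread of roughly one week, the test rejects equality of means, matching the qualitative pattern reported in Table 1.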
Postoperative rehabilitation outcomes were generally positive in this study. Pain was mostly absent or very mild, and there was no interference with postoperative daily activities. Ankle ranges of motion in dorsiflexion and plantarflexion were 18.4° and 47.8°, respectively, which was better than in the study of Helfet et al. (1997) (14° and 42°, respectively) [13]. Based on the American Orthopedic Foot and Ankle Score, the "good" and "very good" outcome rate in this study (89.5%, 34/38 patients) was similar to the results reported by Supe (30 patients). The study by Chandrakant and Martand (2018) reported an average score of 89.9 in 32 investigated patients [16]; these results are also compatible with ours. In general, our study results were satisfactory; still, further research is needed to better evaluate the treatment outcomes and to devise a more accurate treatment protocol for these kinds of injuries [18].
Conclusion
Thirty-eight patients with distal tibia fractures were treated using minimally invasive plate osteosynthesis with a locking compression plate from October 2014 to October 2019 at the 7A Military Hospital, Ho Chi Minh City, Vietnam. Based on this study, it can be concluded that this method requires only small incisions, reduces infection risks, promotes bone union, and enables early post-operative ankle joint exercises. The treatment outcomes of this method were very encouraging. Further research with larger samples is necessary to better evaluate the treatment outcomes and to devise a more accurate treatment protocol for this kind of injury.
Scientific Responsibility Statement
The authors declare that they are responsible for the article's scientific content, including study design, data collection, analysis and interpretation, writing, some of the main lines, or all of the preparation and scientific review of the contents and approval of the final version of the article.
Animal and Human Rights Statement
All procedures performed in this study were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Orexin A peptidergic system: comparative physiology, morphology, and population between brains of normal and Alzheimer's disease mice
Sleep disturbance is common in patients with Alzheimer's disease (AD), and orexin A is a pivotal neurotransmitter for bidirectionally regulating amyloid-β (Aβ) deposition in the AD brain and poor sleep. In the present study, we examined the characteristics of the sleep-wake architecture in APPswe/PS1dE9 (APP/PS1) and Aβ-treated mice using electroencephalogram (EEG) and electromyogram (EMG) analysis. We compared the expression of orexin A and the distribution and morphology of the corresponding orexin A neurons, using innovative methods including three-dimensional reconstruction and brain tissue clearing, between wild-type (WT) and APP/PS1 mice. Our results demonstrated increased wakefulness and reduced NREM sleep in APP/PS1 and Aβ-treated mice, while the expression of orexin A was significantly upregulated. A higher density and wider distribution of orexin A-activated neurons were seen in APP/PS1 mice, located 1.06 mm to 2.30 mm away from the anterior fontanelle, compared with 1.34 mm to 2.18 mm away from the anterior fontanelle in WT mice. These results suggest that the population and distribution of orexin A neurons may play an important role in the progression of AD.
Introduction
Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disorder characterized by diffuse extracellular amyloid plaque deposition and intracellular neurofibrillary tangles (NFT), resulting in progressive dementia associated with cognitive impairment, memory loss, and other behavioral abnormalities (Dey et al. 2017; Thal et al. 2019). Sleep disturbances are commonly seen in patients with AD, affecting approximately 25%-60% of patients (Lim et al. 2014).
Compared with healthy older adults, individuals with AD suffer from shorter bouts of rapid eye movement (REM) sleep and more slow-wave sleep fragmentation (Vitiello and Prinz 1989; Mander et al. 2016). Insomnia and excessive daytime sleepiness are also common characteristics of AD patients (Roth and Brunton 2019; Hamuro et al. 2018). Animal and human studies have demonstrated that the accumulation of the amyloid-β (Aβ) peptide, a primary cause of amyloid plaques, is a critical event in the pathogenesis of AD as well as in poor sleep (Vanderheyden et al. 2018; Brown et al. 2016). Intracerebroventricular administration of Aβ has been identified as a useful AD model that can trigger cognitive impairment, memory defects, and other AD-like alterations in the brain (Zhang et al. 2019b; Facchinetti et al. 2018). Many transgenic AD models overexpressing Aβ peptides have shown significantly disrupted sleep-wake patterns, including increased time awake and decreased sleep, among them the transgenic APP and presenilin 1 (APP/PS1) mouse (Kent et al. 2018). However, the factors regulating this process are only partially understood. The orexinergic nervous system consists of two peptides, orexin-A/hypocretin-1 and orexin-B/hypocretin-2, both synthesized by a cluster of neurons in the lateral hypothalamus and perifornical area. These two orexins bind two G-protein-coupled receptors, orexin receptor 1 (OX1R, HCRTR-1) and orexin receptor 2 (OX2R, HCRTR-2), and participate in regulating vital body functions, including sleep/wake architecture, food intake, cognition, and memory (Kukkonen et al. 2002; Thal et al. 2019; Burdakov 2019; Li and de Lecea 2020). A previous study demonstrated that orexin is primarily associated with interstitial Aβ levels and wakefulness in transgenic AD mice (Kang et al. 2009).
Orexin levels in cerebrospinal fluid (CSF) from AD patients have been found to be higher than those of normal individuals, and they are responsible for maintaining wakefulness and preventing undesirable transitions into sleep (Liguori et al. 2016; Um and Lim 2020). Overexpression of orexins can lead to non-REM sleep fragmentation and REM sleep suppression during the daytime (Willie et al. 2011; Makela et al. 2010). Orexin A, which has been shown to promote wakefulness, was recently highlighted for its role in Aβ metabolism in animals and humans (Kang et al. 2009; Liguori et al. 2014). However, some studies carried out in humans have reached conflicting conclusions. The activity of orexin A and its involvement in sleep/wake cycle alterations remain largely unknown, especially in the AD brain. Postmortem analysis revealed that the number of orexin-positive neurons in the hypothalamus and the concentration of orexin in ventricular CSF were reduced in patients with AD compared with controls (Fronczek et al. 2012). Other studies demonstrated higher CSF orexin-A levels in patients with AD compared with control groups (Dauvilliers et al. 2014; Liguori et al. 2014; Wennstrom et al. 2012), possibly related to sleep deterioration and neurodegeneration (Liguori et al. 2014). Further evidence is needed to understand whether orexin A is linked to the underlying neurodegenerative process (Aβ deposition) or is secondary to sleep/wake cycle alterations. Determining the morphology, distribution, and neural network of orexin A neurons is essential for understanding their physiological function and for developing new clinical treatment strategies for poor sleep and AD. Therefore, the main objective of this study was to investigate the sleep-wake features and the expression changes and distribution of orexin A in AD models.
All mice were housed in specific pathogen-free conditions under a 12 h:12 h light-dark cycle (lights on at 7 AM and off at 7 PM, illumination intensity ≈100 lx) at an ambient temperature of 22 ± 0.5 °C in the laboratory animal center of Jiangnan University.
Materials and Methods
The use of mice in this study was approved by the Institutional Animal Care and Use Committee at Jiangnan University, Jiangsu, China. Aβ1-42 (Sigma-Aldrich) was prepared as a stock solution at a concentration of 2 mg/ml in sterile normal saline, and aliquots were stored at −20 °C. Aβ1-42 was aggregated by incubation at 37 °C for 4 days before use, according to the manufacturer's instructions and a previous study (Zheng et al. 2013). Mice were anesthetized with an intraperitoneal (i.p.) injection of chloral hydrate (350 mg/kg). A guide cannula (Ø = 0.5 mm, length = 15 mm) was stereotaxically implanted into the right lateral ventricle. The aggregated form of Aβ1-42 (410 pmol/mouse) was administered intracerebroventricularly through the implanted guide cannula at a flow rate of 1 μL/min. The coordinates of the guide tip were: anteroposterior = −0.6 mm, mediolateral = +1.1 mm, and dorsoventral = −1.0 mm from bregma, 1 mm above the lateral ventricle, according to the atlases (Zheng et al. 2013). The length of the injection was 1.5 cm.
Polygraphic Recordings and Sleep-Wake State Analysis
For the sleep-wake cycle recording assay, four stainless steel screw cortical electrodes were screwed through the skull into the frontal and parietal cortices to record the electroencephalogram (EEG). The cortical electrodes were inserted onto the dura through two pairs of holes located, respectively, over the frontal (1 mm lateral and anterior to bregma) and parietal (1 mm lateral to lambda) cortices. Three wire electrodes were inserted directly into the neck musculature for EMG recording. The ground electrode was placed on the skull over the cerebellum.
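As a quick sanity check of the Aβ dosing described above, the injection volume implied by a 410 pmol dose drawn from a 2 mg/ml stock can be computed. The molecular weight of Aβ1-42 (≈4514.1 g/mol) is an assumed reference value, not stated in the text.

```python
# Sanity check of the i.c.v. injection volume implied by the dosing above.
MW_ABETA_1_42 = 4514.1            # g/mol (assumed reference value)

stock_mg_per_ml = 2.0             # stock concentration from the text
dose_pmol = 410.0                 # dose per mouse from the text

# 2 mg/ml = 2e-3 g/ml -> mol/ml -> pmol/ml -> pmol/uL
stock_pmol_per_ul = stock_mg_per_ml * 1e-3 / MW_ABETA_1_42 * 1e12 / 1e3

volume_ul = dose_pmol / stock_pmol_per_ul
print(f"{volume_ul:.2f} uL")      # ~0.93 uL, i.e. under a minute at 1 uL/min
```

Under this assumed molecular weight, the dose corresponds to slightly less than 1 μL, consistent with the stated 1 μL/min infusion rate being practical for a single-bolus delivery.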
Following the surgery, mice were housed under the 12 h light:12 h dark cycle for 10 days. All mice were habituated to the recording cages for 3 days before recording started. Recording began at 06:00. After saline or Aβ administration for WT mice, mice of each group were placed in a sound-attenuated, ventilated, and electrically isolated chamber. EEG and EMG activities were amplified (×2,000) and filtered (0.5-60 Hz for EEG; Model 3500, A-M Systems, WA, USA), digitized at 256 and 128 Hz, respectively, and recorded continuously with a CED 1401 MKII (Cambridge Electronic Design Limited (CED), London, UK). The behavior of the mice during the light and dark phases in the chamber was monitored and recorded using an infrared video camera. We visually scored polygraphic records in 30-s epochs as wakefulness (W), non-rapid eye movement sleep (NREM), or REM sleep according to previously described criteria validated for mice, using a Spike2 sleep-score script (CED) with the assistance of spectral analysis by the fast Fourier transform (FFT) (Tsuneki et al. 2010; Harris et al. 2005). Immunohistochemistry of sequential sections: After polygraphic recording and sleep-wake state analysis were completed, 10 mice chosen randomly from each group were anesthetized with sodium pentobarbital (80 mg/kg, i.p.) and then sacrificed by intracardiac perfusion with cold phosphate-buffered saline (PBS) followed by 4% paraformaldehyde. After perfusion, the whole brain was removed, post-fixed in the same fixative for 2 days, and cryoprotected in 30% sucrose at 4℃ for 2 days. Brains were embedded in optimal cutting temperature compound (OCT) and cut on a freezing microtome (Leica CM1850; Leica Microsystems UK, Milton Keynes, UK) to acquire coronal sections (30 μm) of the entire hypothalamus. All sequential sections of these brains were processed for immunostaining.
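The epoch-based scoring described above can be illustrated with a minimal sketch. The band definitions and thresholds below are illustrative assumptions, not the Spike2 script actually used: EMG tone separates wakefulness from sleep, and the delta/theta FFT power of each 30-s EEG epoch separates NREM from REM.

```python
import numpy as np

FS_EEG = 256   # EEG sampling rate (Hz), matching the recording setup
EPOCH_S = 30   # epoch length used for scoring (s)

def band_power(epoch, fs, lo, hi):
    """Mean spectral power of `epoch` in the [lo, hi) Hz band via FFT."""
    spec = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].mean()

def score_epoch(eeg_epoch, emg_rms, emg_thresh=50.0):
    """Toy scoring rule (thresholds are hypothetical): W = high EMG tone;
    NREM = delta-dominant EEG; REM = theta-dominant EEG with muscle atonia."""
    delta = band_power(eeg_epoch, FS_EEG, 0.5, 4.0)
    theta = band_power(eeg_epoch, FS_EEG, 6.0, 10.0)
    if emg_rms > emg_thresh:
        return "W"
    return "NREM" if delta >= theta else "REM"
```

In practice each state call would be confirmed visually, as in the study; this sketch only shows how spectral analysis assists the scoring.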
Each section was washed in PBS (3 × 5 minutes), permeabilized for 30 min in Triton X-100 in PBS, and blocked with 5% bovine serum albumin (BSA) and 0.2% Triton X-100 at room temperature for 1 h. The sections were then incubated with mouse anti-orexin A (1:1000) for 48 h on a shaker. The sections were washed three times in PBS and incubated with biotinylated goat anti-rabbit IgG. The brown chromogen was developed with 3,3′-diaminobenzidine tetrahydrochloride (DAB). 2.7 Tissue optical clearing, tracing, and deep imaging: 500-μm-thick coronal blocks of the brain were cut and cleared with RapiClear 1.49 (SunJin Lab Co.) by immersing them overnight in the clearing reagent at room temperature. Cleared tissues were mounted on a custom-made sample holder. Brain tissue from each group (10 mice/group) was dissected out and fixed in a 24-well plate with 4% paraformaldehyde solution on an orbital shaker for 2 h at room temperature. The samples were then transferred overnight into 2% PBST (2% Triton X-100 in PBS) for permeabilization. They were kept overnight in 10% BSA on an orbital shaker at 4°C. The samples were then incubated with the primary antibody on an orbital shaker at 4°C for 2 days, after which they were incubated with the secondary antibody at 4°C for 1 day. After washing with PBST, nuclei were stained with DAPI, and images were acquired using a microscope (BX60, Olympus, Tokyo, Japan) and Imaris 9.3.0 software. Computer-assisted 3D reconstruction analysis: The distribution of stained cells was examined and reconstructed three-dimensionally. As described previously (George Paxinos et al., The Mouse Brain in Stereotaxic Coordinates), 130 slices including the hypothalamus, at a distance of -1.34 to -2.18 mm from the bregma, were reconstructed in each brain of WT mice. Around 150 slices including the hypothalamus, at a distance of -1.06 to -2.30 mm from the bregma, were reconstructed in the APP/PS1 mice.
Computer-assisted image processing and three-dimensional reconstruction were performed using ImageJ-win64, Adobe Photoshop CS4, and Amira 6.3.0. Statistical Analysis: Data were expressed as mean±SEM and analyzed using Prism 7 (GraphPad; San Diego, CA, USA). Student's t-test was performed to compare continuous variables between two groups, and differences among three or more groups were assessed using analysis of variance (ANOVA). A P value less than 0.05 was considered statistically significant in all experiments. The sleep-wake architecture was disturbed in APP/PS1 and Aβ-injected mice: We assessed the sleep state of APP/PS1 mice and Aβ-challenged mice to investigate the acute impact of Aβ on sleep architecture. Compared to the control groups, APP/PS1 mice showed increased wakefulness, by 42.9% in the 12-h light phase and 12.1% over 24 h. Similarly, wakefulness was increased by 43.57% in the 12-h light phase, 17.62% in the 12-h dark phase, and 26.12% over 24 h in Aβ-treated mice (Figure 1A). The increase in wakefulness was accompanied by a reduction in non-rapid eye movement (NREM) sleep: NREM sleep was decreased by 16.9% during the 12-h dark period and 10.3% over 24 h in APP/PS1 mice, and in Aβ-treated mice by 18.41% in the 12-h light phase, 21.12% in the 12-h dark phase, and 19.62% over 24 h (Figure 1B). However, no difference in REM sleep was observed in APP/PS1 or Aβ-treated mice during the 12-h light phase, 12-h dark phase, or 24 h total when compared to the respective control groups (Figure 1C). Compared to the control groups, the increased waking time in the 12-h light phase of APP/PS1 mice was due to an increase in average episode duration, while the decrease in NREM was due to a decrease in average duration. There was no significant difference in the average duration and number of REM episodes during the 12-h light and 12-h dark phases.
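The two-group comparisons above use Student's t-test. A minimal standard-library sketch of the equal-variance test statistic, compared against the tabulated critical value, is shown below; the example data and the n = 5 group size are illustrative, not the study's measurements.

```python
import math

# Critical |t| for a two-sided test at alpha = 0.05 with 8 degrees of
# freedom (two groups of n = 5), from standard t tables.
T_CRIT_DF8 = 2.306

def t_statistic(a, b):
    """Two-sample Student's t statistic (equal-variance form),
    df = len(a) + len(b) - 2."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

def significant(a, b, t_crit=T_CRIT_DF8):
    """True if the two-sided test rejects equality of means at alpha = 0.05."""
    return abs(t_statistic(a, b)) > t_crit
```

For three or more groups, the analogous one-way ANOVA F statistic would be compared against an F critical value; in practice both tests were run in Prism 7.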
Similarly, the increase in wakefulness caused by Aβ treatment in the 12-h light phase was due to an increase in average episode duration, while the decrease in NREM was due to a decrease in average duration together with an increase in episode number. The increase in wakefulness caused by Aβ treatment in the 12-h dark period was due to an increase in average duration, and the decrease in NREM was due to a decrease in episode number. The average duration and number of REM episodes in the 12-h light phase were not statistically different. However, the average duration of REM decreased in the 12-h dark phase, while the number of REM episodes did not change significantly (Figure 1D-G). 3.2 The number of activated orexin A neurons and the expression of orexin A were both upregulated in APP/PS1 and Aβ-injected mice: The c-Fos gene, an immediate early gene transcribed when neurons are activated, has been extensively used as a marker of activated neurons (Joo et al. 2016). We assessed activated hypothalamic orexin neurons using immunofluorescence staining for c-Fos and orexin A. As shown in Fig 3A and B, the number of activated orexin A neurons was significantly increased in APP/PS1 and Aβ-injected mice compared with their respective control groups. Next, we examined the expression of orexin A and found that the mRNA levels of prepro-orexin, the common precursor of orexin A, were significantly upregulated in the APP/PS1 as well as the Aβ-injected mice (Figure 3C). The density and distribution of orexin A-positive neurons were increased in the brain of APP/PS1 mice: To demonstrate the rostral-to-caudal distribution of orexin A neurons in the brains of WT and AD mice, sequential coronal sections from different sectors were selected and analyzed by immunohistochemistry.
Orexin A-positive neurons were distributed in an ellipsoid shape located dorsolateral to the fornix and ventrolateral to the mammillothalamic tract, symmetrically on both sides of the third ventricle. The density of orexin A-positive neurons in the central sector of the tuberal hypothalamus was much higher than in the anterior and posterior sectors (Figure 4A, C, and E). Compared to WT mice, the number of orexin A neurons in total and in the LH and VMH of the anterior sector was significantly different (Figure 4A and B). There was also a statistically higher density of orexin A neurons in total and in the LH of the central sector (Figure 4C and D), and a slightly decreased density of orexin A-positive neurons in total and in the LH as well as the PeF in the posterior sector of the tuberal hypothalamus of APP/PS1 mice compared to WT mice (Figure 4E and F). The total density and distribution range of orexin A-positive neurons were significantly higher in the brain of APP/PS1 mice than in WT mice. Three-dimensional reconstruction showed increased orexin A neurons in the brain of AD mice: Three-dimensional reconstruction was conducted based on the immunohistochemistry images of orexin A-positive neurons. Orexin A-positive neurons were mainly located in the tuberal hypothalamic region, accompanying the fornix and mammillothalamic tract in both WT and APP/PS1 mice. The distribution and density of orexin A-positive neurons were higher in APP/PS1 mice than in WT mice (Figure 5A and B). These results indicated that orexin A-positive neuron numbers were higher in the brain of AD mice than in WT mice. Maps accurately showed the distribution of orexin A-positive neurons: It has been established that orexin A-positive neurons are mainly located in the tuberal hypothalamus (Peyron et al.
1998), and we further analyzed the detailed distribution of orexin A-positive neurons using serial slices from rostral to caudal in WT and APP/PS1 mice. In APP/PS1 mice, orexin A-positive neurons were first seen at a distance of 1.06 mm from the bregma, and their number increased as the distance from the bregma increased. Most orexin A-positive neurons were seen in the lateral hypothalamus area (LH). A few were seen in the ventromedial hypothalamic nucleus (VMH) and supraoptic nucleus (SOR) at 1.22 mm from the bregma, whereas no orexin A-positive neurons were present at this level in WT mice (Figure 6A). The range of orexin A-positive neurons in APP/PS1 mice was still limited to the LH, VMH, and SOR, even though their number increased, at 1.34 mm from the bregma, where several orexin A neurons were found in the LH of WT brains. The distribution then shifted as the distance increased. At a distance of 2.06 mm from the bregma, the orexin A neurons in WT mice were mainly found in the parasubthalamic nucleus (PSTH), perifornical nucleus (PeF), and posterior hypothalamic nucleus (PH). Orexin A-positive neurons in the APP/PS1 mice displayed a broader range, also occupying the medial tuberal nucleus (MTu) in addition to the PSTH, PeF, and PH (Figure 6B). At a distance of 2.30 mm from the bregma, orexin A-positive neurons were found only in the LH of APP/PS1 brains, while none were seen in WT brains at the same distance (Figure 6C). Taken together, these data suggested that orexin A-positive neurons appear earlier and disappear later along the rostral-caudal axis in APP/PS1 mice than in WT mice. 3.6 RapiClear cleared the brains of WT and APP/PS1 mice: In general, serial sectioning may introduce deformation and damage of tissues, and inaccuracies may occur during three-dimensional reconstruction.
Thus, we adopted the RapiClear technique, which preserves tissue structure for analysis of the three-dimensional and topological morphology of neurons, to further examine the distribution of orexin A-positive neurons. As shown in Figure 7A, the brain tissue became uniformly transparent after immersion in the refractive-index-specified solution. Tissue structure, cellular architecture, and fluorescence signals were well preserved in the RapiClear-cleared brain (Figure 7B). The density of orexin A neurons in the brain of APP/PS1 mice was again significantly higher than in WT mice (Figure 7C). We then observed the morphology of orexin A neurons and found that their diameter was consistently around 10-25 μm, with no significant difference between the brains of WT and APP/PS1 mice (Figure 7D-F). Discussion: Sleep abnormalities have been observed in AD for decades (Weng et al. 2020). In the present study, we examined the sleep-wake cycle and evaluated the relationship between the immunoreactivity and expression of orexin A and Aβ in the tuberal hypothalamus, with a special focus on the spatial distribution of labeled neurons in AD models. Sleep dysfunction is considered a core component of AD. The 6-month-old APP/PS1 AD model has previously been shown to exhibit increased wakefulness during the 12-h light phase (Zhang et al. 2019a; Zhurakovskaya et al. 2019); other studies have reported that 9-month-old APP/PS1 female mice displayed reduced REM and NREM sleep across both light and dark phases (Roh et al. 2012), and reduced NREM with increased wakefulness during the 12-h light phase has also been observed in PLB1-triple mice (Platt et al. 2011). There is mounting evidence that Aβ amyloidosis plays a key role in the bi-directional relationship between AD brain pathology and sleep disorder (Boespflug and Iliff 2018).
In addition, reduced NREM sleep has been reported to be associated with high cerebrospinal fluid Aβ42 levels in cognitively normal elderly subjects (Varga et al. 2016). Although several studies have explored sleep disorder using EEG/EMG in transgenic AD mice, sleep-wake episode number and mean duration had not previously been analyzed. Results from our study demonstrated increased wakefulness with longer mean episode duration and decreased sleep around the lights-on and lights-off transitions in APP/PS1 mice, using sleep episode and duration analysis. Our study also revealed that Aβ-administered mice showed excessive wakefulness and less NREM during both the light and dark phases, providing direct evidence that the accumulation of Aβ is a crucial factor promoting wakefulness in the progression of AD. The sleep-wake cycle is precisely regulated by many brain areas and neural circuits, including the brainstem, midbrain, thalamus, hypothalamus, and basal forebrain. Recently, the orexinergic system has received extensive attention in AD for its vital function. High levels of orexin in the cerebrospinal fluid, together with sleep impairment, were seen in patients with AD (Gabelle et al. 2017; Liguori et al. 2014). Some studies have suggested that blocking the orexinergic system might significantly contribute to reduced Aβ levels and decreased wakefulness (Hagan et al. 1999; Lee et al. 2005). Treating sleep disorder has been identified as an effective strategy for improving the pathological changes seen in AD patients with poor sleep (Cousins et al. 2019). Our results demonstrated that the number of activated orexin A neurons labeled with c-Fos was increased, accompanied by enhanced expression of prepro-orexin, in APP/PS1 and Aβ-treated mice. Based on these physiological and molecular alterations, we established an anatomical display of the spatial distribution, cell density, and cellular morphology of orexin A neurons in three dimensions.
The data demonstrated that neurons immunoreactive to orexin A were distributed in an ellipsoid shape centered in the tuberal hypothalamus and located dorsal to the fornix, with fewer neurons in the anterior and posterior sectors. The central sector displayed a higher population of orexin A-positive neurons in APP/PS1 mice than in WT mice, whereas no significant increase, or even a small decrease, was observed in the anterior and posterior sectors. This may reflect the extended distribution of orexin A neurons in APP/PS1 mice (1.06 mm to 2.30 mm caudal to the anterior fontanelle, versus 1.34 mm to 2.18 mm in normal mice), which lowers the average subpopulation per slice in the corresponding sectors. It is well established that orexin A-positive neurons in the LH regulate sleep-wake behavior via several neural circuits. Orexin receptors exist on monoaminergic neurons, including noradrenaline (NE) neurons in the LC, histamine (HA) neurons in the tuberomammillary nucleus (TMN), and 5-hydroxytryptamine (5-HT) neurons in the dorsal raphe nucleus (DRN), which enables orexin neurons to project to them and maintain wakefulness [J Neurosci, 2011, 31(17): 6518-6526]. Orexin neurons in the LH also receive projections from sleep-related nuclei: GABAergic neurons in the ventrolateral preoptic nucleus (VLPO) and median preoptic area (MnPO) project densely to orexin neurons in the LH, and light stimulation of GABAergic neurons in these two nuclei during sleep can effectively inhibit LH orexin neurons, thereby maintaining the sleep state [Front Neural Circuits, 2013, 7: 192]. In our study, the increased number and earlier rostral appearance of orexin A-positive neurons may explain the sleep disorder in AD mice.
Additionally, in the AD model, a few orexin A neurons were present in the periventricular nucleus (PE), at the margin of the third ventricle. These neurons might contribute to the progression of AD via activation of mediator-regulated signaling proteins in the CSF of the third ventricle. The structure, distribution, and projection characteristics of orexin A neurons are the basis of their various physiological functions, determining the properties of synaptic bioelectrical signals and the capacity to transmit information. A functional dichotomy has been reported for orexin neurons: those located in the perifornical and dorsomedial hypothalamic areas (PFA-DMH) regulate arousal, waking, and the response to stress, while those in the lateral hypothalamus and ventral tegmental area are involved in reward-based learning and memory (Harris and Aston-Jones 2006). Using a dual retrograde tracer strategy, another study proposed that orexin neurons can be classified by their downstream projections, although these classes do not show a topographic location within the hypothalamus: orexin neurons projecting to the LC and TMN were mainly wakefulness/arousal populations, and those projecting to the NAc and VTA were reward populations (Iyer et al. 2018). Few previous results have described the structure and distribution of orexin A neurons in three-dimensional morphology; earlier work was limited to two-dimensional space (Luna et al. 2017; Cheng et al. 2003). In the current study, we compared expression, population, and morphology using several approaches, including tissue optical clearing, tracing, deep imaging, and three-dimensional reconstruction, and determined an increased population and more extensive distribution in the AD model. Orexin neurons in some regions remain indistinct, and their function in the regulation of sleep and the progression of AD needs to be explored further.
Additionally, previous studies have shown that orexin A neurons can be both unipolar and bipolar cells with round, oval, and spindle shapes, exhibiting an average cell diameter of about 21 μm in the rat brain (Cheng et al. 2003). We found that the shape of orexin neurons was also diverse in mouse brains, with a diameter of 10-25 μm, slightly smaller than in rats. There is increasing evidence that the orexin system is strongly implicated in sleep disorder and AD pathogenesis (Kang et al. 2009; Liguori et al. 2020). Future studies should investigate potential therapeutic strategies by direct pharmacological intervention at the differentially distributed orexin A neurons in patients with AD. Figure 1 Sleep characteristics of WT, APP/PS1, saline, or Aβ-treated groups. For these four mouse groups, (A, B, and C) total time spent in W, NREM, and REM during the 12-h light phase, 12-h dark phase, and 24 h total was calculated; (D, E, F, and G) the time course of the changes in wakefulness, REM sleep, and NREM sleep was calculated. Data are expressed as means±SEM (n=6-8). *P < 0.05, **P < 0.01, ***P < 0.001. Data were analyzed using Student's t-test. Figure 2 Sleep characteristics of WT, APP/PS1, saline, or Aβ-treated groups. (A, B, C, and D) Representative spectrograms of EMG and brain state from 06:00 to 11:00 are shown. Data are expressed as means±SEM (n=6-8). *P < 0.05, **P < 0.01, ***P < 0.001. Data were analyzed using Student's t-test. (C, D) Slices from the central sector were used for immunohistochemical staining for orexin A (brown) and quantitative analysis of orexin A-positive neurons. (E, F) Slices from the posterior sector were used for immunohistochemical staining for orexin A (brown) and quantitative analysis of orexin A-positive neurons.
LH: lateral hypothalamus area; PSTH: parasubthalamic nucleus; PeF: perifornical nucleus; PH: posterior hypothalamic nucleus; VMH: ventromedial hypothalamic nucleus; MCLH: magnocellular nucleus of the lateral hypothalamus; DMH: dorsomedial hypothalamic nucleus. Scale bar: 100 μm. Data are expressed as means±SEM (n=6-8). *P < 0.05, **P < 0.01, ***P < 0.001. Data were analyzed using Student's t-test. Sections are ordered from rostral to caudal; the hypothalamus is subdivided by dotted lines. Schematic drawings show the general morphology of the nuclei in which orexin A-immunoreactive (ORX-ir) neurons were distributed. (A) ORX-ir neurons at distances of -1.06 mm and -1.22 mm from the bregma (the reference zero point) in APP/PS1 mice. (B) ORX-ir neurons from -1.34 mm to -2.18 mm from the bregma. (C) ORX-ir neurons at distances of -2.18 mm and -2.30 mm from the bregma in APP/PS1 mice (6 mice/group).
Micronuclei Induced by Low Dose Rate Irradiation in Early Spermatids of p53 Null and Wild-Type Mice. Micronucleus / Spermatid / p53 / Radiation / Dose-rate effect. To obtain evidence of the dose-rate effect on the induction of micronuclei in early spermatids, we observed micronucleus frequencies in wild-type p53(+/+), heterozygous p53(+/−) and null p53(−/−) mice 14 days after gamma-ray irradiation at a high (1,020 mGy/min) or a low (1.2 mGy/min) dose rate. A dose- and dose-rate-related increase in micronuclei was seen in early spermatids, with no difference between the different p53 statuses. These data were best fitted by a linear-quadratic dose-response model at the high dose rate, and by a linear dose-response model at the low dose rate. The yields at 1.2 mGy/min were significantly lower than those at 1,020 mGy/min in the same manner, independent of p53 status. Testis weight declined significantly after 3 Gy irradiation but did not depend on dose rate. In our other studies, we observed the complete elimination both of malformation in fetuses and of CD34 mutant T-lymphocytes in p53(+/+) mice, but not in p53(−/−) mice, after irradiation. This indicates that concerted DNA repair and p53-dependent apoptosis are likely to completely eliminate mutagenic damage from irradiated tissues at low doses or dose rates in teratogenesis and lymphocytes. In the germ cells, however, irradiation at 1.2 mGy/min was mutagenic, independent of p53 status.
INTRODUCTION The major risks of low-level radiation are mutagenesis, teratogenesis and carcinogenesis. The p53 gene is implicated in cell cycle delay and the activation of apoptosis after exposure to DNA-damaging agents, and would therefore be expected to influence radiosensitivity. A p53-deficient environment results in increased survival of cells bearing DNA damage, thereby leading to an increased mutation frequency and ultimately predisposing to malignancy and teratogenesis. It was reported, however, that in the late period of organogenesis, p53(−/−) embryonic mice were more sensitive to the killing effect than were the p53(+/−) and p53(+/+) mice 4). These results show that the function of p53 is restricted not only to specific tissues, but also to specific stages in the development of tissues. It was also reported that p53 is important in the regulation of spermatogenesis of normal and irradiated testis, either by regulating cell proliferation or by regulating the apoptotic process in spermatogonia 5,6). The experiments reported here investigated whether differences in sensitivity to the induction of micronuclei in early spermatids after irradiation are related to differences in p53 gene status and dose rate. Animals and irradiation: Wild-type p53(+/+), heterozygous p53(+/−) and null p53(−/−) mice, 6-12 weeks of age at the time of irradiation, were used, as previously described 1-3). The experiments were carried out under the control of the Ethics Committee of Animal Care and Experimentation, in accordance with the Guiding Principle for Animal Care and Experimentation, University of Occupational and Environmental Health, Japan. Mice were exposed to 1, 2 or 3 Gy of 137Cs γ-rays at a high dose rate of 1,020 mGy/min, using a Gammacell 40 Exactor (Nordion International, Canada), or at a low dose rate of 1.2 mGy/min, using an Exposure Instrument SK-951 (Sangyo-Kagaku, Osaka, Japan). At least five animals were used per dose and dose rate.
Micronucleus assay in early spermatids: Spermatids were harvested 14 days after irradiation. This time corresponds to treatment of cells at the developmental stage of pre-leptotene (primary spermatocytes in G1 or S phase). Spermatid harvesting, staining and scoring of micronuclei in round spermatids were performed as previously described 7). Briefly, the testes were excised from each animal, the tunicas were removed, and the seminiferous tubules were gently minced in HBSS medium. The cell suspensions were incubated in 2 mg/ml collagenase (Wako Pure Chemical Industries, Osaka) for 20 min at 33°C in a shaking water bath. The cell suspensions were filtered through a stainless-steel filter, washed, and fixed in 10% neutral buffered formalin. Cells were stained with DAPI (4′,6-diamidino-2-phenylindole dihydrochloride, Sigma Chemical, St. Louis, MO, USA) at 1.5 µg/ml, and the micronucleus frequency was scored in 1,000 spermatids per animal under a fluorescence microscope. Statistical analysis: Student's t-test was used to compare the micronucleus frequencies at each radiation dose. Two-way analysis of variance followed by the Scheffe method for contrasts was used to detect significant differences in the dose-rate effect at 3 Gy between p53 gene statuses. Differences were considered statistically significant when P<0.05. RESULTS AND DISCUSSION Spontaneous micronucleus frequency showed no significant p53-dependent increase. The frequencies of micronuclei increased dose-dependently for all p53 gene statuses when irradiation was given at 1,020 mGy/min. These data were best fitted by a linear-quadratic dose-response model. The frequencies at 1.2 mGy/min also increased dose-dependently for all p53 gene statuses; however, these were best fitted by a linear dose-response model (Fig.
1). A similar dose-rate dependence has already been observed in the frequency of hypoxanthine phosphoribosyl transferase (HPRT)-deficient T lymphocytes in the spleen of the irradiated C57BL mouse 8). However, there have been no reports of a dose-rate effect on micronucleus induction in early spermatids. The yields at 1.2 mGy/min were significantly lower than those at 1,020 mGy/min (P<0.001) and higher than those of controls (P<0.001) in the same manner for all p53 gene statuses (Fig. 1, 2). Testis weight declined significantly (P<0.001) after 3 Gy irradiation but did not depend on dose rate for any p53 gene status (Fig. 3). Apoptosis induced in male germ cells following radiation is dependent on functional p53 6,9). Hasegawa et al. 9) showed that p53 is induced in spermatogonia and plays a central role in DNA damage-induced spermatogonial apoptosis after irradiation. However, in p53-deficient mice spermatogonial apoptosis can still be induced by ionizing radiation 6,9). Recently it was reported that p53-independent apoptotic pathways involving p73 exist in spermatogonia, although they act less efficiently than the p53 route 10).
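Choosing between the linear and linear-quadratic dose-response models can be sketched as a least-squares comparison; the function below fits both forms and reports the residual sum of squares of each. The example doses and frequencies in the usage are synthetic, not the study's data, and the authors' actual model-selection procedure is not specified in this excerpt.

```python
import numpy as np

def fit_dose_response(dose_gy, mn_freq):
    """Least-squares fits of a linear (c + a*D) and a linear-quadratic
    (c + a*D + b*D^2) dose-response model; returns the residual sum of
    squares of each so the better-fitting model can be identified."""
    rss = {}
    for name, degree in (("linear", 1), ("linear-quadratic", 2)):
        coeffs = np.polyfit(dose_gy, mn_freq, degree)
        predicted = np.polyval(coeffs, dose_gy)
        rss[name] = float(np.sum((np.asarray(mn_freq) - predicted) ** 2))
    return rss
```

On data generated from a quadratic relation, the linear-quadratic fit drives the residual essentially to zero while the straight line leaves a clear residual, mirroring the high-dose-rate versus low-dose-rate distinction reported above.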
In vitro-transcribed guide RNAs trigger an innate immune response via the RIG-I pathway. Clustered, regularly interspaced, short palindromic repeat (CRISPR)–CRISPR-associated 9 (Cas9) genome editing is revolutionizing fundamental research and has great potential for the treatment of many diseases. While editing of immortalized cell lines has become relatively easy, editing of therapeutically relevant primary cells and tissues can remain challenging. One recent advancement is the delivery of a Cas9 protein and an in vitro–transcribed (IVT) guide RNA (gRNA) as a precomplexed ribonucleoprotein (RNP). This approach allows editing of primary cells such as T cells and hematopoietic stem cells, but the consequences beyond genome editing of introducing foreign Cas9 RNPs into mammalian cells are not fully understood. Here, we show that the IVT gRNAs commonly used by many laboratories for RNP editing trigger a potent innate immune response that is similar to canonical immune-stimulating ligands. IVT gRNAs are recognized in the cytosol through the retinoic acid–inducible gene I (RIG-I) pathway but not the melanoma differentiation–associated gene 5 (MDA5) pathway, thereby triggering a type I interferon response. Removal of the 5'-triphosphate from gRNAs ameliorates inflammatory signaling and prevents the loss of viability associated with genome editing in hematopoietic stem cells. The potential for Cas9 RNP editing to induce a potent antiviral response indicates that care must be taken when designing therapeutic strategies to edit primary cells. Introduction: Here, we show that IVT gRNAs induce expression of interferon beta (IFNβ) and interferon-stimulated gene 15 (ISG15) in a variety of human cell types. This activity depends upon RIG-I and MAVS but is independent of MDA5. The extent of the immune response depends upon the protospacer sequence, but removal of the 5'-triphosphate from gRNAs avoids stimulation of innate immune signaling.
The potential for Cas9 RNP editing to induce an antiviral response indicates that care must be taken when designing therapeutic strategies to edit primary cells. Results To investigate if mammalian cells react to IVT gRNA/Cas9 with an innate immune response, we first performed genome editing in human embryonic kidney 293 (HEK293) cells using Cas9 RNPs. To separate innate immune response from genome editing, we performed these experiments with a nontargeting gRNA that recognizes a sequence within blue fluorescent protein (BFP) and has no known targets within the human genome [26]. Constant amounts of recombinant Cas9 protein were complexed with different amounts of nontargeting IVT gRNA, and RNPs were transfected into HEK293 cells using CRISPRMAX lipofection reagent [27]. We harvested cells 30 h after transfection and measured transcript levels of interferon beta 1 (IFNB1) and ISG15 by quantitative real-time PCR (qRT-PCR; Fig 1A). Introduction of gRNAs caused a dramatic increase in both IFNB1 and ISG15 levels, and the presence of Cas9 protein did not have an effect on the outcome. Cas9 on its own did not induce IFNB1 or ISG15 expression. To our surprise, as little as 1 nM of gRNA was sufficient to trigger a 30-50-fold increase in the transcription of innate immune genes. We further found that a commonly administered amount of 50 nM gRNA can induce IFNB1 by 1,000-fold, which is equal to induction by canonical IFNβ inducers such as viral mRNA from Sendai virus [28] or a hepatitis C virus (HCV) PAMP [21,29] (Fig 1B). RNPs can be delivered into cells via different transfection methods, and while lipofection is cost-effective and easy to use, many researchers prefer electroporation for harder-to-transfect cells. We wondered if the transfection method would affect the IFNβ response and compared gRNA transfection via lipofection (Lipofectamine 2000 and RNAiMAX) to nucleofection (Lonza) (Fig 1C). 
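Fold inductions such as the 30-50-fold and 1,000-fold IFNB1 increases above are conventionally derived from qRT-PCR Ct values. A minimal sketch of the standard 2^-ΔΔCt (Livak) relative-quantification method is shown below; the source does not state the exact quantification scheme or reference gene used, so the function and its example Ct values are illustrative assumptions.

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the standard 2^-ddCt (Livak) method:
    dCt = Ct(target) - Ct(reference gene), computed per condition;
    ddCt = dCt(treated) - dCt(control); fold change = 2^-ddCt."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)
```

With hypothetical Ct values, a 10-cycle shift in the normalized target Ct corresponds to a roughly 1,000-fold induction, the scale reported for 50 nM gRNA.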
Lipofection led to a strong increase in IFNB1 and ISG15 transcript levels as early as 6 h posttransfection, and the response was sustained for up to 48 h. Nucleofection also caused an increase in innate immune signaling at early time points, but the response was milder than in lipofected samples and was greatly diminished by 48 h. Next, we asked if the innate immune response to gRNAs is a common phenomenon across different cell types and compared IFNβ activation in seven commonly used human cell lines of various lineages: human embryonic kidney cells 293 SV40 large T antigen (HEK293T), HEK293, Henrietta Lacks cells (HeLa), Jurkat, HCT116, HepG2, and K562 (Fig 2A). While the magnitude of induction varied between cell lines, all tested cell lines responded to IVT gRNA transfection with activation of IFNB1 expression, with the sole exception of K562 cells, which have a homozygous deletion of the IFNA and IFNB1 genes [30]. We also measured transcript levels of two major cytosolic pathogen recognition receptors, RIG-I (DExD-H-box helicase 58 [DDX58]) and MDA5 (interferon induced with helicase C domain 1 [IFIH1]), and noticed that all cell lines except K562 up-regulated these transcripts in response to introduction of gRNAs. We also confirmed these results at the protein level in HEK293 cells (Fig 2B). The RIG-I and MDA5 receptors complement each other by recognizing different structures in foreign cytosolic RNAs, but the exact nature of their ligands is not yet fully understood [31,32]. To investigate if IVT gRNAs are recognized via RIG-I or MDA5, we generated clonal knockout (KO) cell lines for RIG-I, MDA5, and their downstream interaction partner MAVS in HEK293 cells using CRISPR-Cas9. As expression of both RIG-I and MDA5 is itself stimulated by IFNβ, we confirmed successful KO after transfection with gRNAs by genomic PCR, Sanger sequencing, and western blot (S1A-S1C Fig). MAVS KO cells were confirmed by western blot (S1D Fig).
Strikingly, activation of IFNB1 expression after introduction of gRNAs was absent in RIG-I and MAVS KO cells, while MDA5 KO cells did not show a significant decrease in IFNB1 transcript levels (Fig 2C). This indicates that IVT gRNAs are exclusively recognized through RIG-I to trigger a type I interferon response. As the structural requirements of RIG-I ligands are still not completely understood, we wondered if different 20-nucleotide protospacers in gRNAs vary in their potency to trigger an innate immune response via RIG-I. We designed 10 additional nontargeting gRNAs that we in vitro transcribed and transfected into HEK293 cells. Surprisingly, we found that the cells responded to different protospacers with a wide range of IFNB1 expression. Several gRNAs produced very little innate immune response, and one gRNA (gRNA11) yielded no IFNB1 activation at all (Fig 3A). We speculated that the differential response may be correlated with the purity of the RNA product after in vitro transcription or the stability of the secondary structure of the RNA [33,34]. However, we found that there was no obvious correlation between the immune response to certain gRNAs and their purity; predicted protospacer secondary structure; full secondary structure, including the constant region; or predicted disruption of the constant region by mispairing with the protospacer (S2 Fig). When we separately nucleofected five of these gRNAs into primary CD34 + human hematopoietic stem and progenitor cells (HSPCs), we found that all gRNAs induced a strong immune response. Only gRNA11, which showed no immune stimulation in HEK293 cells, resulted in half the amount of ISG15 transcript (Fig 3B). These results indicate that RIG-I recognition patterns of IVT gRNAs are complex and difficult to anticipate a priori based on predicted properties of the variable protospacer and cell type. One well-established structural requirement of RIG-I ligands is the presence of a 5'-triphosphate group [35]. 
We asked if preparations that remove the 5'-triphosphate might avoid or reduce the innate immune response to IVT gRNAs. We first used a synthetic gRNA that lacks a 5'-triphosphate and verified that this gRNA does not induce IFNB1 expression when transfected into HEK293 cells (Fig 3C). Synthetic gRNAs are becoming more commonplace but are still an order of magnitude more expensive than in vitro transcription of gRNAs. This limits their application for high-throughput interrogation of gene function in primary cells. We therefore asked if treatment of IVT gRNA with phosphatases that remove the 5'-triphosphate would reduce IFNB1 induction. We tested calf intestinal alkaline phosphatase (CIP), shrimp alkaline phosphatase (SAP), 5'-RNA polyphosphatase (PP), and thermosensitive alkaline phosphatase (AP) and found that phosphatase treatment with CIP and AP abolished the IFNB1 response, while SAP and PP treatment only resulted in a reduction of the response (Fig 3C). We also compared purification of IVT gRNAs by solid-phase reversible immobilization (SPRI) beads to column purification and established that SPRI bead cleanup is not sufficient to completely avoid an immune response, even when more phosphatase is used (S3A and S3B Fig). Taken together, these results indicate that a 5'-triphosphate is a necessary requirement for gRNA-induced IFNB1 activation through RIG-I but that additional structural properties of the gRNAs also influence the magnitude of the immune response. Next, we asked if phosphatase treatment alters the genome editing potential of gRNAs. As gRNA1 targets the BFP gene, we used a HEK293T cell line with a stably integrated BFP reporter [26], nucleofected cells with phosphatase-treated gRNA-Cas9 RNPs, and monitored editing outcomes by T7 endonuclease I assay (S3C Fig).
We did not observe any significant difference in editing outcomes between synthetic, IVT, and phosphatase-treated gRNAs, suggesting that phosphatase treatment does not affect the function of the gRNA. When a cell initiates an antiviral immune response, it also undergoes cellular stress that can affect cell viability [36,37]. Hence, we asked if there is a correlation between the IFNβ response and cell viability after transfection with synthetic, IVT, or CIP-treated IVT gRNA. Not surprisingly, the viability of the very robust HEK293 cell line was not affected by the antiviral immune response (S3D Fig). We then turned to HSPCs, which are a much more sensitive cell type. We first nucleofected HSPCs with RNPs targeting the hemoglobin subunit beta (HBB) gene [12] and compared synthetic and IVT gRNA interferon stimulation and cell viability posttransfection. Double-strand breaks (DSBs) have been reported to cause innate immune stimulation and can themselves cause decreases in cell fitness [38,39]. Therefore, we performed controls using nuclease-dead Cas9 (dCas9) to form RNPs and confirmed by Sanger sequencing and TIDE analysis that dCas9-RNPs did not induce DSBs [40] (S3E Fig). We found a significant decrease in HSPC viability using both of the IVT gRNA RNPs that had an increase in IFN-stimulated genes ISG15 and RIG-I (Fig 3D and 3E). We did not see a substantial difference in viability or ISG expression between Cas9 and dCas9 RNPs, suggesting that nuclease activity leading to DNA damage did not cause the immune response. Next, we asked if CIP treatment of gRNAs could reverse the decrease in viability in HSPCs. We nucleofected HSPCs with dCas9 RNPs targeting a noncoding intron of Janus kinase 2 (JAK2) or Cas9 RNPs targeting the HBB gene and compared synthetic, IVT, and CIP-treated IVT gRNAs. Strikingly, CIP treatment restored viability in HSPCs (Fig 3F). We were also interested in editing outcomes in these samples and performed amplicon next-generation sequencing (NGS) for the HBB locus.

Fig 3 (legend, panels D-G). (D) Viability after transfection with Cas9 or dCas9 RNPs. dCas9 or Cas9 were complexed with synthetic ("syn") or IVT gRNA targeting the HBB gene. Viability was determined by trypan blue exclusion test. (E) qRT-PCR analysis of ISG15 and DDX58 (RIG-I) transcript levels in human primary HSPCs 16 h postnucleofection. dCas9 or Cas9 were complexed with synthetic or IVT gRNA targeting the HBB gene, respectively. Ct values were normalized against Ct of mock-nucleofected cells. Average values of two biological replicates +/−SD are shown. (F) Viability of human primary HSPCs 16 h posttransfection with RNPs. RNPs consisted of dCas9 complexed with synthetic, IVT, or CIP-treated IVT gRNAs targeting a noncoding intron of JAK2 (left panel) or Cas9 complexed with gRNAs targeting exon 1 of HBB (right panel). Viability was determined by trypan blue exclusion test. (G) Editing outcomes in HSPCs 48 h after nucleofection with RNPs targeting the HBB locus. Indel frequencies were determined by amplicon NGS. Statistical significances were calculated by unpaired t test (*p < 0.05, **p < 0.01, ***p < 0.0001). The underlying data for this figure can be found in S1 Data. AP, thermosensitive alkaline phosphatase; Cas9, CRISPR-associated 9; CIP, calf intestinal alkaline phosphatase; Ct, cycle threshold; dCas9, nuclease-dead Cas9; DDX58, DExD-H-box helicase 58; gRNA, guide RNA; HEK293, human embryonic kidney 293; HSPC, CD34+ human hematopoietic stem and progenitor cell; HBB, hemoglobin subunit beta; IFNβ, interferon beta; IFNB1, interferon beta 1; indel, insertion and deletion; IVT, in vitro-transcribed; JAK2, Janus kinase 2; NGS, next-generation sequencing; n.s., not significant; qRT-PCR, quantitative real-time PCR; PP, 5' RNA polyphosphatase; RIG-I, retinoic acid-inducible gene I; RNP, ribonucleoprotein; SAP, shrimp alkaline phosphatase.
While the phosphatase-treated gRNA performed similarly to the synthetic gRNA, the IVT gRNA resulted in slightly fewer insertions and deletions (indels) (Fig 3G).

Discussion

We have found that IVT gRNAs used with Cas9 RNPs for many genome-editing experiments can trigger a strong innate immune response in many mammalian cell types (Fig 4). Lipofection results in a stronger and longer-lasting response than nucleofection, possibly because lipofection delivers gRNAs to the cytosol, while nucleofection delivers mainly to the nucleus. Using isogenic KO clones, we found the gRNA-induced response is mediated via the antiviral RIG-I pathway and results in expression of genes that initiate an antiviral immune response. While introduction of IFN-stimulating gRNAs does not affect viability in HEK293 cells, we found that viability of primary HSPCs is negatively affected by the antiviral immune response. While DSBs on their own have been reported to induce an innate immune response [38], we found that triphosphate-containing gRNAs complexed with dCas9 induce an immune response and cell death in HSPCs. Only removal of the triphosphate is sufficient to reduce gRNA-induced innate immune signaling. These results have several implications. We suggest that the gene signature associated with type I interferon stimulation should be considered when studying the transcriptome of recently edited bulk populations of cells. Furthermore, all mammalian cells can both produce type I interferons and respond to them through the ubiquitously expressed receptor interferon alpha and beta receptor subunit 1 (IFNAR1) [41]. Even cells that have not been successfully transfected with RNPs could sense the IFNβ produced by neighboring cells and activate downstream antiviral defense mechanisms. This could be an important consideration during in vivo genome editing applications, as RNP delivery into one set of cells could provoke a widespread innate immune response in the surrounding tissues.
We found that synthetic gRNAs completely circumvent the RIG-I-mediated response, offering a valuable path to avoid innate immune signaling during therapeutic editing. However, synthetic gRNAs can become expensive when performing experiments that require testing or using many gRNAs. We found that a cost-effective phosphatase treatment to remove the 5'-triphosphate before transfection reduces the immune response and increases posttransfection viability in HSPCs. Furthermore, editing outcomes in cell lines with phosphatase-treated gRNA were comparable to those of IVT gRNAs, suggesting that removal of 5'-phosphate groups does not abolish gRNA function. In fact, in sensitive HSPCs, phosphatase-treated gRNA slightly outperformed IVT gRNA, which is possibly due to reduced viability in samples transfected with IVT-RNPs. Thus, consideration of a potential innate immune stimulation prior to choice of genome editing reagents, study design, and implementation of controls is critical when performing genome editing using RNPs in mammalian cells. While we were preparing this manuscript for submission, the Kim group reported similar results in HeLa cells and primary human CD4+ T cells [42]. They confirmed that the type I interferon response is dependent on the presence of a 5'-triphosphate group and that CIP treatment can increase viability by avoiding the antiviral response. These results are very much in alignment with our findings and extend the potential problem of innate immune signaling to additional cell types. Our study adds extra depth by further outlining the mechanisms by which gRNAs are sensed. We show that gRNA sensing depends upon RIG-I and MAVS, but MDA5 KO cells are fully capable of inducing IFNβ after IVT gRNA transfection. Hence, gRNA sensing is independent of the MDA5 PAMP receptor, consistent with RIG-I's preference for short double-stranded RNA (dsRNA) structures and MDA5's preference for long dsRNA fragments [43].
Furthermore, we show that in addition to a 5'-triphosphate, the protospacer sequence is also critical in determining the intensity of the IFNβ response. Not only do different gRNAs induce different innate immune responses, but some gRNAs induce no response at all. However, this seems to be cell-type specific, as we found that sensitive cells such as primary HSPCs react to the same gRNAs with a strong immune response independently of the protospacer. It has been proposed that 5'-base-paired RNA structures are required to activate antiviral signaling via RIG-I, but we found no correlation between signaling and a variety of predicted RNA properties, including secondary structure [33]. Our results therefore suggest that the mechanism of gRNA sensing by the RIG-I pathway is relatively complex in that it requires 5'-triphosphates but that this moiety is not sufficient to induce the response. Additionally, we have not ruled out the possibility that gRNAs could be recognized by Toll-like receptors (TLRs), though we and others [42] have found that KO of RIG-I is sufficient to completely abrogate gRNA-induced signaling in multiple cell contexts. The role of TLR recognition could be addressed in future work to delineate the full set of molecular features responsible for gRNA activation of innate immunity, which might yield accurate predictors of innate immune signaling in general.

Materials and methods

In vitro transcription of gRNAs

gRNA was synthesized by assembly PCR and in vitro transcription as previously described [12]. Briefly, a T7 RNA polymerase substrate template was assembled by PCR from a variable 58-59 nt primer containing the T7 promoter, the variable gRNA guide sequence, and the first 15 nt of the nonvariable region of the gRNA (T7FwdVar primers, 10 nM; see S1 and S2 Tables for gRNA sequences), and an 83 nt primer containing the reverse complement of the invariant region of the gRNA (T7RevLong, 10 nM), along with amplification primers (T7FwdAmp, T7RevAmp, 200 nM each).
The two long primers anneal in the first cycle of PCR and are then amplified in subsequent cycles. Phusion high-fidelity DNA polymerase was used for assembly (New England Biolabs). Assembled template was used without purification as a substrate for in vitro transcription by T7 RNA polymerase, using the HiScribe T7 High Yield RNA Synthesis kit (New England Biolabs) following the manufacturer's instructions. Resulting transcription reactions were treated with DNAse I (New England Biolabs), and RNA was purified by treatment with a 5X volume of homemade SPRI beads (comparable to Beckman-Coulter AMPure beads) and elution in RNAse-free water.

Phosphatase treatment of IVT gRNAs

gRNAs were treated with phosphatases as follows: CIP (New England Biolabs, 30 U), SAP (New England Biolabs, 10 U), PP (Lucigen, 20 U), and FastAP thermosensitive AP (Thermo Fisher Scientific, 10 U) were added per 20 μl in vitro transcription reaction, and samples were incubated at 37˚C for 3 h before proceeding to purification and DNAseI treatment. gRNA was purified using a Qiagen RNeasy Mini Kit (Qiagen) or by a 5X volume of homemade SPRI beads (comparable to Beckman-Coulter AMPure beads). The detailed protocol and additional notes are available online (dx.doi.org/10.17504/protocols.io.nghdbt6).

In vitro transcription of HCV PAMP and Sendai virus DI RNA

The HCV PAMP in vitro transcription template [21] was generated by annealing HCV fwd and rev (5 μM each) oligos (S1 Table). In the subsequent in vitro transcription reaction, 2 μl of the annealed product was used as DNA template, using the HiScribe T7 High Yield RNA Synthesis kit (New England Biolabs). The plasmid containing the SeV DI RNA [28] was a gift from Prof. Peter Palese, Icahn School of Medicine at Mount Sinai, New York. Plasmid was digested with HindII/EcoRI before in vitro transcription with the HiScribe T7 High Yield RNA Synthesis kit (New England Biolabs).
The sequence of the IVT DI, including the T7 promoter, hepatitis delta virus ribozyme, and the T7 terminator, is TAATACGACTCACTATAACCAGACAAGAGTTTAAGAGA. The sequence of the SeV DI is highlighted in boldface. Both HCV PAMP and SeV DI RNA were purified by treatment with a 5X volume of homemade SPRI beads (comparable to Beckman-Coulter AMPure beads) and elution in RNAse-free water.

Synthetic gRNAs

Chemically synthesized gRNAs, which were purified using high-performance liquid chromatography (HPLC), were purchased from Synthego.

RNA quality control

IVT gRNAs were analyzed using a Bioanalyzer. This was performed by the UC Berkeley Functional Genomics Laboratory (FGL) core facility. gRNAs were denatured for 5 min at 70˚C before analysis on the Bioanalyzer.

Cas9 protein preparation

The Cas9 construct (pMJ915) contained an N-terminal hexahistidine-maltose binding protein (His6-MBP) tag, followed by a peptide sequence containing a tobacco etch virus (TEV) protease cleavage site. The protein was expressed in Escherichia coli strain BL21 Rosetta 2 (DE3; EMD Biosciences) grown in TB medium at 16˚C for 16 h following induction with 0.5 mM IPTG. The Cas9 protein was purified by a combination of affinity, ion exchange, and size exclusion chromatographic steps. Briefly, cells were lysed in 20 mM HEPES pH 7.5, 1 M KCl, 10 mM imidazole, 1 mM TCEP, 10% glycerol (supplemented with protease inhibitor cocktail [Roche]) in a homogenizer (Avestin). Clarified lysate was bound to Ni-NTA agarose (Qiagen). The resin was washed extensively with lysis buffer, and the bound protein was eluted in 20 mM HEPES pH 7.5, 100 mM KCl, 300 mM imidazole, 1 mM TCEP, 10% glycerol. The His6-MBP affinity tag was removed by cleavage with TEV protease, while the protein was dialyzed overnight against 20 mM HEPES pH 7.5, 300 mM KCl, 1 mM TCEP, 10% glycerol.
The cleaved Cas9 protein was separated from the fusion tag by purification on a 5 ml SP Sepharose HiTrap column (GE Life Sciences), eluting with a linear gradient of 100 mM-1 M KCl. The protein was further purified by size exclusion chromatography on a Superdex 200 16/60 column in 20 mM HEPES pH 7.5, 150 mM KCl, and 1 mM TCEP. Eluted protein was concentrated to 40 μM, flash-frozen in liquid nitrogen, and stored at −80˚C.

All transfections in cell lines were performed in 12-well cell culture dishes using 2 × 10^5 cells per transfection. For lipofection, we used Lipofectamine CRISPRMAX-Cas9, Lipofectamine RNAiMAX, or Lipofectamine 2000 Transfection Reagent (all Invitrogen) in reverse transfections according to the manufacturer's protocols. Unless stated otherwise, 2 × 10^5 cells were transfected with 50 pmol of RNA to a final concentration of 50 nM and harvested 24-30 h posttransfection for RNA extraction.

RNA extraction, cDNA synthesis, and qRT-PCR

Cell cultures were washed with PBS prior to RNA extraction. Total RNA was extracted using RNeasy Miniprep columns (Qiagen) according to the manufacturer's instructions, including the on-column DNAseI treatment (Qiagen). One μg of total RNA was used for subsequent cDNA synthesis using Reverse Transcription Supermix (Biorad). For qRT-PCR reactions, a total of 20 ng of cDNA was used as a template and combined with primers (see S3 Table) and EvaGreen Supermix (Biorad), and amplicons were generated using standard PCR amplification protocols for 40 cycles on a StepOnePlus Real-Time PCR system (Applied Biosystems). Ct values for each target gene were normalized against Ct values obtained for GAPDH to account for differences in loading (ΔCt). To determine "fold activation" of genes, ΔCt values for target genes were then normalized against ΔCt values for the same target gene for mock-treated cells (ΔΔCt).

Generation of KO cell lines

For CRISPR/Cas9 genome editing, we used a plasmid encoding both the Cas9 protein and the gRNA.
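The "fold activation" calculation described in the qRT-PCR section above (ΔCt normalization to GAPDH, then ΔΔCt normalization to mock-treated cells) can be sketched in a few lines. The function name and Ct values below are illustrative, and the 2^-ΔΔCt conversion assumes roughly 100% amplification efficiency (one doubling per cycle):

```python
def fold_activation(ct_target, ct_gapdh, ct_target_mock, ct_gapdh_mock):
    """Fold activation of a target gene by the ddCt method.

    Ct values are raw cycle thresholds; GAPDH serves as the loading
    control and mock-treated cells as the reference condition, as in
    the text. Assumes ~100% amplification efficiency.
    """
    d_ct = ct_target - ct_gapdh              # normalize to GAPDH (dCt)
    d_ct_mock = ct_target_mock - ct_gapdh_mock
    dd_ct = d_ct - d_ct_mock                 # normalize to mock (ddCt)
    return 2.0 ** (-dd_ct)                   # fold change vs. mock

# Hypothetical Ct values: the target crosses threshold 5 cycles earlier
# (relative to GAPDH) in treated vs. mock cells -> 32-fold activation.
print(fold_activation(20.0, 18.0, 25.0, 18.0))  # -> 32.0
```

A ΔΔCt of 0 (treated and mock identical after GAPDH normalization) yields a fold activation of exactly 1, which is a quick sanity check for the implementation.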
pSpCas9(BB)-2A-GFP (px458) was a gift from Feng Zhang (Addgene plasmid #48138). We designed gRNA sequences using the free CRISPR KO design online tool from Synthego. Two different gRNA sequences were designed for RIG-I and MDA5, respectively (see S3 Table). Using a Lonza 4D nucleofector (Lonza) with the manufacturer's recommended settings, 2 × 10^5 HEK293 cells were nucleofected with 2 μg of px458 plasmids containing both targeting gRNAs in a 1:1 ratio. After 48 h, cells were harvested and subjected to fluorescence-activated cell sorting (FACS). Cells expressing high levels of GFP were single-cell sorted into 96-well plates to establish clonal populations. For the screening process, genomic DNA (gDNA) from clonal populations was extracted using QuickExtract solution (Lucigen). For KO of RIG-I and MDA5, we screened clones by genomic PCR, looking for a PCR product that is significantly smaller in size than that of WT HEK293 cells (see S4 Table for primers). PCR products were then Sanger sequenced by the UC Berkeley DNA Sequencing facility using the forward primers of the PCR reaction as sequencing primers.

Western blot

Cells were harvested and washed with PBS. Cells were lysed in 1x RIPA buffer (EMD Millipore) for 10 min on ice. Samples were spun down at 14,000 × g for 15 min, and protein lysates were transferred to a new tube. Fifty μg of total protein was separated by SDS-PAGE and transferred to a nitrocellulose membrane. Blots were blocked in 4% skim milk in 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, and 0.05% Tween 20 (TBST) and then probed for RIG-I, MDA5, MAVS, or GAPDH protein using antibodies against RIG-I (D14G6), MDA5 (D74E4), MAVS (D5A9E), or GAPDH (14C10), respectively (all Cell Signaling Technologies). This was followed by incubation with secondary antibody IRDye 800CW Donkey anti-Rabbit IgG (Li-Cor). Protein standards (GE Healthcare) were loaded in each gel for size estimation. Blots were visualized using a Li-Cor Odyssey Clx (Li-Cor).
T7 endonuclease I assay

Cells were harvested 24 h after transfection and washed with PBS. gDNA was extracted using QuickExtract solution (Lucigen) following the manufacturer's protocol. PCR across the target site in the BFP gene was run using the BFP amplicon primer set (S4 Table). Two hundred ng of PCR product was heated to 100˚C and slowly cooled down to let the DNA reanneal. Annealed DNA was digested with T7 endonuclease I (NEB) for 20 min at 37˚C. DNA was then analyzed by agarose gel electrophoresis.

TIDE analysis

PCR products were generated with target-specific HBB primer set 1 and sequenced, and Sanger traces were then analyzed with the TIDE webtool (http://tide.nki.nl).

PCR and next-generation amplicon sequencing preparation

Using primer set 1, 50-100 ng of gDNA from edited CD34+ cells was amplified at HBB sites (S4 Table). The PCR products were SPRI cleaned, followed by amplification of 20-50 ng of the first PCR product in a second 12-cycle PCR using primer set 2 (S4 Table). The second PCR products were then SPRI cleaned, followed by amplification of 20-50 ng of the second PCR product in a third 9-cycle PCR using Illumina-compatible primers (primers designed and purchased through the Vincent J. Coates Genomics Sequencing Laboratory [GSL] at University of California, Berkeley), generating indexed amplicons of an appropriate length for NGS. Libraries from 100-500 pools of edited cells were pooled and submitted to the GSL for paired-end 300 cycle processing using a version 3 Illumina MiSeq sequencing kit (Illumina, San Diego, CA) after quantitative PCR measurement to determine molarity.

Next-generation amplicon sequencing analysis

Samples were deep sequenced on an Illumina MiSeq at 300 bp paired-end reads to a depth of at least 10,000 reads. A modified version of CRISPResso [44] was used to analyze editing outcomes. Briefly, reads were adapter trimmed and then joined before performing a global alignment between reads and the reference sequence using NEEDLE [45].
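The per-read indel classification used in this analysis (a read counts as edited if an insertion or deletion overlaps the cut site or lies within 3 bp of either side of it) can be sketched as follows. Representing each indel as a reference-coordinate interval is a simplified stand-in for parsing the NEEDLE/CRISPResso alignments; the function names and example coordinates are hypothetical:

```python
def read_has_indel_near_cut(indel_events, cut_site, window=3):
    """True if any indel in a read's alignment overlaps the cut site
    or falls within `window` bp of either side of it.

    `indel_events` is a list of (ref_start, ref_end) reference
    intervals, one per indel found in the read's global alignment.
    """
    lo, hi = cut_site - window, cut_site + window
    return any(start <= hi and end >= lo for start, end in indel_events)

def indel_rate(reads, cut_site, window=3):
    """Fraction of reads carrying an indel at or near the cut site."""
    if not reads:
        return 0.0
    hits = sum(read_has_indel_near_cut(ev, cut_site, window) for ev in reads)
    return hits / len(reads)

# Hypothetical example: cut site at reference position 100.
# Read 1 has a deletion spanning 98-100 (hit), read 2 an indel far
# away (miss), read 3 an insertion at 103 (within 3 bp, hit),
# read 4 is unedited -> 2 of 4 reads counted.
reads = [[(98, 100)], [(50, 52)], [(103, 103)], []]
print(indel_rate(reads, 100))  # -> 0.5
```

The interval-overlap test treats point insertions as zero-length intervals (ref_start == ref_end), so both insertions and deletions fall out of the same comparison.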
Indel rates were calculated as any reads in which an insertion or deletion overlaps the cut site or occurs within 3 base pairs of either side of the cut site, divided by the total number of reads.

S2 Fig. (A) Bioanalyzer results for gRNAs tested in Fig 3A. IVT gRNAs were denatured for 5 min at 70˚C before analysis. (B) Correlation between IFNB1 activation and RNA stability or hamming distance, respectively. Predicted RNA secondary structure was calculated using Vienna RNA Fold [46]. Hamming distance reflects the extent to which the protospacer might interact with the gRNA constant region. The predicted secondary structure of the constant region in isolation was compared to the predicted secondary structure of the constant region when paired with the protospacer. The hamming distance between the dot-bracket notation-predicted secondary structures in each context is shown. gRNA, guide RNA; IFNβ, interferon beta; IFNB1, interferon beta 1; IVT, in vitro-transcribed. (TIF)

S3 Fig. gRNA purification and complete removal of 5'-triphosphate groups are essential to avoid an innate immune response. (A) qRT-PCR analysis of IFNB1 transcript levels in HEK293 cells transfected with synthetic, IVT, and CIP IVT gRNAs (gRNA1). After in vitro transcription and CIP treatment, gRNAs were purified with SPRI beads or spin columns, respectively. Cells were harvested for RNA extraction 30 h after transfection with RNAiMAX transfection reagent. Average values of 3 biological replicates +/−SD are shown. (B) qRT-PCR analysis of IFNB1 transcript levels in HEK293 cells transfected with IVT gRNA via RNAiMAX lipofection. IVT gRNAs were treated with 0, 10, 20, or 30 units (U) of CIP, respectively, before purification with SPRI beads. (C) T7E1 assay to determine cleavage efficiencies of phosphatase-treated IVT gRNA-RNPs targeting the BFP locus in HEK293T-BFP cells. HEK293T-BFP cells were nucleofected with Cas9/dCas9-RNPs and harvested after 24 h. PCR-amplified target DNA was heated, reannealed, and digested with T7E1 before gel electrophoresis. (D) Viability of HEK293 cells after transfection with gRNAs. Viability was determined using trypan blue exclusion test. (E) Editing outcome in primary HSPCs that were nucleofected with dCas9 or Cas9-IVT gRNA RNPs targeting the HBB locus. Amounts of indels were determined 24 h after transfection by PCR across the target site, followed by Sanger sequencing and TIDE analysis. Statistical significances were calculated by unpaired t test (*p < 0.05, ***p < 0.0001). The underlying data for this figure can be found in S1 Data. BFP, blue fluorescent protein; Cas9, CRISPR-associated 9; CIP, calf intestinal alkaline phosphatase; dCas9, nuclease-dead CRISPR-associated 9; gRNA, guide RNA; HEK293, human embryonic kidney 293; HBB, hemoglobin subunit beta; IFNB1, interferon beta 1; indel, insertion and deletion; IVT, in vitro-transcribed; n.s., not significant; qRT-PCR, quantitative real-time PCR; SPRI, solid-phase reversible immobilization; RNP, ribonucleoprotein; T7E1, T7 endonuclease 1. (TIF) S1
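The hamming-distance comparison of predicted secondary structures described in the S2 Fig legend (constant region folded alone vs. in the context of a protospacer) reduces to counting positional differences between two equal-length dot-bracket strings. The structures in the example below are illustrative placeholders, not real Vienna RNA Fold output:

```python
def hamming_distance(db_a, db_b):
    """Number of positions at which two equal-length dot-bracket
    secondary-structure strings differ.

    Used here to quantify how much the predicted fold of the gRNA
    constant region changes when a protospacer is attached (compare
    the constant-region positions of both predictions).
    """
    if len(db_a) != len(db_b):
        raise ValueError("structures must be the same length")
    return sum(a != b for a, b in zip(db_a, db_b))

# Hypothetical structures for a short constant-region stretch, folded
# alone vs. in the context of a protospacer (two trailing positions
# become paired in the second prediction):
alone = "((....))...."
in_context = "((....))..()"
print(hamming_distance(alone, in_context))  # -> 2
```

A distance of 0 would mean the protospacer is predicted not to perturb the constant-region fold at all; larger values indicate more rearrangement.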
DACH1: Its Role as a Classifier of Long Term Good Prognosis in Luminal Breast Cancer

Background

Oestrogen receptor (ER) positive (luminal) tumours account for the largest proportion of females with breast cancer. It is a heterogeneous disease that presents clinical challenges in managing treatment. Three main biological luminal groups have been identified, but clinically these can be distilled into two prognostic groups in which Luminal A are accorded good prognosis and Luminal B correlate with poor prognosis. Further biomarkers are needed to attain classification consensus. Machine learning approaches like Artificial Neural Networks (ANNs) have been used for classification and identification of biomarkers in breast cancer using high-throughput data. In this study, we have used an ANN approach to identify DACH1 as a candidate luminal marker, and its role in predicting clinical outcome in breast cancer is assessed.

Materials and methods

A reiterative ANN approach incorporating a network inferencing algorithm was used to identify ER-associated biomarkers in a publicly available cDNA microarray dataset. DACH1 was identified as having a strong influence on ER-associated markers and a positive association with ER. Its clinical relevance in predicting breast cancer specific survival was investigated by statistically assessing protein expression levels after immunohistochemistry in a series of unselected breast cancers, formatted as a tissue microarray.

Results

Strong nuclear DACH1 staining is more prevalent in tubular and lobular breast cancer. Its expression correlated with ER-alpha positive tumours expressing PgR, epithelial cytokeratins (CK)18/19 and 'luminal-like' markers of good prognosis including FOXA1 and RERG (p<0.05). DACH1 is increased in patients showing longer cancer specific survival and disease free interval and reduced metastasis formation (p<0.001).
Nuclear DACH1 showed a negative association with markers of aggressive growth and poor prognosis.

Conclusion

Nuclear DACH1 expression appears to be a Luminal A biomarker predictive of good prognosis, but it is not independent of clinical stage, tumour size, NPI status or systemic therapy.

Introduction

Breast cancer is the most common cancer in females and the third most common cause of cancer death in the UK after lung and large bowel cancer [1]. Recent studies have confirmed the heterogeneity of breast cancer arising from inherited and acquired genetic variation. It has recently been proposed that 10 molecular breast cancer groups exist [2], building on the overarching and simpler four group molecular stratification established more than a decade ago [3][4][5][6]. The largest of these groups comprise oestrogen receptor (ER) positive (luminal) tumours, with the latest evidence suggesting complex clinical diversity and mortality risk [2]. It has long been appreciated that the oestrogen receptor has a compelling role in breast cancer biology because its expression is both a predictive and independent prognostic factor for disease outcome, treatment response and recurrence in breast cancer [7]. This is because when activated it induces pro-cancerous cell signalling pathways, influencing cell growth, survival and differentiation. Gene expression array data has shown the luminal family of breast cancer includes at least one high risk subgroup, several intermediate risk subgroups (including a luminal B subgroup), and two good prognosis subgroups comprising a 'pure' ER luminal A subgroup and a mixed ER positive/negative subgroup [2]. Improved classification delivering clinical utility is required to achieve more effective therapeutic treatment and to identify patients that will be refractory to anti-hormonal therapy.
Luminal A tumours tend to be low grade tumours characterised by over-expression of ER-activating genes including LIV1, CCND1, FOXA1, XBP1, GATA3 and Bcl-2 [8]. In contrast, luminal B cancers are high grade, show increased proliferation (Ki67 positive) and growth factor receptors such as EGFR, and have variable HER2 expression [9]. A number of studies have attempted to phenotype luminal subgroups using protein biomarkers with immunohistochemistry, and to relate these to increased risk of adverse events. For example, the transferrin receptor, CD71, is involved in the uptake of iron and is expressed on cells showing high proliferation, and previously we reported it to be an independent prognosticator of an ER+ subgroup characterised by poor prognosis and resistance to endocrine therapy [10]. Another example is the proliferation-related marker TK1, an enzyme involved in the synthesis of the thymidine triphosphate needed by proliferating cells to enter S phase [11]. In addition, CARM1 [12] and PELP1 are transcriptional co-regulators and indicators of reduced disease free survival in luminal cancers [13]. PELP1 is a coactivator that binds the AF-2 domain (oestrogen responsive element) of ERα, facilitating downstream estradiol-induced DNA synthesis and cell proliferation [14]. In recent times, various computational approaches have been developed for cancer classification and diagnosis prediction [15]. In breast cancer, hierarchical clustering analysis of gene expression array data has proven useful in providing broad molecular classification [3], but other techniques are required to identify biomarkers defining membership of the various subgroups. Subsequently, computer algorithms incorporating a multilayer perceptron based Artificial Neural Network (ANN) method [16] have been adopted to identify cancer-relevant biomarkers to assist in clinical decision-making [17,18]. 
Previously, ANNs have been used to identify a panel of protein biomarkers [19] capable of classifying breast cancer patients in parallel to that achieved using gene expression profiling [3]. ANNs have proved capable of modelling biological systems more precisely than conventional statistical techniques [20], and have been successful in avoiding overfitting and producing generalised models using validation subsets in breast cancer datasets [21]. In this study we used an ANN based network inference approach [22] to identify ER-associated biomarkers, with the aim of improving classification of the luminal breast cancer group based on cancer specific survival. Seventeen candidate genes were identified, including the human homologue of the Drosophila dachshund (dac) gene, DACH1. DACH1 belongs to a nuclear protein family that plays a vital role in promoting differentiation of the Drosophila eye and limb and in the retinal determination signalling pathway [23,24]. In humans, DACH1 is known to repress tumorigenesis in breast and prostate cancers [25] and down-regulates EGFR and cyclin D1 in tumour cells [26]. Furthermore, DACH1 may control stem cell gene expression [27], preventing the cancer cell migration needed for metastasis development [28]. DACH1 was selected for further study because it is hypothesised that high levels of DACH1 will competitively inhibit the growth promoting activity of PELP1 and consequently will be associated with improved prognosis. The current study aims to characterise the association of DACH1 with other cancer-relevant biomarkers in the luminal subtype of breast cancer, with the emphasis on determining its possible role as a clinical classifier of disease outcome and as a prognostic biomarker. Materials and Methods This study was approved by the Nottingham Research Ethics Committee 2 under the title 'Development of a molecular genetics classification of breast cancer'. 
Breast cancer microarray dataset To identify genes associated with ER status in breast cancer, a cDNA microarray dataset, E-GEOD-20194 [29], was selected from the public repository ArrayExpress [30], submitted by the MicroArray Quality Control consortium. The dataset comprises expression values for 22,283 probe sets targeting gene transcripts across 278 samples (ER positive = 164 and ER negative = 114) with tumour stage ranging from I-III. ANN architecture and model development The ANN architecture encompasses supervised learning with a multilayer perceptron model employing two hidden nodes with a sigmoidal transfer function. The samples were subjected to a Monte Carlo Cross Validation strategy by randomly segregating them into three subsets, namely: train (to perform learning), test (for early stopping, with a threshold of 3000 epochs in total or 1000 epochs without improvement in mean square error (MSE)) and validation (to authenticate the model performance on previously unseen data), in a proportion of 60%, 20% and 20% respectively. Each of the 22,283 probe sets was used as an individual input variable in the model. The algorithm used a momentum of 0.5 and a learning rate of 0.1. The error differences between actual and predicted values were used to update the weights with a back propagation algorithm. The complete ANN model was reiterated 50 times with random sampling. Across the 50 ANN model predictions, the average test-subset MSE for each input variable was used to determine its predictive capability for ER class. Interaction network development To evaluate the interactions between the probe sets most predictive of ER class, we employed the interaction algorithm based on an ANN model described earlier [31]. Briefly, from a set of 100 probes, 99 probe expression values (inputs) were used to predict a single one (output). 
An ANN model was trained until an optimal solution was found, minimising the difference between the expected output and the predicted output. The weights for the optimised model were recorded. This process was iteratively repeated, selecting new inputs from the set of 100, until all probe expressions had been predicted from the remaining probes. The weights quantify the intensity of the relation between source and target, which could be positive (stimulating) or negative (inhibiting). The analysis generated a matrix of 9,900 bidirectional interactions for all 100 probes. These were subsequently filtered to select the top 100 interactions for further visualisation. The interaction network was visualised using Cytoscape Ver 2.7.0 [32], which symbolised each probe set as a node and each interaction as an edge. To give directionality to the interactions, each input was considered the source, the output the target, and the weights recorded for the prediction as interaction values. The directionality of an edge is given according to source and target, and the weight of the interaction is represented by the thickness of the edge. Patient selection and immunohistochemistry Tissue microarray (TMA) sections comprised 993 patients from the Nottingham Tenovus study (1986-1998), with two tissue cores represented from each patient's tumour. TMA sections were immunostained to assess the protein expression levels of DACH1. This TMA is well characterised, with data for clinical information, tissue protein expression of tumour-relevant pathological biomarkers and long term clinical follow-up, including information on local, regional and distant tumour recurrence, and cancer specific survival outcome [10]. Patient management was based on the Nottingham Prognostic Index (NPI) score and ER status as previously described [33]. 
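As a rough illustration, the leave-one-out inference scheme described above (a multilayer perceptron with two sigmoidal hidden nodes, learning rate 0.1 and momentum 0.5, where the remaining probes predict a held-out probe and the trained input weights are read off as interaction strengths) can be sketched in Python. The toy data, epoch count and five-probe panel here are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=2, lr=0.1, momentum=0.5, epochs=500, seed=0):
    """Minimal one-hidden-layer sigmoid MLP trained by backpropagation with
    momentum, mirroring the stated settings (2 hidden nodes, lr 0.1, momentum 0.5).
    Returns the input->hidden weights (interaction strengths) and the MSE history."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, hidden); b2 = 0.0
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = 0.0
    mse_hist = []
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)          # hidden activations
        yhat = H @ W2 + b2                # linear output node
        err = yhat - y
        mse_hist.append(float(np.mean(err ** 2)))
        # backpropagated gradients (constant factors absorbed into lr)
        gW2 = H.T @ err / n; gb2 = err.mean()
        dH = np.outer(err, W2) * H * (1 - H)
        gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
        # momentum updates
        vW2 = momentum * vW2 - lr * gW2; W2 += vW2
        vb2 = momentum * vb2 - lr * gb2; b2 += vb2
        vW1 = momentum * vW1 - lr * gW1; W1 += vW1
        vb1 = momentum * vb1 - lr * gb1; b1 += vb1
    return W1, mse_hist

# Leave-one-out inference over a toy 5-probe panel: probe 0 is predicted
# from the remaining four (synthetic data, probe 0 driven by probes 1 and 2).
rng = np.random.default_rng(1)
E = rng.normal(size=(60, 5))              # 60 samples x 5 probes
E[:, 0] = 0.8 * E[:, 1] - 0.5 * E[:, 2]
X, y = E[:, 1:], E[:, 0]
W1, hist = train_mlp(X, y)
```

In the paper's full procedure this training is repeated with each of the 100 probes in turn as the output, and the recorded weights populate the bidirectional interaction matrix.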
Breast cancer specific survival (BCSS) was defined as the time (in months) from the date of the primary surgical treatment to the time of death from breast cancer. Distant metastasis free interval (DMFI) was defined as the interval (in months) from the date of the primary surgical treatment to the date of development of the first distant metastasis. Four-micron-thick formalin-fixed, paraffin-processed TMA and full-face sections were subjected to microwave antigen retrieval in citrate buffer (pH 6.0), and then immunohistochemically stained with a rabbit polyclonal antibody against DACH1 (Sigma HPA012672, St Louis, USA) using a streptavidin-biotin technique (Dako, Cambridge, UK). The DACH1 antibody was optimised for heterogeneity and specificity at a working dilution of 1:200. Sections were counterstained with haematoxylin and mounted using DPX mounting medium. Negative controls, comprising omission of the primary antibody or substitution with an inappropriate primary antibody of similar immunoglobulin class, were used. The immunohistochemically stained TMA sections were scored with observers blinded to the clinicopathological features of tumours and patients' outcomes. Nuclear staining intensity and the percentage of cells stained were assessed in unequivocal malignant epithelium using the H-score (histochemical score) [34]. Staining intensity was scored 0, 1, 2 or 3, and the percentage of positive cells at each intensity was subjectively estimated to produce a final score in the range 0-300. Damaged tissue cores and those that did not contain invasive carcinoma were censored. Statistical Analysis Statistical analysis was performed using SPSS 15.0 (SPSS Inc., Chicago, IL, USA) software. Three patient subgroups were identified representing negative, low and high tumour nuclear H-scores. The Kaplan-Meier method with a log rank test was used to model the association of DACH1 group membership with cancer specific survival. 
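The H-score described above combines each staining intensity (0-3) with the percentage of cells at that intensity; a minimal sketch of the arithmetic (the example percentages are hypothetical):

```python
def h_score(pct_at_intensity):
    """Histochemical (H-) score: sum over intensities 0-3 of
    intensity x percentage of cells at that intensity; range 0-300."""
    assert abs(sum(pct_at_intensity.values()) - 100) < 1e-9, "percentages must total 100"
    return sum(intensity * pct for intensity, pct in pct_at_intensity.items())

# e.g. 10% unstained, 20% weak, 30% moderate, 40% strong staining
score = h_score({0: 10, 1: 20, 2: 30, 3: 40})  # -> 200
```

With every cell staining at maximum intensity, `h_score({3: 100})` gives the ceiling of 300.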
Patients were categorised using an H-score ≥200 to define strong DACH1 positivity obtained in the majority of cells in a patient's tumour. Association between DACH1 expression and different clinicopathological factors and breast cancer markers was evaluated using the non-parametric Chi-square test. Patients that died from causes other than breast cancer were censored during survival analyses. Multivariate Cox proportional hazard regression models were used to evaluate any independent prognostic effect of the variables, with 95% confidence intervals. A p-value of <0.05 was considered to indicate statistical significance. Identification of the ER interactome Details of the gene signature associated with ER status were recently published [22]. The probe sets most predictive of ER status were selected based on the lowest average test error encountered across 10 independent predictive models. The best predictive probe was found to be 205225_at, belonging to the ESR1 gene which codes for oestrogen receptor alpha (ERα). Other highly predictive probe sets included GATA3, CA12, NAT1 and DACH1 (205471_s_at). Interaction network inference The 100 best ER-predictive probes selected from ER-positive samples were further submitted to a network inference algorithm to determine the strength and nature of the interactions between the selected probes. The algorithm yielded 9,702 interactions across 10 independent models. To reduce dimensionality and remove insignificant interactions, a filtering strategy was applied to select only the top 200 interactions based on interaction weight. Bidirectional interactions were computed for any given pair of genes to yield a bidirectional interaction matrix between each source and target. A network model of the top 200 (100 positive and 100 negative) interactions forming positive and negative hubs is shown in Figure S1. 
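The marker-association testing described above uses the non-parametric Chi-square test; a minimal sketch of the Pearson chi-square statistic for a contingency table (the counts shown are illustrative, not the study's data):

```python
def chi_square(table):
    """Pearson chi-square statistic for a 2D contingency table,
    given as a list of rows of observed counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2

# hypothetical 2x2 table, e.g. DACH1 high/low vs another marker +/-
stat = chi_square([[30, 10], [20, 40]])
```

The statistic is compared against the chi-square distribution with (rows-1)×(cols-1) degrees of freedom to obtain the p-value reported in the tables.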
For example, DACH1 (Dachshund homolog 1), SERPINA5 (Serpin peptidase inhibitor member 5), TFF3 (Trefoil factor 3), and RARA (Retinoic acid receptor alpha) were connected with the majority of positive interactions, forming positive hubs. In contrast, SOX11 (SRY (sex determining region Y)-box 11), EGFR (Epidermal growth factor receptor) and CDH3 (cadherin 3, type 1, P-cadherin) were connected with the majority of negative interactions, forming negative hubs. The strongest positive influence was found between TFF1 (Trefoil factor 1) and TFF3, and the strongest negative influence was found between MAPT (Microtubule-associated protein tau) and EGFR. To establish an interaction map for DACH1 alone in luminal (ER-positive) breast cancer samples, we created a DACH1 interactome (Figure 1) using the 100 best predictive genes. Computationally, DACH1 was found to be highly positively influenced by KIAA0882, a variant of TBC1 (tre-2/USP6, BUB2, cdc16 domain 1) family member 9A, and highly negatively influenced by IL6ST (Interleukin 6 signal transducer). DACH1 was also found to highly positively and negatively influence CDH3 and SOX11 respectively. An interaction map (Figure S2) of important genes overlapping with the oestrogen receptor and DACH1 respectively shows similarity. DACH1 protein expression in breast cancer To test its clinical relevance in breast cancer, the association of DACH1 protein with clinicopathological features was investigated in a well characterised patient cohort. The median age of the patients was 55 years (range 27-70). DACH1 immunostaining was localised to the nuclei of malignant cells (Figure 2). DACH1 was significantly increased in post-menopausal patients with lobular and tubular cancer types but, in contrast, was rarely seen in patients with medullary cancer. DACH1 expression showed no significant association with tumour size, tumour stage, metastasis development, tumour recurrence, or vascular invasion. 
DACH1 expression was significantly increased in tumours of low grade, good Nottingham Prognostic Index and candidacy for hormonal therapy (Table 1). Association of DACH1 with disease biomarkers Nuclear DACH1 expression was strongly increased in patients with ER-alpha positive tumours co-expressing PgR and epithelial CK18/19 cytokeratins. Nuclear staining was significantly associated with 'luminal-like' markers of good prognosis including FOXA1 and RERG. In contrast, strong inverse associations were found with candidate luminal markers of poor prognosis including CD71 (Table 2). Supporting its association with good prognosis, tumour DACH1 expression correlated with low cell proliferation (MIB1). Low DACH1 frequency and expression was seen in tumours bearing markers of poor prognosis including the basal-like markers CK14/5/6 and EGFR, as well as HER2 and p53 positivity. The effect of endocrine therapy on the ability of DACH1 to predict breast cancer specific survival was considered using Kaplan-Meier modelling. DACH1 positivity was associated with good survival in patients treated with tamoxifen (χ2 = 8.30, p = 0.004) and, in addition, also showed a trend in patients not receiving tamoxifen (χ2 = 3.7, p = 0.055). The predictive independence of DACH1 was tested using multivariate models (Cox regression) incorporating endocrine therapy, clinical stage, tumour size and NPI status. DACH1 was not found to be independent of these variables for predicting cancer specific survival. Discussion In our study, we used an artificial neural network (ANN) based inference technique to identify ER associated biomarkers capable of separating good and poor prognosis patients with luminal type breast cancer. Consistent with expectations, the best predictive probe for identifying ER status across multiple independent runs was 205225_at, representing the ESR1 gene coding for oestrogen receptor alpha. 
Moreover, the regulatory gene DACH1, associated with TGFβ signalling, was identified among the probe sets showing a strongly positive interaction with ER status, and so we tested its relevance as a luminal marker of disease progression by investigating its association with clinicopathological variables. The objective is to compile cumulative evidence to produce a panel of markers capable of clinically guiding the selection and management of breast cancer patients within the heterogeneous luminal class. We observed three predominant patterns of nuclear DACH1 expression compatible with TSG (tumour suppressor gene) functionality. Nuclear DACH1 protein expression was significantly associated with markers of good prognosis including low cellular proliferation (MIB1 expression) and functional apoptosis (Bcl2 expression). It has previously been observed that reduced DACH1 expression occurs in invasive cancer compared to normal breast epithelium [26], consistent with our finding that DACH1 expression showed an inverse association with mitosis and cyclin D1 expression in breast cancer patient samples. More recently, increased DACH1 expression was reported to correlate with reduced expression of IL-8 and other related chemokines, thus inhibiting cellular migration and invasion in MCF10A breast cancer cells [28]. Further evidence of its TSG function is provided by the observation that DACH1 homozygous deletion stimulates tumorigenesis in glioma cells [35], and loss of DACH1 occurs in high FIGO surgical stage endometrial cancers [36]. Furthermore, it has also recently been reported that over-expression of DACH1 protein is associated with poor prognosis when expressed in the cytoplasm rather than the nuclei of ovarian cancer cells, indicating disease progression [37], compatible with loss of TSG function. 
In vitro cell signalling studies have shown that DACH1 exerts its regulatory control on TGFβ signalling by nuclear binding via SMAD4 [26,38], competing with pro-cancerous transcription factors. Recent breast cancer studies have shown that DACH1 can directly influence the gene expression of stem cells, causing them to under-express CD24 [27]. In addition, it appears that the tumour suppressor function of DACH1 can be moderated by the tissue microenvironment, including the presence of growth factors, evidenced by the tumorigenesis seen in cell lines grown in vitro in the presence of IGF-1 [39]. Steroid receptors, coactivators and co-repressors regulate the activity of ERα. PELP1 (proline, glutamic acid and leucine rich protein 1) is a coactivator that binds the AF-2 domain (oestrogen responsive element) of ERα, facilitating downstream estradiol-induced DNA synthesis and cell proliferation [14]. Previously, we reported that PELP1 expression is associated with larger tumours and clinicopathological features indicative of poor prognosis, including high grade and basal cytokeratin expression [13]. DACH1 competitively binds ERα, preventing PELP1 binding [14]. In the current study we found that moderate to high tumour nuclear DACH1 expression in the majority of cancer cells is compatible with functional blocking of PELP1 activity, reflected by its association with good prognosis. Conversely, absent or weak DACH1 nuclear staining represents unopposed PELP1-mediated tumour cell growth. An inverse relationship was seen between DACH1 and basal-type markers including CK14, CK5/6 and EGFR. EGFR is a member of the HER family associated with multiple downstream cell signalling pathways leading to adverse clinical outcomes including tumour growth and metastasis. In accord, we found an inverse association between DACH1 and HER2. 
In this respect, and similar to our previous report, we propose that DACH1 and FOXA1 [33] share membership of the Luminal A biomarker group in being associated with variables of good prognosis. DACH1 was found to be a predictor of cancer specific survival but was not independent of hormonal therapy, clinical stage, tumour size or NPI status. Clinical tests that identify high risk (increased metastatic potential) patients with breast cancer, to select candidates for chemotherapy treatment, are currently under review [40]. Applying rationalised targeted treatment is necessary because chemotherapy can result in medical complications, reduced quality of life and economic burden. Crucially, some cancers present with no greater mortality risk if untreated with chemotherapy and among these, patients with Luminal A cancers appear to have good survival prospects (in press). Further investigation is required to determine whether DACH1 and other Luminal A biomarkers can be used to select patients not requiring chemotherapy. As ANNs have a proven application in breast cancer patient classification [22] and in biomarker identification associated with disease progression [41], the current study exploited this approach with a focus on relevance to clinical outcome. Among the top ten ranked genes with a positive association to ER was the transcription factor GATA3, known to be associated with ER [42], ER status [21] and hormonal responsiveness in breast cancer [43]. Genes showing a negative association with ER included CA12, which is associated with hypoxia and poor prognosis in breast cancer [44]. These findings, and others in previous studies, support the validity and robustness of the ANN technique and its application in identifying breast cancer biomarkers. In summary, we have shown that DACH1 occurs in patients with ER+ breast cancers and predicts good prognosis. In this respect DACH1 can be regarded as a Luminal A biomarker. 
Figure S1 Interaction map of the top 200 (100 positive and 100 negative) interactions from highly predictive probe sets in ER positive samples. The genes are represented as nodes and interactions as edges. A green edge is a positive interaction and a red edge is a negative interaction. The intensity of the interaction is represented by the thickness of the edge and the directionality by an arrow from source to target. Nodes with multiple interactions (>5) are considered hubs. (TIF) Figure S2 Association of luminal markers with (a) ESR1 and (b) DACH1 in luminal samples. The genes are represented as nodes and interactions as edges. A green edge is a positive interaction and a red edge is a negative interaction. The intensity of the interaction is represented by the thickness of the edge and the directionality by an arrow. (TIF)
Cavitation Erosion and Jet Impingement Erosion Behavior of the NiTi Coating Produced by Air Plasma Spraying Cavitation erosion and jet impingement erosion can result in a great loss of materials. NiTi alloy is a very promising candidate for acquiring cavitation erosion resistance and jet impingement erosion resistance because of its superelasticity. Due to the high cost and poor workability of NiTi alloy, many researchers have tried to overcome these drawbacks by preparing NiTi coatings while degrading the alloy's good properties as little as possible. From the aspect of the application of NiTi coatings, erosion resistance should be evaluated comprehensively. One such evaluation involves the comparison of the cavitation erosion resistance and jet impingement erosion resistance of NiTi, which has not been made thus far. Thus, in this study, a NiTi coating was prepared by air plasma spraying (APS) using pre-alloyed NiTi powder. Its microstructure, chemical composition and phase transformation were identified. The cavitation erosion behavior and jet impingement erosion behavior of the as-sprayed NiTi coating were compared. The results showed that the coating exhibited better jet impingement erosion resistance than cavitation erosion resistance. This was attributed to the oxides, impurities, cracks and pores in the coating, whose deteriorating effect on cavitation erosion resistance was far greater than on jet impingement erosion resistance. Introduction Cavitation erosion and jet impingement erosion often occur in high speed and ultrasonic mixing systems. To repair the damage caused by cavitation erosion and jet impingement erosion, millions of pounds have to be expended every year [1][2][3]. 
In general, when the hydrodynamic pressure changes, vapor cavities are formed. The subsequent collapse of these unstable bubbles creates a shock wave and/or micro-jet in turbulent flows, which can impact surfaces and initiate cavitation erosion damage [4][5][6][7]. Typically, jet impingement erosion is defined as the acceleration or increase in the rate of metal/alloy deterioration caused by mechanical damage due to the impact of solid particles. Such erosion involves surface damage and severe material loss caused by the repetitive impact of hard erodent particles [8][9][10][11]. NiTi alloy is a very promising candidate for acquiring both cavitation erosion resistance and jet impingement erosion resistance [12]. The nearly equiatomic NiTi intermetallic compounds, called shape memory alloys, have unique properties such as the shape memory effect, superelasticity, high corrosion resistance and fatigue strength [13][14][15]. For wear applications of shape memory alloys, superelasticity has been considered the major property. The martensitic transformation may also be stress-induced, giving rise to the superelastic effect. The superelastic effect occurs under applied stress when the material is in the temperature range of thermally stable austenite. This property allows the material to undergo a significant deformation during loading, with full recovery of its shape upon unloading. For the particular case of NiTi shape memory alloys, the stress-induced martensite is highly deformable by twin boundary migration when external stresses are applied, leading to the reorientation of the martensitic plates [16]. In principle, the mass loss in the processes of both cavitation erosion and jet impingement erosion originates from either the impact stress (from the shock wave, micro-jet, or hard erodent particles) or crack propagation. The superelasticity of the NiTi alloy can mitigate the effect of the impact stress and retard crack propagation, which 
can result in high cavitation erosion and jet impingement erosion resistance. In recent years, the high cavitation erosion resistance of the NiTi alloy has already been demonstrated by several authors. Many authors reported that both austenite and martensite contributed to the high cavitation erosion resistance of NiTi alloy [13,17,18]. Wu et al. [14] reported that variant accommodation, the pseudoelasticity of stress-induced martensite and a high work-hardening rate could improve the erosion resistance of NiTi alloy. Shida et al. [19] showed that the erosion resistance of the NiTi alloy was strongly dependent on chemical composition and microstructure, but not on hardness. However, the high cost and poor workability restrict the wide application of NiTi alloy. To overcome these shortcomings, researchers have tried to prepare NiTi coatings while sacrificing the superior properties of the NiTi alloy as little as possible. Many investigations have been carried out to prepare NiTi coatings or films with cavitation and erosion resistance. These NiTi coatings are generally prepared by explosive welding [20,21], the low pressure plasma spray process (LPPS, formerly called VPS) [4,15,22,23], air plasma spraying (APS) [22][23][24], high velocity oxy-fuel spray (HVOF) [22][23][24][25], tungsten inert gas (TIG) welding [26], laser processing [27,28], laser plasma hybrid spraying [29,30], sputter deposition [31], cold spraying [32,33] and a modified high-velocity oxygen fuel spraying process (so-called low temperature HVOF) [34][35][36][37]. Many researchers have studied the cavitation erosion resistance of NiTi coatings. For example, Cheng et al. 
[26][27][28] reported that cavitation erosion resistance was ranked in descending order as: NiTi plate > 316-NiTi-Laser > 316-NiTi-TIG > AISI 316L. However, almost no investigations have focused on jet impingement erosion resistance. Furthermore, no one has compared the cavitation erosion resistance and the jet impingement erosion resistance of NiTi coatings, which would be useful for their application. Therefore, this comparison is worthwhile. In the present work, our purpose is to compare the cavitation erosion resistance and the jet impingement erosion resistance of NiTi coatings. We are interested in whether the NiTi coating can retain the excellent characteristics of the NiTi alloy. APS is used to prepare the NiTi coating because it does not need a vacuum chamber and the shape of the workpiece is not limited, although a coating made by this method has more oxides, pores and impurities than one made by LPPS. The microstructure, chemical composition and defects of the as-prepared NiTi coating are evaluated, and the cavitation erosion resistance and jet impingement erosion resistance are compared. Materials In the present work, NiTi powder obtained from Beijing AMC Powder Metallurgy Technology Co., Ltd. (Beijing, China) was used. The powder was prepared from commercial NiTi rods by electrode induction inert gas atomization. The chemical compositions of the NiTi powder are listed in Table 1. 
Figure 1b shows the XRD patterns of the NiTi powder. The main phase is austenite B2 (NiTi). No other intermetallic compounds, carbides or oxides are found, which may be due to the low content of these impurities. Figure 1a shows the morphology of the NiTi powder used as a starting material for APS processing. The powder has a spherical shape, and its degree of sphericity is 0.89, which means excellent flowability for the powder feeding. The particle size of the powders was measured by a laser particle analyzer. The d10, d50 and d90 values are 53.20 µm, 78.13 µm and 124.70 µm, respectively. These values are suitable for APS. The relatively large diameter of the powders can ensure the flowability of the powders and reduce the phenomenon of overburning. In addition, a NiTi alloy plate was used as the comparison material, whose phase and chemical composition are austenite B2 and Ni50.5Ti49.5 at %, respectively. 
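The d10, d50 and d90 values quoted above are the diameters below which 10%, 50% and 90% of the particle population lies. A minimal number-weighted sketch via linear interpolation over a sorted sample (note that instrument d-values from laser diffraction are typically volume-weighted, and the sizes below are a toy sample, not the measured distribution):

```python
def d_percentile(sizes_um, q):
    """d_q of a particle-size sample: the diameter below which q% of the
    (number-weighted) particles lie, via linear interpolation."""
    s = sorted(sizes_um)
    pos = (len(s) - 1) * q / 100.0   # fractional index into the sorted sample
    lo = int(pos)
    frac = pos - lo
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + frac * (s[hi] - s[lo])

sizes = [40, 55, 60, 70, 78, 85, 95, 110, 120, 130]  # toy sample, um
d50 = d_percentile(sizes, 50)
```

The same function gives d10 and d90 with `q=10` and `q=90`, characterising the spread of the feedstock.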
Preparation of Coating The 304 stainless steel was selected as the substrate, 140 mm in length, 50 mm in width and 5 mm in thickness, which was prepared by descaling and blast cleaning before APS. Descaling cleans the surface of the substrate, and blasting with corundum improves the roughness of the substrate. The chemical composition of the 304 stainless steel is listed in Table 2. The APS equipment is homemade, mainly comprising the power supply, control cabinet, spray gun, powder feeder, circulating water cooling and gas supply systems. The schematic diagram of the APS system is shown in Figure 2. The inert gases are fed into the plasma generator (plasma gun), where they are heated and activated as they pass through the direct-current arc between the positive and negative poles, thus ionising them. This process produces a high-temperature, high-speed plasma arc flow. The temperature of the plasma arc is so high that the powders are heated to a molten or semi-molten state. These powders impact the substrate in the high-speed plasma arc flow, resulting in the formation of the coating. 
Before the spraying process, the substrate is heated by the plasma arc flow without powder feeding. During spraying, the plasma gun moves along a fixed route; when the route is complete and the spray fully covers the surface of the substrate, this constitutes one spraying cycle. The spraying parameters are shown in Table 3.

Microstructure, Chemical Analysis and Phase Transformation
The microstructure of the powder was investigated by optical microscope (OM) (Carl Zeiss Axio Observer Z1 m, ZEISS, Oberkochen, Germany). The samples and powders were etched with a solution of 100 mL distilled water, 20 mL hydrofluoric acid and 50 mL nitric acid. The micro-hardness of the NiTi coating and the NiTi plate was measured using a HV-1000 micro-hardness tester (Laizhou Hengyi Tester Equipment Co., Ltd., Laizhou, China). The hardness profile was acquired by means of a micro-hardness test at a load of 500 g and a loading time of 10 s. The morphology and chemical composition of the specimens were investigated using a FEI XL30 field emission gun scanning electron microscope (FEG-SEM, FEI, Hillsboro, OR, USA). In addition, X-ray diffraction (XRD) using a Rigaku D/max 2400 diffractometer (Rigaku Corporation, Tokyo, Japan) with monochromated Cu Kα radiation (λ = 0.1542 nm) and differential scanning calorimetry (DSC, STA 449 C, NETZSCH, Bavaria, Germany) were used to characterize microstructures, analyze chemical compositions and evaluate phase transformation.
Cavitation Erosion Test
An ultrasonic vibratory apparatus was used for the cavitation erosion test, which was carried out according to the ASTM G32-10 (2010) standard [37]. The equipment consists of an ultrasonic liquid processor (Q700) from Qsonica, LLC (53 Church Hill Rd., Newtown, CT, USA). The sonicator system comprises three major components: the generator, the converter and the horn (also known as a probe). The standard probe (probe 4420) was selected; it is made from titanium and its tip diameter is 13 mm. The area of the bottom surface of the probe was larger than the top surface of the sample to ensure that the whole top surface of the sample was exposed to cavitation erosion. The vibration frequency of the probe was 20 kHz and the amplitude was 30 µm. Before the cavitation erosion test, the sample was ground with SiC emery paper up to grade 2000, washed in alcohol, dried in hot air, and weighed using an analytical balance with an accuracy of 0.1 mg. The sample was submerged in distilled water and placed at a distance of 0.5 mm from the lower surface of the probe, which was immersed in the test medium to a depth of 15 mm. The temperature of the distilled water was maintained at about 20 °C using cooling water. The erosion damage of the samples was evaluated in terms of mass loss. Each test was repeated at least three times to ensure reproducibility.
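For context on the severity of a 20 kHz, 30 µm vibration, the peak tip velocity and acceleration of a sinusoidally vibrating horn follow from v_peak = 2πfA and a_peak = (2πf)²A. A small sketch (here A is taken as the zero-to-peak displacement; if the stated 30 µm is peak-to-peak, halve A):

```python
import math

f = 20e3   # vibration frequency, Hz
A = 30e-6  # amplitude, m (assumed zero-to-peak)

omega = 2 * math.pi * f
v_peak = omega * A       # peak tip velocity, m/s
a_peak = omega**2 * A    # peak tip acceleration, m/s^2

print(f"v_peak = {v_peak:.2f} m/s")    # ~3.77 m/s
print(f"a_peak = {a_peak:.3g} m/s^2")  # ~4.74e5 m/s^2
```

The large acceleration is what drives the rapid pressure cycling in the 0.5 mm water gap and hence bubble nucleation and collapse on the sample surface.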
Jet Impingement Erosion Test
The jet impingement erosion equipment was homemade; the schematic diagram of the jet apparatus for erosion-corrosion is shown in Figure 3a [38]. The diameter of the nozzle was 3 mm, the sample was placed 5 mm from the nozzle, and the impact angle was set at 90°. Tap water containing 2 wt % silica sand (70–150 mesh) was used as the test medium. The shape of the silica sand was irregular and angular, as shown in Figure 3b. The abrasive was recirculated during each run, and fresh silica sand was used for each new run. The flow velocity was 15 m/s throughout the jet impingement erosion process. Before the test, the samples were ground with SiC emery paper up to grade 2000, washed in alcohol, dried, and weighed using an analytical balance with an accuracy of 0.1 mg. The erosion damage to the samples was evaluated in terms of mass loss. Each test was repeated at least three times to determine reproducibility.
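Both erosion tests are evaluated by mass loss from repeated weighings on a 0.1 mg balance. A sketch of the bookkeeping (the weighings and intervals below are invented for illustration, not measured data): cumulative mass loss at each interval is the initial mass minus the current mass, and a mean erosion rate follows from the final interval.

```python
# Hypothetical weighings (g) of one sample at 0, 4, 8 and 12 h; 0.1 mg resolution.
times_h = [0, 4, 8, 12]
masses_g = [12.3456, 12.3441, 12.3420, 12.3392]

initial = masses_g[0]
cum_loss_mg = [(initial - m) * 1000 for m in masses_g]  # cumulative mass loss, mg
rate_mg_per_h = cum_loss_mg[-1] / times_h[-1]           # mean erosion rate, mg/h

print(cum_loss_mg)      # approximately 0.0, 1.5, 3.6, 6.4 mg
print(rate_mg_per_h)    # approximately 0.53 mg/h
```

Averaging the rate over at least three repeated runs, as described above, gives the reproducibility check the authors report.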
Microstructure, Chemical Analysis and Phase Transformation
The hardness profile was acquired by means of a micro-hardness test. The hardness of the NiTi coating and the NiTi plate is 549.1 HV and 307.7 HV, respectively. Figure 4a shows the cross-sectional microstructure of the NiTi powder without etching under an optical microscope. Some pores exist in the interior of the powder. These pores form during the electrode-induction inert gas atomization of commercial NiTi rods: the molten powder has an affinity for the gas, and the gas dissolves into the melt. When the powders cool rapidly, the dissolved gas has no time to escape completely, and the residual gas forms these pores. The existence of these pores in the powders is taken as one of the reasons for the pores in the NiTi coating, as will be shown later. Figure 4b shows the cross-sectional microstructure of the NiTi powder after etching. The powders consist of equiaxed crystals; Figure 1b shows that the equiaxed crystals are austenite B2. Figure 4c shows the surface microstructure of the NiTi plate after etching. Second phases, oxides and pores are found on the surface of the NiTi plate before the cavitation erosion test.
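The HV numbers above come from the standard Vickers relation HV = 1.8544·F/d², with F the load in kgf and d the mean indent diagonal in mm; the tester computes this internally. As a hedged cross-check (the diagonals below are back-calculated from the reported hardness, not measured values):

```python
import math

def vickers_hv(load_kgf, diag_mm):
    """Vickers hardness from load (kgf) and mean indent diagonal (mm)."""
    return 1.8544 * load_kgf / diag_mm**2

def diag_for_hv(hv, load_kgf):
    """Mean diagonal (mm) that a given HV implies at a given load."""
    return math.sqrt(1.8544 * load_kgf / hv)

load = 0.5  # the 500 g load from the text, in kgf
d_coating = diag_for_hv(549.1, load)  # ~0.0411 mm (~41 um) for the coating
d_plate = diag_for_hv(307.7, load)    # ~0.0549 mm (~55 um) for the plate
print(d_coating * 1000, d_plate * 1000)
```

The harder coating leaves a visibly smaller indent, which is why a light 500 g load with optical measurement of the diagonals resolves the difference between 549.1 HV and 307.7 HV.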
Figure 5 shows the back-scattered electron (BSE) microstructure of the NiTi coating, in which un-melted powders, parallel cracks, perpendicular cracks, pores, oxides and impurities can be found. The un-melted powders retain the state of the starting powders. The plasma arc flow is short, which makes air entrainment very easy and induces heating inhomogeneity of the particles during APS. As a result, some particles are fully melted while others remain semi-molten. Under the effect of the plasma arc flow, the melted particles form lamellae when they hit the substrate, whereas the un-melted particles are embedded in the coating and keep their initial shape. The existence of pores has a great influence on the mechanical and corrosion properties of the coating. There are two kinds of pores: macropores and micropores. The macropores exist between the lamellae; they form through incomplete filling when the droplets impact the substrate and through insufficient flattening of the un-melted particles. The micropores form for three reasons: insufficient wetting of the surface of the existing coating, pre-existing pores inside the starting powders, and dissolved gas that has no time to escape when the droplets cool at high velocity. The oxides and impurities result from the metallic droplets reacting with atmospheric oxygen during their flight to the substrate.
Coatings 2018, 8, x FOR PEER REVIEW 7 of 17

The cracks in the coating perpendicular to the substrate run through the individual lamellae and are perpendicular to them. These cracks form because the droplets cool at high velocity. In the APS process, the cooling and flattening of the droplets are two separate processes. When the high-temperature droplets impact the previously deposited lamellae, cooling and flattening occur; the cooled and flattened droplets shrink, but the previously deposited lamellae limit this shrinking. This induces straining inside the coating, and cracks form consequently so that the stress can be released. The cracks parallel to the substrate form because of the shear stress generated at the interface between the freshly impacted lamellae and the previously deposited surface.

Figure 6 shows the EDS analysis of the NiTi coating. The dark grey areas C and D are oxides. The oxide layer around each lamella can reduce the area fraction of intimate contact. The light grey area A is the austenite phase B2. The white area B is solid-solution nickel with a very small amount of titanium. This result is verified by the XRD patterns shown in Figure 7. The austenite phase B2 of area A occupies the main part of the coating.
Table 4 shows the chemical compositions of the selected areas of the NiTi coating. Combined with the XRD patterns shown in Figure 7, areas C and D may be TiO and NiTi2/Ni2Ti4O3. Line scanning of the selected area of the NiTi coating is shown in Figure 6, in which the titanium, nickel and oxygen elements are the foci. Because titanium has an affinity for oxygen, the oxygen content increases with the titanium content. In the dark grey areas, the titanium content is higher than the nickel content; in the white area, the nickel content is higher than the titanium content. The XRD pattern of the coating shown in Figure 7 also confirms that Ni4Ti3 exists in the coating. It is not visible in Figure 5 (the BSE microstructure of the NiTi coating at higher magnification); the Ni4Ti3 may be very small and distributed only in area A as shown in Figure 6. Compared with the diffraction peaks of the starting powder, the diffraction peaks of the as-sprayed coating are obviously broadened, as shown in Figure 7. The starting powder impacts the substrate at high speed during APS, and the high-temperature particles undergo severe plastic deformation. This can result in dynamic recrystallization and the emergence of highly deformed regions. The resulting grain refinement decreases the grain size and widens the diffraction peaks. In addition, the plastic deformation of the particles introduces microstrain between and within grains, changing the crystal plane spacing and the diffraction angles. The diffraction peaks of different diffraction angles are superimposed, so the measured diffraction peaks broaden.

Cavitation Erosion Test
Figure 9a shows the cumulative mass loss with time for cavitation erosion of the NiTi coating and the NiTi plate. The mass loss of the NiTi coating was far greater than that of the NiTi plate. After testing for 30 h, the mass loss of the NiTi coating was 5.2 times that of the NiTi plate. There was no incubation period for the cavitation erosion of the NiTi coating; in contrast, the mass loss of the NiTi plate was still in the incubation period.
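The XRD peak broadening attributed above to grain refinement can be related to crystallite size through the Scherrer equation, D = Kλ/(β cos θ), with β the peak FWHM in radians. This neglects the microstrain contribution the text also mentions; a Williamson–Hall analysis would separate the two. A sketch using the Cu Kα wavelength from the Experimental section and an illustrative, not measured, peak width:

```python
import math

wavelength_nm = 0.1542  # Cu K-alpha, from the XRD setup described above
K = 0.9                 # Scherrer shape factor (common assumption)

def scherrer_size_nm(fwhm_deg, two_theta_deg):
    """Crystallite size (nm) from FWHM (deg) at peak position 2-theta (deg)."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# Illustrative: a B2 peak near 42.4 deg 2-theta with a 0.5 deg FWHM
size_nm = scherrer_size_nm(0.5, 42.4)
print(size_nm)  # ~17 nm
```

Broader peaks (larger β) give smaller D, which is consistent with the refinement argument: the as-sprayed coating's broadened peaks imply finer crystallites than the starting powder, all else being equal.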
Figure 9b shows the XRD patterns of (A) the as-sprayed NiTi coating and (B) the NiTi coating after 30 h of cavitation erosion. Compared to the as-sprayed coating, the coating after 30 h of cavitation erosion showed no Ni or NiTi2/Ni2Ti4O3, indicating that the Ni and NiTi2/Ni2Ti4O3 on the surface of the coating were removed during the 30 h of cavitation erosion. Austenite B2 is still the main phase.

Figure 10 shows the surface morphologies of the NiTi plate and the NiTi coating after 30 h of cavitation erosion. Some eroded areas and small pits can be detected on the surface of the NiTi plate. Even so, there are still some intact areas on the surface. The eroded areas look like popcorn. Weak points such as second phases and oxides in the NiTi plate (Figure 4c) could be removed easily in the initial stage of cavitation erosion, and these small pores could act as cavitation sources even when their quantity is small. Thus, the cavitation damage is mainly located at these defects.
In Figure 10b, some large and deep pits can be seen on the surface of the NiTi coating after 30 h of cavitation erosion. There are also some intact areas on the surface. The diameter of some pits exceeds 100 µm. A magnified view of the selected areas of the NiTi coating after 30 h of cavitation erosion is shown in Figure 10c. The fracture shows typical brittle features, and some cracks exist at the bottom of the pits.

The shock wave or micro-jet generated by bubble collapse results in plastic deformation of the material. Repeated plastic deformation of the material's surface reduces its toughness. Due to the continuous impact of the shock wave or micro-jet, cracks are generated near the cavitation pits. The cracks expand toward defects in the interior and, as they continue to grow, particles are shed. The cavitation pits gradually expand and connect into pieces; eventually the whole surface is damaged by cavitation erosion.
Figure 11 shows the cross-sectional microstructure of the NiTi coating after 30 h of cavitation erosion. In general, cavitation erosion damage first occurs in or around defects [39]. The failure mechanism of the NiTi coating is proposed in Figure 12 based on the results of Figures 10 and 11. The removal of material is considerably slower in defect-free areas than in porous areas, and weak points could be removed easily in the initial stage of cavitation erosion. In Figure 11a, the damaged areas are large, and the depth of some pits exceeds 150 µm. At the bottom of the pits there is a crack perpendicular to the lamellae, as shown in Figure 11b; the formation of this pit should result from this kind of crack. Some cracks perpendicular to the lamellae already existed within the prepared NiTi coating before the cavitation erosion test (Figure 5). Cavitation erosion damage inevitably initiates at, or around, such cracks, which then propagate and expand during the test, so the pits become deeper and deeper. Furthermore, Figure 11c documents that two pits advance along a crack. When they encounter the cracks parallel to the lamellae shown in Figure 5, the two pits connect with each other. Accordingly, the NiTi coating suffers severe cavitation erosion damage.

For the NiTi plate, despite the time spent in the cavitation test, there is little mass loss. Only some particles that are easy to peel off are removed in the initial period [6,13]. It should be emphasized that the incubation period is quite long, which should be ascribed to the superelasticity of the NiTi plate. During the cavitation erosion test, a stress-induced martensitic transformation occurs in the austenite phase under external stress. The stress-induced martensite is highly deformable by twin boundary migration when external stresses are applied, leading to the reorientation of the martensitic plates [4,13]. Through these two mechanisms, most of the energy generated by the bubble collapse is absorbed, which reduces the effect of the stress and retards crack propagation.
In contrast, the prepared NiTi coating is destroyed by cavitation erosion at a faster rate and has no cavitation incubation period during the whole process of cavitation erosion. There are three main reasons for this. First, the coating contains large amounts of oxides and impurities. These brittle oxides and impurities have poor cavitation erosion resistance, and they are peeled off at the initial stage of cavitation erosion. There are many microcracks in these brittle materials, and the external stress can easily cause stress concentration near the microcracks. When the stress exceeds the critical value, the cracks expand rapidly, which results in brittle fracture of the materials. The repeated stress from the shock wave or micro-jet acting on the surface is huge during cavitation erosion, which drives microcrack propagation; the oxides and impurities undergo brittle fracture and fall off. Second, after the surface oxides and impurities are all peeled off, the cavitation damage develops along the longitudinal direction through the cracks perpendicular to the lamellae. The oxides and impurities completely fall off and form cavitation pits. In the cavitation pits, the cracks propagate along the interfaces of particles. This causes the cavitation damage areas to increase, new particles are exposed, and the damage repeats by the mechanism referenced above. Third, some pores exist in the NiTi coating. Cavitation bubbles are expected to nucleate and grow easily around these pores. The small damage pits are continuously attacked during further cavitation and grow wider and deeper [14].

In conclusion, the oxides, impurities and pores are the key reasons for the severe cavitation erosion damage of the prepared NiTi coating.
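The erosion comparisons in this section rest on cumulative mass-loss curves built from periodic weighings of the specimens. A minimal sketch of that bookkeeping is given below; the weighings, density, and exposed area are hypothetical (not the paper's data), and the conversion to mean depth of erosion (MDE = mass loss / (density × area)) follows the convention commonly used for vibratory cavitation tests such as ASTM G32.

```python
# Sketch: cumulative mass loss and mean depth of erosion (MDE) from
# periodic weighings of a cavitation-erosion specimen.
# All numbers below are hypothetical, not the paper's data.

def cumulative_mass_loss(masses_mg):
    """Cumulative loss (mg) of each weighing relative to the initial mass."""
    m0 = masses_mg[0]
    return [m0 - m for m in masses_mg]

def mean_depth_of_erosion_um(mass_loss_mg, density_g_cm3, area_cm2):
    """MDE in micrometres: mass loss / (density * exposed area)."""
    # mg -> g gives depth in cm; 1 cm = 1e4 um.
    return (mass_loss_mg / 1000.0) / (density_g_cm3 * area_cm2) * 1e4

# Hypothetical weighings every 5 h over a 30 h test (mg)
masses = [5000.0, 4998.2, 4995.9, 4993.1, 4989.8, 4986.0, 4981.7]
loss = cumulative_mass_loss(masses)
# NiTi density is roughly 6.45 g/cm^3; a 2 cm^2 exposed area is assumed.
mde = mean_depth_of_erosion_um(loss[-1], 6.45, 2.0)
```

A material with an incubation period, such as the NiTi plate discussed above, would show a flat initial segment in `loss`, whereas the as-sprayed coating loses mass from the start.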
Jet Impingement Erosion Test

Figure 13a shows the cumulative mass loss of jet impingement erosion for the NiTi coating and the NiTi plate in tap water with 2 wt % silica sand for 12 h. From the data, the jet impingement erosion resistance of the coating is slightly better than that of the NiTi plate.

Figure 13b shows the micro-beam XRD patterns of the NiTi coating before (as-sprayed) and after the 12 h jet impingement erosion. Micro-beam XRD with cobalt as the target was used because the eroded surface left by jet impingement erosion is so small that it cannot be detected by conventional XRD. Ni and NiTi2/Ni2Ti4O3 are not found in the XRD pattern of the NiTi coating after 12 h jet impingement erosion. The main phase is still austenite B2, and TiO and Ni4Ti3 are also found in the pattern. The peak intensity of austenite B2 of the as-sprayed NiTi coating becomes weak after 12 h jet impingement erosion compared to the original coating. The reason may be that
the partial austenite B2 on the surface of the coating transforms to martensite B19' and partial B19' transforms to the martensitic variant B19 after 12 h jet impingement erosion. Neither B19' nor B19 is found in the XRD pattern, which is probably due to their small amounts.

Figure 14 shows the surface morphology of the NiTi plate and the NiTi coating before the jet impingement test, and Figure 15 shows them after 12 h jet impingement erosion. The shape of the impinging particles is irregular, which causes severe damage; there is no intact place on the surface. The surfaces exhibit the typical damage features of ductile materials under jet impingement erosion. On the surfaces of the NiTi plate and the NiTi coating after 12 h jet impingement erosion, there is a small number of ploughed scratches, long furrows and ridges, but many overlapping and irregular concavities. At high impingement angles, material damage accumulates through fatigue, shear localization, microforging and extrusion processes. Many indented concavities and thin platelets result from this combined deformation of microforging and extrusion. These protruding thin platelets are partially impinged off by subsequently impinging particles, leaving many overlapping irregular concavities and residual protruding thin platelets attached to the nearby surface. These impinged surface morphologies are consistent with those reported for ductile materials [19,40].
Like the cavitation erosion test, the stress-induced martensitic transformation occurs in the austenite phase under external stress during the jet impingement erosion test. When external stresses are applied, the stress-induced martensite transforms to the martensite variant with the most favorable orientation through the formation of martensite twin boundary slip. In the NiTi plate or NiTi coating, the austenite phase B2 is the main component. The superelasticity of the austenite would also stabilize the fatigue crack tips and hinder crack propagation. This mechanism partially relieves the impact load and accommodates the impact strain elastically.

Figure 16 shows the BSE surface micrographs and cross-sectional microstructure of the NiTi coating after 12 h jet impingement erosion. Some small pits can be detected. Some of these pits existed before the jet impingement erosion test, while others were formed by the solid particle impacts. No apparent detachment of bulk material can be recognized. During jet impingement erosion, the oxides and impurities are struck by the impacting solid particles. Although this impacting causes some mass loss, the brittle phases are discrete, which prevents pits from connecting, so the mass loss caused by the oxides and impurities is small. In addition, the pores in the coating have a certain buffering effect against the impact of the particles. Therefore, the jet impingement erosion resistance of the NiTi coating is equivalent to that of the NiTi plate.
Compared with the NiTi plate, the NiTi coating presents inferior resistance to cavitation erosion, yet it shows nearly equivalent jet impingement erosion resistance. What causes such differences? The reasons are as follows.
First, during the cavitation erosion process, the velocity of the shock wave or micro-jet caused by bubble collapse is very high, far greater than the velocity of the particles during the jet impingement erosion process. In addition, the shock wave or micro-jet often impacts the material surface with high energy and an instantaneous high temperature. Therefore, the material damage caused by the shock wave or micro-jet is far greater than that caused by the impacting particles during jet impingement erosion. During the cavitation erosion process, especially when defects exist in the coating, the oxides and impurities can easily be broken down by the shock wave or micro-jet to form a pit. The pits then become cavitation sources and cause more bubble collapses nearby. In addition, the energy carried by the shock wave or micro-jet can make the existing cracks propagate and expand, which can cause a large amount of material to fall off. During the jet impingement erosion process, the oxides and impurities are also impacted, but the impact strength is far less than that of cavitation erosion, so they rarely fall off; the high hardness of these hard and brittle phases can even help improve the resistance to erosion. In the process of jet impingement erosion, no bulk material falls off.
Second, there are some pores in the NiTi coating. The pores become cavitation sources and cause more bubble collapses nearby. The bubbles can collapse at the bottom of the pores, where the shock wave or micro-jet impacts and makes the pores bigger and deeper. In the process of jet impingement erosion, in contrast, the pores in the coating have a certain buffering effect against the impact of the particles.

All in all, the effects of oxides, impurities, cracks and pores in the coating on cavitation erosion are far greater than those on jet impingement erosion. The NiTi plate has a relatively small quantity of defects compared with the NiTi coating. So, the NiTi coating presents inferior resistance to cavitation erosion but nearly equivalent jet impingement erosion resistance compared with the NiTi plate.

In future research, it is important to optimize the APS parameters that govern the quantity of oxides, impurities, cracks and pores in the NiTi coating. In addition, the interaction between the electrochemical corrosion behavior of the NiTi coating and cavitation erosion and jet impingement erosion also needs to be studied.

Conclusions

The NiTi coating was successfully prepared by APS from the pre-alloyed NiTi powder. The main phase of the NiTi coating is austenite B2. In addition, unmelted powders, parallel cracks, perpendicular cracks, pores, oxides and impurities were found in the NiTi coating.

During the cavitation erosion process, the velocity of the shock wave or micro-jet is far greater than that of the particles during the jet impingement erosion process. The energy carried by the shock wave or micro-jet can make the existing cracks propagate and expand. During the jet impingement erosion process, the oxides and impurities rarely fell off.
In addition, there are pores in the NiTi coating. The pores become a cavitation source during the cavitation erosion process, whereas in the process of jet impingement erosion the pores in the coating have a certain buffering effect against the impact of the particles.

All in all, the effects of defects in the NiTi coating on cavitation erosion are far greater than those on jet impingement erosion. As a result, the NiTi coating presents inferior resistance to cavitation erosion and nearly equivalent jet impingement erosion resistance compared with the NiTi plate.

Figure 1. (a) Morphology of the NiTi powder used as starting material for the APS process; (b) XRD pattern of the NiTi powder.

Figure 4. Cross-sectional microstructure of NiTi powder (a) without etching, (b) after etching, and (c) surface microstructure of the NiTi plate after etching.

Figure 6. EDS analysis of the NiTi coating and line scanning of the selected area from the NiTi coating.

Figure 8 shows the DSC curves of (a) the NiTi powder and (b) the as-sprayed NiTi coating. The DSC peak of the NiTi coating widens compared with that of the NiTi powder. Ni4Ti3, NiTi2/Ni2Ti4O3, Ni and TiO can induce fluctuations of the Ni:Ti ratio, which could be the main reason for the broadening of the NiTi coating DSC peak.

Figure 8. Phase transformation temperatures of the (a) NiTi powder and (b) as-sprayed NiTi coating measured by DSC at a heating and cooling rate of 10 K/min.

Figure 9. (a) Cumulative mass loss as a function of cavitation erosion time for the NiTi coating and NiTi plate; (b) XRD patterns of (A) the as-sprayed NiTi coating and (B) the NiTi coating after 30 h cavitation erosion.

Figure 10. Surface morphologies of (a) the NiTi plate, (b) the NiTi coating, and (c) magnification of the selected area of the NiTi coating after 30 h cavitation erosion.

The failure mechanism of the NiTi coating is proposed in Figure 12 based on the results of Figures 10 and 11.

Figure 11. Cross-sectional microstructure of (a) the NiTi coating and (b,c) magnification of the selected area of the NiTi coating after 30 h cavitation erosion.

Figure 12. Schematic diagram of cavitation erosion for the NiTi coating.

Figure 13. (a) Cumulative mass loss of jet impingement erosion for the NiTi coating and NiTi plate over 12 h; (b) micro-beam XRD patterns of (C) the as-sprayed NiTi coating and (D) the NiTi coating after 12 h jet impingement erosion.

Figure 14. Surface morphologies of (a) the NiTi plate and (b) the NiTi coating before the jet impingement test.

Figure 16. BSE micrographs of (a) surface and (b) cross-sectional microstructure of the NiTi coating after 12 h jet impingement erosion.

Table 4. Chemical compositions of the selected area from the NiTi coating in Figure 6.
Development of 10 single‐copy nuclear DNA markers for Euchresta horsfieldii (Fabaceae), a rare medicinal plant Premise of the Study Euchresta horsfieldii (Fabaceae) is a rare and endangered medicinal plant in Indonesia with restricted distribution. Single‐copy nuclear DNA (scnDNA) markers were developed for this species to facilitate further investigation of genetic diversity and population structure. Methods and Results We performed RNA‐Seq and de novo assembly of the transcriptome. Ten primer sets were developed for E. horsfieldii, all of which also amplified in E. japonica and E. tubulosa. Conclusions These scnDNA markers will be an important resource for the study of genetic diversity and population structure of E. horsfieldii and other species in the genus Euchresta. package RNA-Seq by Expectation Maximization (RSEM; Li and Dewey, 2011), and only assembled genes with fragments per kilobase of transcript per million mapped reads (FPKM) values greater than 1 were selected for subsequent analysis. Coding regions within these unigenes were predicted by TransDecoder version 5.0.1 (https://github.com/TransDecoder). We performed Pfam and BLASTP searches of these protein-coding genes against UniProtKB/Swiss-Prot to predict their putative functions. Their ortholog groups were compared against Ricinus communis L., Arabidopsis thaliana (L.) Heynh., Oryza sativa L., and Physcomitrella patens (Hedw.) Bruch & Schimp. and identified using an online version of OrthoMCL-DB (Chen et al., 2006; http:// orthomcl.org/orthomcl/). These ortholog groups were treated as putative single-copy genes (scnDNA). Approximately 21 million Illumina paired-end clean reads were generated (National Center for Biotechnology Information [NCBI] Sequence Read Archive [SRA] accession no. SRP149026). Clean reads were assembled into 61,796 unigenes with an N50 length of 2160 bp. 
Among these, 49,804 unigenes with FPKM greater than 1 were obtained, 27,405 protein-coding genes were predicted, and 1017 putative scnDNA were identified. We randomly selected 24 of these putative scnDNA for initial design of 72 PCR primers using Primer-BLAST (Ye et al., 2012). To validate the scnDNA markers, genomic DNA was extracted from two individuals each of E. horsfieldii (population BLBG) and E. japonica (populations SCB1, SCB2) (Appendix 1). Validation was done separately in these two species. DNA was extracted from approximately 15-20 mg of silica gel-dried leaf samples using the Plant Genomic DNA extraction kit (BioTeke, Beijing, China), following the manufacturer's instructions. DNA amplification was performed in a 20-μL reaction mixture containing 10 μL of 2× EasyTaq PCR SuperMix (TransGen Biotech Co.), 0.5 μL each of forward and reverse primer, 8.5 μL of ddH2O, and approximately 50 ng of template DNA. The PCR program was set as one cycle of 5 min at 95°C; 35 cycles of 30 s at 94°C, 90 s at 55°C, 60 s at 72°C; and a final extension of 10 min at 72°C. To check amplicon quality and quantity, each PCR product was run for 15 min of electrophoresis in 1% agarose gel at 120 V. Amplicons with only one clear band were sequenced using an ABI 3730xl DNA Sequencer (Tsingke Biological Technology, Guangzhou, China). Ten primer pairs showed single clear bands and good electropherogram quality from Sanger sequencing. Sequence data were then read, trimmed, and exported to FASTA in Chromas version 2.6.2 (Technelysium, South Brisbane, Queensland, Australia; http://technelysium.com.au). FASTA sequences were aligned using the MUSCLE algorithm available in MEGA 7.0 (Kumar et al., 2016) and then formatted manually to PHYLIP file format as input for further analysis. These 10 primers were assessed for polymorphism in three populations of E. horsfieldii from Indonesia and one population of E.
tubulosa from China, following the same protocol described above for marker validation. In total, we collected 38 wild individuals of E. horsfieldii from Indonesia and six individuals of E. tubulosa from China. Voucher specimens of E. horsfieldii were deposited in the Herbarium Hortus Botanicus Baliense (THBB), Bali Botanic Garden, Indonesian Institute of Sciences (LIPI), Bali, Indonesia (Appendix 1). PCR primer pairs and characteristics of the 10 newly developed scnDNA markers, GenBank accessions, and BLASTN hits are presented in Table 1. Genetic diversity measures of all samples derived from pairwise numbers of site differences, including nucleotide diversity (π), the Watterson estimator (θw), and related measures, were calculated using DnaSP version 5.10 (Librado and Rozas, 2009) (Table 2). The average number of alleles was 5.9 (5 to 7), π was 5.03 × 10−3 (1.75 × 10−3 to 7.5 × 10−3), and θw was 4.01 × 10−3 (1.65 × 10−3 to 7.23 × 10−3). There was no significant Tajima's D; however, EhoScn04a and EhoScn16a showed significant negative Fay and Wu's H, indicating non-neutrality of these two loci. Most loci showed no linkage disequilibrium after 10,000 permutations in Arlequin version 3.5.2.2 (Excoffier et al., 2005), except EhoScg15b and EhoScg24b in population SCHU. For the population genetic analysis, the PHYLIP file format was first converted to STRUCTURE input using SPADS 1.0 (Dellicour and Mardulyn, 2014), followed by conversion to GENEPOP format using PGDSpider2 (Lischer and Excoffier, 2012). Allelic richness of each locus and population and the number of private alleles (Appendix 2) were generated by PopGenReport version 3.0.0 (Adamack and Gruber, 2014). CONCLUSIONS In this study, we demonstrated the application of RNA-Seq to develop scnDNA markers for E. horsfieldii, and these markers were cross-amplified in E. japonica and E. tubulosa.
These scnDNA markers will provide useful resources to study the population genetic diversity and population structure of these rare medicinal plants. Note: A = number of alleles; h = haplotype diversity; k = average number of nucleotide differences; n = number of sequences used (four sequences of E. japonica, 12 sequences of E. tubulosa, and 80 sequences of E. horsfieldii); pn = proportion of polymorphic sites; π = nucleotide diversity; θw = Watterson estimator per site from S; S = variable sites. **Significant (α = 0.01).
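The diversity statistics reported above (nucleotide diversity π and the per-site Watterson estimator θw) were computed with DnaSP; as a self-contained illustration of what these quantities measure, the sketch below computes them from a tiny hypothetical alignment (the sequences are invented, not the E. horsfieldii data).

```python
# Sketch of per-site diversity statistics: nucleotide diversity (pi)
# and Watterson's theta, computed on a small hypothetical alignment.
from itertools import combinations

def segregating_sites(seqs):
    """Number of variable (segregating) sites in the alignment."""
    return sum(len(set(col)) > 1 for col in zip(*seqs))

def nucleotide_diversity(seqs):
    """pi: mean pairwise differences per site."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * length)

def watterson_theta(seqs):
    """theta_w per site: S / (a_n * L), with a_n = sum_{i=1}^{n-1} 1/i."""
    n, length = len(seqs), len(seqs[0])
    a_n = sum(1.0 / i for i in range(1, n))
    return segregating_sites(seqs) / (a_n * length)

# Hypothetical alignment: 4 sequences, 10 bp each
aln = ["ACGTACGTAC",
       "ACGTACGTAT",
       "ACGAACGTAC",
       "ACGTACGTAC"]
```

Tajima's D, also reported above, is essentially a normalized difference between these two estimates of the population mutation rate.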
Enhancement in the performance of nanostructured CuO–ZnO solar cells by band alignment In this study, we investigated the effect of cobalt doping on band alignment and on the performance of nanostructured ZnO/CuO heterojunction solar cells. ZnO nanorods and CuO nanostructures were fabricated by a low-temperature and cost-effective chemical bath deposition technique. The band offsets between Zn1−xCoxO (x = 0, 0.05, 0.10, 0.15, and 0.20) and CuO nanostructures were estimated using X-ray photoelectron spectroscopy, and a reduction of the conduction band offset with CuO was observed. This also results in an enhancement in the open-circuit voltage. It was demonstrated that an optimal amount of cobalt doping can effectively passivate ZnO-related defects, resulting in a suitable conduction band offset, suppressing interface recombination, and enhancing conductivity and mobility. The capacitance–voltage analysis demonstrated the effectiveness of cobalt doping in enhancing the depletion width and built-in potential. Through impedance spectroscopy analysis, it was shown that the recombination resistance increased up to 10% cobalt doping, indicating decreased charge recombination at the interface. Further, it was demonstrated that the insertion of a thin layer of molybdenum oxide (MoO3) between the active layer (CuO) and the gold electrode hinders the formation of a Schottky junction and improves charge extraction at the interface. The ZnO/CuO solar cells with 10% cobalt doped ZnO and a 20 nm thick MoO3 buffer layer achieved the best power conversion efficiency of 2.11%. Our results demonstrate the crucial role of band alignment in the performance of ZnO/CuO heterojunction solar cells and could pave the way for further progress in improving conversion efficiency in oxide-based heterojunction solar cells.

Introduction

Fabrication of photovoltaic (PV) devices is critically dependent on the availability of low-cost materials used in the fabrication.
1 For third-generation solar technologies, nanostructured materials are integrated into scalable, robust, and low-cost device structures that electronically couple the photoactive nanostructures to an external circuit. Metal oxides are abundant in nature and can be synthesized by inexpensive wet chemical methods with tunable electrical properties. With its excellent minority carrier diffusion length, high absorption coefficient, non-toxicity, and environmentally friendly deposition methods, CuO has always been considered a potential absorber material for low-cost photovoltaic applications. 1,2 For a single p-n junction solar cell, the optimum bandgap for maximum efficiency is about 1.34 eV, so the optical properties of CuO (bandgap 1.4 eV) make it an excellent candidate as a semiconductor absorber material for solar cell applications. 3 Based on a Shockley-Queisser analysis of CuO-based solar cells, a theoretical power conversion efficiency of around 30% is predicted when considering only radiative recombination. 4 As there is no n-type CuO available, solar cells must be constructed as heterojunctions with another material as an n-type, wide bandgap window material. 5 One n-type material that can be used for oxide-based solar cells is zinc oxide (ZnO) due to its wide bandgap and a relatively high absorption coefficient. 3,5 Homo-junctions are challenging to form for ZnO because the p-type conductivity is very sensitive to the synthesis and post-treatment conditions, 5,6 although there are a few reports of p-type ZnO in the literature. 7,8 Therefore, one of the best solutions for fabricating oxide-based solar cells is to study the ZnO/CuO heterojunction as a stable p-n junction. 3,9,10 Previously, a few efforts have focused on fabricating p-CuO/n-ZnO heterojunctions, [11][12][13] even though simple estimates predict large valence band offsets (VBOs) and conduction band offsets (CBOs) between the two semiconductors.
14 Therefore, it is imperative to control the band alignment in CuO/ZnO for further enhancement of the power conversion efficiency by altering the conduction and valence band offsets. A heterojunction device based on ZnO nanowires coated with CuO nanoparticles has been reported with a power conversion efficiency of up to 0.3%. 15 Omayio et al. 16 reported a maximum efficiency of 0.23% for Sn-doped ZnO/CuO solar cells. In another report, 17 the maximum efficiency for a CuO/ZnO nanocomposite-based device is 1.1 × 10⁻⁴%. Using the pulsed laser deposition (PLD) method, Bhaumik et al. 18 reported the highest power conversion efficiency of 2.88% for a device of CuO nanostructures decorated with nanoparticles. For heterojunction solar cells, the interface between the n-type window layer and the active absorber layer plays a critical role in determining overall performance because charge injection and recombination are directly related to the properties of the nanomaterials used. [19][20][21] The optimization of the properties of the window layer plays a critical role in optimizing the performance of the solar cells. Surface modification and doping of various metal oxides have been considered as approaches to improving the optical and electrical properties of the n-type electron transport window layer. 21 For example, ZnO doped with metal ions such as Mg, Cs, Li, and Al has been used as an efficient electron transport layer to enhance efficiency in various perovskite, [22][23][24] polymer, 25 quantum dot, 26 and copper indium gallium selenide (CIGS) 27 based solar cells. Among transition metal dopants in ZnO, cobalt has a relatively higher solubility limit than other kinds of dopants. 28 Due to this property, cobalt is an excellent candidate as a dopant in ZnO nanorods for tuning optical and electrical properties.
Due to the slightly smaller ionic radius of Co²⁺ (58 pm) than Zn²⁺ (60 pm) under the same coordination number, as in wurtzite ZnO, incorporation of cobalt into the ZnO layer to form Zn1−xCoxO would significantly alter the bandgap, mainly due to the lifting of the conduction band minimum (CBM) and valence band maximum (VBM). 8,12 With this composition, the Fermi level in the Zn1−xCoxO layer moves up, which leads to an enhanced open-circuit voltage (VOC), which in turn results in an enhancement in the device power conversion efficiency (PCE). Fermi-level pinning can result in the formation of a Schottky barrier even between CuO and a gold electrode, despite the close alignment between the Fermi level of gold and the valence band edge of CuO. 29 Similar back-contact Schottky barriers have also been observed in cadmium telluride (CdTe) thin-film solar cells 30 and can lead to a substantial reduction in the open-circuit voltage, fill factor, and PCE. Some previous reports on the incorporation of molybdenum oxide (MoO3) into photovoltaic devices have attributed an increase in efficiency to the reduction in series resistance resulting from improved hole extraction from the p-type active layer through the high work function of MoO3, 31,32 while others have credited it to a decrease in leakage current and a concomitant rise in shunt resistance, identifying the electron-blocking property as the primary contribution of MoO3. 33,34 Our approach in this study is to employ a low-resistance contact to p-CuO using molybdenum oxide (MoO3) as the back-contact buffer layer for improving solar cell performance. To understand the effect of MoO3 thickness on the ZnO/CuO heterojunction, layers of various thicknesses of MoO3 were deposited onto the CuO layer by a spin coating method. We found that the thickness of the MoO3 layers had a significant effect on the overall performance of the ZnO/CuO solar cells.
In this work, we report a shift in the band offset, extracted from X-ray photoelectron spectroscopy (XPS), with cobalt doping in ZnO nanorods, and its effect on the performance of ZnO/CuO heterojunction solar cells. The performance of the heterojunction PV cells is correlated with the conduction band offsets at various levels of cobalt doping. Besides, it is verified that the insertion of a thin layer of molybdenum oxide (MoO3) between the CuO absorber and the gold electrode hinders the formation of a back-contact Schottky junction. We also present a detailed analysis of the influence of the MoO3 buffer layer thickness on the current-voltage (I-V) characteristics of the devices.

Experimental details and characterization techniques

The detailed process of fabricating ZnO and cobalt doped ZnO nanorods using chemical bath deposition (CBD) was discussed in our previous studies. [35][36][37][38][39][40][41] For CuO nanostructures, the growth mechanism was a modified chemical bath deposition (CBD) method in which equimolar 0.1 M copper nitrate trihydrate (Cu(NO3)2·3H2O) and hexamethylenetetramine (C6H12N4) were dissolved in deionized water under constant stirring. A few drops of ammonium hydroxide solution (30-33% NH3 in H2O) were added to the resulting precursor solution until the pH reached 8. The seeded substrate was submerged in the precursor solution and heated at 90 °C for 4-8 hours in a convection oven. Finally, samples were rinsed with deionized (DI) water, dried in the air, and annealed at 300 °C for an hour in the air. For the buffer layer, MoO3 solutions were synthesized by a thermal decomposition method using ammonium heptamolybdate ((NH4)6Mo7O24·4H2O) as a precursor. 42 (NH4)6Mo7O24·4H2O was dissolved in deionized water (20 ml) and heated at 80 °C for 1 hour in the air.
The precursor decomposed into three components, MoO3, NH3, and H2O, among which the NH3 evaporated into the air, and MoO3 is expected to be the significant solute in the solution (a small amount of NH3 can remain in the solution). The product is considered to be a layered-structure MoO3, which forms a long molecular chain. 43 The resulting precursor solution was diluted with DI water to various concentrations (0.2 wt%, 0.5 wt%, 1 wt%, and 2 wt%) and was used to form MoO3 films by spin coating techniques. For >1 wt%, the layer was very thick and acted as an insulator, and 0.2 wt% did not produce improvements in solar cell properties either. We found an efficiency enhancement only with the 0.5 wt% solution. Finally, the 0.5 wt% precursor solution was spin-coated on top of the CuO layer at different spin speeds and deposition times to form 20-40 nm thick MoO3 films (Fig. S1†). For electrical measurements, the DC sputtering technique was used to deposit high-quality gold electrodes. The thicknesses of the ZnO seed layer, ZnO nanorods, CuO nanostructures, MoO3 buffer layer, and gold electrodes were approximately 50 nm, 450 nm, 1.6 μm, 20-40 nm, and 100 nm, respectively. The active device area was 0.16 cm². The schematic design of the fabricated solar cell is shown in Fig. 1. Morphological characterization of the prepared ZnO nanorods, CuO nanostructures, and MoO3 layer was performed using an FEI Inspect S50 scanning electron microscope (SEM). Crystal structures were analyzed by the X-ray diffraction (XRD) technique using a Rigaku SmartLab X-ray diffractometer (Cu-Kα radiation, λ = 1.54056 Å). Rietveld refinement was performed using the Rigaku PDXL XRD analysis software. The absorption spectra were measured with a VARIAN Cary 50 Scan UV-Vis spectrometer. The thickness of the different nanostructured layers was measured using the n&k 1200 Analyzer.
XPS spectra were acquired using a dual anode X-ray source, XR 04-548 (Physical Electronics), and an Omicron EA 125 hemispherical energy analyzer with a resolution of 0.02 eV in an ultra-high vacuum (UHV) chamber with a base pressure <10⁻¹⁰ torr. The X-ray source used was the Al-Kα source operated at 400 W, with an X-ray incidence angle of 54.7° and normal emission. Hall measurements were performed using an MMR H-50 Hall/van der Pauw controller. The current-voltage (J-V) characteristics of the solar cells were measured by a Keithley 2450 source meter with an AM 1.5 Global spectrum source for illumination. External quantum efficiency (EQE) spectra were measured using an Oriel IQE 200 instrument. Capacitance measurements were performed using an Agilent/HP 4274A multi-frequency LCR meter with external biasing. Impedance spectra were analyzed in the frequency range of 1 Hz to 10 MHz with a 20 mV ac voltage using an Omicron Bode 100 analyzer. EIS spectrum analyzer 1.0 44 software was used to model the Cole-Cole plots obtained from the impedance measurements.

Structural analysis

The crystal structure of the ZnO nanorods was analyzed using the X-ray diffraction (XRD) technique. Fig. 2(a-e) shows the XRD spectra of 0-20% cobalt-doped ZnO nanorods and the corresponding Rietveld analysis (see Table S1†). All the peaks in the X-ray diffraction patterns were well matched to the ZnO wurtzite-phase structure (JCPDS no. 36-1451). 45 All the undoped and cobalt doped ZnO samples are highly c-axis oriented, and the (002) diffraction peak position gradually shifted toward lower diffraction angles with higher cobalt doping. This indicates that the Co²⁺ ions were well-substituted into Zn sites during doping without the creation of a secondary phase (within the detection limits). For 10%, 15%, and 20% cobalt doping, we observe an additional spinel ZnCo2O4 phase, a typical spinel structure with Zn²⁺ ions in the tetrahedral sites and Co³⁺ occupying the octahedral sites.
In our previous studies, 39 we discussed the detailed Rietveld analysis of these nanomaterials and found that the percentage of the wurtzite ZnO phase in the samples decreases and the secondary phase of ZnCo2O4 increases with excessive cobalt concentration. However, for 10% cobalt doped samples, the content of the ZnCo2O4 secondary phase is low (<5%) in comparison to the more highly doped samples. We also observed that the crystallite size increases from 50.01 nm to 73.24 nm as cobalt doping is increased from 0 to 20%. The increase in crystallite size reveals the presence of cobalt in the ZnO lattice. During cobalt doping, distortion is produced by dopant atoms due to the mismatch between the ionic radii of Zn²⁺ and Co²⁺. This mismatch creates distortions at various locations across the ZnO lattice, and at higher cobalt concentrations the distortion centers increase, which increases the average crystallite size. 46 Fig. 2(f) displays the XRD spectrum of the fabricated ZnO/CuO/MoO3 heterojunction solar cell. Diffraction peaks matching the hexagonal wurtzite structure of ZnO (JCPDS no. 36-1451) 45 can be identified in the XRD plot. The intense (002) peak points out the crystalline nature of the ZnO nanorods, which grow along the c-axis. Fig. 2(f) also shows the presence of diffraction peaks belonging to the monoclinic tenorite structure of CuO (JCPDS no. ). 47 Previous reports 48,49 have shown that CuO film deposited on ZnO film has peaks corresponding to (−111), (111), and (202), which is consistent with our XRD results. The XRD results confirm that there is no other secondary impurity crystalline phase (Cu, Cu2O, or Cu(OH)2) in our device. Furthermore, the XRD patterns also show sharp diffraction peaks consistent with the standard values of the orthorhombic MoO3 crystal structure (JCPDS card no. 76-1003). 50,51 Fig. 3 shows typical SEM images of the ZnO nanorods, CuO nanostructures, and MoO3 thin film layers.
From the SEM images, we infer that the ZnO consists of hexagonal, perpendicularly aligned nanorods, and the CuO has a unique cone-like nanostructure measuring less than 37 ± 5 nm. The average diameter of the undoped ZnO nanorods is about 94 ± 4 nm, and the length is about 453 ± 9 nm. The morphology and size of the prepared ZnO nanorods are well suited to act as conducting paths for electrons, and the CuO nanostructures as light-trapping centers, which can increase the rate of charge carrier generation.

XPS analysis and band offsets

X-ray photoelectron spectroscopy (XPS) was performed to investigate the incorporation of cobalt into the ZnO nanostructures. Co-O complexes are formed, thereby reducing oxygen-related defects, which in turn results in the selective passivation of deep-level defects up to a certain doping level (in our case, 10% doping). This type of passivation of native ZnO defects was observed in a previous study using Mg doping (up to 4%). 53 Therefore, we can say that the cobalt dopant can play a role in suppressing the formation of oxygen vacancies. Also, the weight (%) of Peak-C in the O 1s spectrum of 10% cobalt doped ZnO is slightly smaller than the corresponding peak area in the spectrum of undoped ZnO, which might be associated with a reduction in the concentration of hydroxyl groups in the ZnO due to cobalt doping. 21 However, the weight (%) of Peak-C is slightly enhanced at higher cobalt doping (15% and 20%) due to cobalt clustering. Similarly, the high-resolution XPS spectra of the Cu 2p states are shown in Fig. 6. The core-level spectra were fitted with a Pseudo-Voigt (mixed Lorentzian-Gaussian) function employing a Shirley background correction. Peaks at around 933.98 eV and 953.86 eV were observed, which are assigned to the Cu 2p3/2 and Cu 2p1/2 states in CuO.
23 In addition to the main peaks, strong shake-up satellite peaks are also detected on the higher binding energy (BE) side, at 941.28 eV, 943.67 eV, and 962.33 eV, which are characteristic of the partially filled d-block (3d⁹) of Cu²⁺ ions. 49 The peak positions and the presence of the shake-up satellites indicate the formation of pure CuO, which is supported by our XRD results. Also, the position of the Cu 2p peaks in the ZnO/CuO heterojunction samples shifts to higher binding energy compared to pure CuO nanostructures. The shift of the Cu 2p peaks to higher binding energy can be explained by the strong interaction between the ZnO and CuO nanostructures in our heterojunction. 49 Similar results have been reported earlier by researchers performing XPS studies of ZnO/CuO nanostructures prepared by different routes (RF sputtering, hydrothermal, electrospinning). 9,13,49 A critical parameter limiting the performance of photovoltaic cells is the band offset at the heterojunction interface. The band offsets present at a heterojunction solar cell govern the charge transport across the junction interface. 54 To determine the band offsets and to explain the structure of the ZnO/CuO heterojunction, the valence band offset (VBO) was measured by calculating the binding energy difference between the valence band maximum (VBM) and the core level (CL) using XPS analysis. 26 Fig. S3† shows the valence band (VB) spectra of undoped and cobalt doped ZnO along with CuO nanostructures; VBM values of 2.64 eV to 1.80 eV (for 0-20% cobalt doped ZnO) and 0.57 eV (for CuO) are extrapolated by linear fitting. For the calculation of the VBO, we use the core-level measurement technique proposed by Kraut et al. 55 In this method, the VBO of the ZnO/CuO interface can be calculated using the following equation: ΔE_VBO = (E_CL − E_VBM)_ZnO − (E_CL − E_VBM)_CuO − ΔE_CL(ZnO/CuO), where E_i^s denotes the binding energy of the core level i for the sample s, E_VBM^s denotes the VBM for the sample s, and ΔE_CL(ZnO/CuO) is the core-level binding energy difference measured across the heterojunction.
Further, the conduction band offset (CBO) was calculated using the bandgap values of ZnO and CuO and the VBO through the following relation: 54 ΔE_CBO = E_g^ZnO − E_g^CuO − ΔE_VBO, where E_g^ZnO and E_g^CuO are the bandgaps of ZnO and CuO calculated from the Tauc plots of the absorption spectra (see Fig. S4†). The calculated values of the VBO and CBO are shown in Table 1. Similar results (VBO = 2.83 eV and CBO = −0.73 eV) were obtained by Hussain et al. 13 for an undoped ZnO/CuO nanocomposite. When the cobalt doping level was below 10%, the conduction band of ZnO was lower than that of the CuO nanostructures; i.e., the value of the CBO was negative, forming a cliff structure at the interface with type-II band alignment. 21,54 When the cobalt doping concentration was above 10%, the conduction band of ZnO was higher than that of the CuO; i.e., the CBO was positive, forming a spike structure with type-I band alignment. Many previous studies 54,[56][57][58] showed that a large negative CBO increases the probability of recombination at the interface, while a large positive CBO produces a barrier that hinders the collection of photogenerated carriers. Accordingly, a small positive CBO with a notch-like structure at the junction is necessary for stronger band bending to prevent the injected electrons from going to the junction and to decrease the chances of recombination. We calculated a positive CBO of 0.07 eV for the 10% cobalt doped ZnO/CuO device (Fig. 7). For the 15% and 20% cobalt doped ZnO heterojunctions, the CBO spike is higher (0.4 eV), and as a result recombination becomes faster again. This consequence might be due to the increase in trap density caused by excessive cobalt doping, which creates a negative effect of the trap states on the recombination mechanism despite the positive effect of the notch or spike-like structure.
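The VBO and CBO calculations reduce to simple arithmetic once the XPS and Tauc-plot quantities are in hand. A minimal sketch of the Kraut-style bookkeeping under the sign convention used here; the numeric inputs below are placeholders for illustration, not the measured values.

```python
def kraut_vbo(cl_minus_vbm_zno, cl_minus_vbm_cuo, delta_cl_interface):
    """Kraut method: VBO = (E_CL - E_VBM)_ZnO - (E_CL - E_VBM)_CuO - dE_CL,
    with dE_CL the core-level separation measured on the heterojunction (eV)."""
    return cl_minus_vbm_zno - cl_minus_vbm_cuo - delta_cl_interface

def cbo_from_vbo(e_g_zno, e_g_cuo, vbo):
    """CBO from the two bandgaps and the VBO: dE_CBO = Eg(ZnO) - Eg(CuO) - dE_VBO.
    Negative values mean a cliff (type-II alignment); positive values a spike (type-I)."""
    return e_g_zno - e_g_cuo - vbo
```

For example, with placeholder bandgaps Eg(ZnO) = 3.30 eV and Eg(CuO) = 1.40 eV, a VBO of 2.00 eV gives a CBO of −0.10 eV, a small cliff at the interface.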
Conductivity and Hall measurements

It is well known that the electrical properties of the interfacial layers are essential for the performance of solar cell devices because they affect charge transport at the interface. 59 Fig. 8(a) shows the conductivity of 0-20% cobalt doped ZnO nanostructures. The electrical conductivity increases with increasing cobalt concentration and shows a maximum value of 20.55 Ω⁻¹ cm⁻¹ at a 10% cobalt doping level, but beyond this doping concentration the conductivity decreases again. At relatively low doping concentrations (≤10%), electrons from the dopant play a dominant role in the film, and as less oxygen vacancy scattering occurs, the conductivity increases. 25,59 At cobalt doping concentrations above 10%, the disorder produced in the lattice (due to cobalt clustering) increases the efficiency of scattering mechanisms such as phonon scattering and ionized impurity scattering, which in turn causes a decrease in conductivity. 60 ZnO with higher conductivity is beneficial for reducing the ohmic voltage loss during electron transport within the layer, resulting in an enhancement of the VOC and FF of the solar cell, which will be discussed later. Hall measurements show n-type conductivity in all undoped and doped ZnO samples. We observed that the carrier concentration and mobility of the cobalt doped ZnO films increase to 3.18 × 10¹⁸ cm⁻³ and 40.41 cm² V⁻¹ s⁻¹, respectively, peak values for the 10% cobalt doped sample. However, beyond the 10% cobalt doping level, the mobility starts to decrease, whereas the carrier concentration becomes saturated. This can be explained by excessive cobalt doping: more defects are formed and act as scattering centers, resulting in the formation of sites capable of trapping carriers.
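The quoted peak conductivity is consistent with the quoted Hall data through σ = q·n·μ. A quick cross-check using only the reported numbers and the elementary charge:

```python
Q = 1.602e-19  # elementary charge, C

# Hall data quoted in the text for the 10% cobalt doped ZnO film:
n_carriers = 3.18e18   # carrier concentration, cm^-3
mobility = 40.41       # mobility, cm^2 V^-1 s^-1

# Drude relation sigma = q * n * mu, in Ohm^-1 cm^-1
sigma = Q * n_carriers * mobility
```

The product comes out to about 20.6 Ω⁻¹ cm⁻¹, matching the reported 20.55 Ω⁻¹ cm⁻¹ to within rounding of the quoted inputs.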
Aer trapping the charge carriers, the traps became electrically charged, creating a potential energy barrier, which hindered the transport of photogenerated carriers from one crystallite site to another, thereby reducing their mobility and conductivity. 61,62 A similar result was obtained by Wu et al. 63 for 0-10% cobalt doped ZnO and proposed that the substitution of Co ++ for Zn ++ could improve the electrical conductivity and mobility up to certain limit due to the increase in carrier concentration. The increased mobility resulting from increasing the cobalt doping also ensures that electrons can quickly transport to the FTO electrode. Hall measurement on CuO lms shows p-type conductivity with This journal is © The Royal Society of Chemistry 2020 RSC Adv., 2020, 10, 7839-7854 | 7845 a carrier concentration of 2.93 Â 10 16 cm À3 , mobility of 0.74 cm 2 V À1 s À1 , and conductivity of 3.46 Â 10 À3 U À1 cm À1 . I-V and EQE analysis Typical current density-voltage (J-V) characteristics of cobalt doped ZnO/CuO heterojunction solar cell (champion cell) under standard AM1.5G illumination are displayed in Fig. 9(a). To assure credible device performance, the statistical parameter distribution of eight devices were plotted in Fig. 10 To nd out the effect of cobalt doping in ZnO/CuO heterojunction on charge recombination mechanism, the diode formation of the solar cells was studied by examining the dark J-V characteristics. The diode parameters in the solar cells, such as the ideality factor (n) and reverse saturation current density (J 0 ), are important indicators of the dominant recombination mechanism. The dark J-V plot ( Fig. 9(b)) reassures that the ZnO/ CuO heterojunction display rectifying performance, which proves the p-n junction formation. Fig. 9(b) shows the semi-log J-V plot, and the inset shows experimental dark J-V data (0% cobalt) tted with a generalized single diode Shockley equation. 
64 J = J0{exp[q(V − J·RS)/(n·kB·T)] − 1} + (V − J·RS)/RSH, where J0, n, RSH, RS, q, kB, and T are the reverse saturation current density, the diode ideality factor, the shunt resistance, the series resistance, the electronic charge, the Boltzmann constant, and the temperature of the device, respectively. Fitted parameters extracted from the single-diode Shockley equation are displayed in Fig. 11 (see Table S3† for average values). For undoped ZnO, an average reverse saturation current density, series resistance, shunt resistance, and diode ideality factor of 8.38 × 10⁻⁴ mA cm⁻², 30.88 Ω cm², 736.4 Ω cm², and 3.39, respectively, were obtained. This high reverse leakage current indicates the existence of significant interface defects and nonradiative recombination at the ZnO/CuO junction. The high value of the leakage current is also responsible for limiting the open-circuit voltage 65,66 as well as the fill factor of the device, which can be seen in our device performance. As seen in Fig. 11, the dark reverse leakage current of the ZnO/CuO heterojunction decreased with increasing cobalt doping level up to 10%, indicating suppressed recombination. The continuous decrease in the dark current of the solar cells may be due to the increased built-in potential (explained later in the C-V analysis) in the cobalt doped ZnO layer, attributed to the raised Fermi level of cobalt-doped ZnO. 67 In our case, the ideality factor is higher than 2, and it is reported that if the ideality factor is higher than 2, the defect activity is dominated by a recombination process with traps distributed at the interface and surface. 66,68 Besides, the inhomogeneous thickness of the ZnO nanorods and the non-uniformity of interfacial charges could contribute to the high ideality factor. The existence of vacancies and mid-gap trap states associated with nonidealities at both the ZnO and CuO surfaces also forms a substantial number of interfacial states at the ZnO/CuO junction.
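Because J appears on both sides of the generalized single-diode equation, reproducing a J-V curve from fitted parameters requires an implicit solve. A minimal sketch (not the authors' fitting code) that evaluates the dark current at forward bias by bisection, using the average fitted values quoted above for the undoped device and an assumed measurement temperature of 300 K:

```python
import math

Q_OVER_KB = 11604.5  # q/kB in K/V, so the thermal voltage is Vt = n*T/Q_OVER_KB

def dark_current(v, j0=8.38e-7, n=3.39, rs=30.88, rsh=736.4, t=300.0):
    """Solve J = J0*(exp((V - J*Rs)/Vt) - 1) + (V - J*Rs)/Rsh for the dark
    current density J (A/cm^2) at forward bias V >= 0 by bisection.
    j0 is the quoted 8.38e-4 mA/cm^2 converted to A/cm^2; rs, rsh in Ohm*cm^2."""
    vt = n * t / Q_OVER_KB
    def residual(j):
        vd = v - j * rs  # voltage dropped across the junction itself
        return j0 * math.expm1(vd / vt) + vd / rsh - j
    lo, hi = 0.0, 1.0    # residual(j) is monotonically decreasing in j
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Evaluating on a bias grid reproduces the rectifying dark characteristic: negligible current at V = 0 and a super-linear rise with forward bias.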
38 These nonidealities might be responsible for defect-mediated tunneling, 69,70 which drives photogenerated carriers in the reverse direction. The tunneling process mediated by defect densities could cause shunting of the device and result in a lower reverse breakdown voltage. 39 This type of behavior can be seen in Fig. 9, with a sizeable reverse leakage current and low photocurrent. The average value of the ideality factor first decreased from 3.39 to 2.98 as the cobalt content was increased to 10% and then increased to 3.84 with a further increase in cobalt concentration. The 10% cobalt-doped device exhibits the smallest diode ideality factor, indicating that this device shows the best suppression of interfacial recombination, which might be due to a small positive CBO, an increased built-in potential, and suitable structural properties such as a lower defect density. However, even for the 10% cobalt doped cells, the ideality factor is still high. This substantial value indicates that space-charge recombination still dominates the device loss mechanism. Fig. 12 shows the external quantum efficiency (EQE) of typical ZnO/CuO heterojunction solar cells with the best PCE at each cobalt content (0-20%). By integrating the EQE spectra with the standard solar spectrum, we found calculated JSC values of around 8.52, 8.96, 9.12, 7.96, and 6.68 mA cm⁻², respectively, for 0-20% cobalt doped ZnO. These JSC values from EQE agree with those obtained from the J-V analysis. We observe poor EQE in the long-wavelength region (>550 nm), meaning poor carrier collection in this region, which explains the lower short-circuit current density (JSC). This indicates the loss of deeply absorbed photons due to recombination in the bulk and depletion regions of the device. 66 In the wavelength range corresponding to the ZnO layer absorption, the EQE of the doped cell is slightly inferior to that of the cell with the un-doped absorber layer.
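The JSC values quoted above come from weighting the measured EQE by the photon flux of the standard spectrum and integrating over wavelength. A minimal sketch of that integration (trapezoid rule); the wavelength grid and spectra passed in are illustrative placeholders, not the AM1.5G tables or the measured EQE:

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
Q = 1.602e-19   # elementary charge, C

def jsc_from_eqe(wavelengths_nm, eqe, irradiance_w_m2_nm):
    """Trapezoid integral of q * EQE(lambda) * photon_flux(lambda) d(lambda).
    irradiance_w_m2_nm is the spectral irradiance in W m^-2 nm^-1;
    returns JSC in mA cm^-2."""
    def point(i):
        lam_m = wavelengths_nm[i] * 1e-9
        photon_flux = irradiance_w_m2_nm[i] * lam_m / (H * C)  # photons m^-2 s^-1 nm^-1
        return Q * eqe[i] * photon_flux                        # A m^-2 nm^-1
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        total += 0.5 * (point(i) + point(i + 1)) * dl          # A m^-2
    return total * 0.1                                         # A m^-2 -> mA cm^-2
```

For instance, a flat EQE of 100% over 500-600 nm under a flat 1 W m⁻² nm⁻¹ irradiance yields a few mA cm⁻², the same order as the values quoted from Fig. 12.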
Also, the EQE results show slight band tailing, which is attributed to defect complexes present in the CuO layer. 71,72 However, the increase in EQE (from 38% to 43%) with 10% cobalt doped ZnO likely results from factors such as the improvement in the conduction band offset and hence better charge collection efficiency due to the reduction in interface recombination. Thus, further improvement of these solar cells should focus on enhancing the heterojunction interface and conduction band alignment to improve the VOC, by doping the CuO absorber layer as well.

Impedance and capacitance-voltage (C-V) analysis

To gain information about the carrier recombination process, we investigated the ZnO/CuO devices using impedance spectroscopy. Fig. 13 displays the impedance spectra of the devices in dark conditions. The impedance patterns of the different undoped and cobalt doped samples were fitted using the equivalent circuit model shown in the inset of Fig. 13(a). The Cole-Cole plot reveals two distinct features: a small arc at high frequency and a large arc at low frequency. Here, the high-frequency feature (Fig. 13(b)) is ascribed to the charge transfer process, while the lower-frequency element contains information about the carrier recombination process. [73][74][75][76] In addition, the value of the starting point of the real part of the Cole-Cole plot corresponds to the series resistance RS. 22 In our devices, RS is related to resistance arising from external wires or the substrates. In the equivalent circuit, the resistance Rct, known as the charge transfer resistance, is associated with the ZnO/CuO interface or the CuO/Au contact, and the selective contact capacitance (CPEct) with the charge buildup at the interfaces, whereas the elements Rrec and CPEμ are associated with the recombination resistance and the chemical capacitance of the system. The fitted parameters based on the equivalent circuit are listed in Table 2.
In the equivalent circuit, the constant phase element (CPE) accounts for the deviation of the capacitance from an ideal capacitor, since several theories such as the leaky-capacitor and non-uniform current distribution models have been proposed to account for non-ideal capacitive behavior arising from surface roughness, porosity, and various surface states. [77][78][79] We also observed that CPEct is lowest for the 10% cobalt-doped sample, which implies less charge accumulation and efficient charge transport by the 10% cobalt doped ZnO layer. Since CPEct is related to charge collection at the interfaces, a small value is favorable for increased device performance. 80 We observed that the 10% cobalt-doped ZnO based device gives the lowest Rct value (573.09 Ω), indicating that charge extraction is most efficient at the interface of this sample compared to all the investigated samples, a result that is consistent with the conductivity and electron mobility values. The lower Rct values must contribute to the highest JSC value of the 10% cobalt-doped device. 75,81 In contrast, when various cobalt concentrations were introduced into the ZnO nanostructures, Rrec increased significantly from 6582 Ω for the cell with pristine ZnO to 7973 Ω for the cell with 10% Co-doped ZnO. The much larger Rrec for the device with Co:ZnO prepared at a cobalt concentration of 10% originates from fewer defect-assisted traps, indicating effective suppression of the charge recombination and leakage current. 73 Furthermore, due to the more favorable band alignment between the 10% cobalt doped ZnO and CuO, previously discussed in the XPS analysis, the extraction of charge carriers is energetically favored; thus decreased interfacial charge accumulation occurs, resulting in a decrease in carrier recombination at the ZnO/CuO interface.
This is consistent with the increase in R rec observed in all (8 samples for each doping level) 0-20% cobalt-doped devices, which denotes less frequent recombination events. The higher recombination resistance (R rec ) of the 10% cobalt-doped sample from impedance analysis also confirms its more substantial shunt resistance (R SH ) from the J-V measurement, suggesting that charge recombination is unfavorable at the interface between the ZnO and CuO layers. This result shows a clear correlation between a higher R rec (lower recombination rate) and a higher V OC of the device. In other words, the samples with cobalt doping (up to 10%) reveal that the presence of cobalt in the ZnO lattice induces a decrease in the recombination rate, thus enhancing the V OC of the devices and the solar cell performance. This effect is reversed when the cobalt doping increases beyond 10%, which might be due to an increase in the deep-level trap density and higher conduction band offsets caused by excessive cobalt doping. From the complex impedance spectrum, the effective carrier lifetime can be estimated, since the peak of the large semicircle in the Cole-Cole plot corresponds to a frequency whose reciprocal is the effective carrier lifetime. 82,83 The carrier lifetimes calculated from the peak of the large semicircle in the Cole-Cole plot and from the RC time constant (τ = R rec C m ) of the equivalent circuit fit agree to within 4% (Table 2). For the undoped device, the effective carrier lifetime is determined to be 44.2 ms, and the corresponding values for the doped samples (5-20%) are 51.6, 60.2, 37.9, and 32.5 ms, respectively. A longer carrier lifetime indicates that photogenerated carriers have more time to move toward, and ultimately be collected by, the electrode without suffering recombination. 82 We can say that the device with the 10% cobalt-doped sample undergoes less recombination since it has a long carrier lifetime of 60.2 ms.
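The two lifetime estimates compared above amount to a few lines of arithmetic. In the sketch below, R rec takes the 10% cobalt-doped value from Table 2, while C m is a hypothetical chemical capacitance chosen so that R rec C m reproduces the quoted ~60.2 ms; it is not a fitted value.

```python
import math

def lifetime_from_peak(f_peak_hz):
    """Effective lifetime from the apex of the low-frequency semicircle:
    at the apex omega * tau = 1, so tau = 1 / (2*pi*f_peak)."""
    return 1.0 / (2.0 * math.pi * f_peak_hz)

def lifetime_from_rc(r_rec_ohm, c_m_farad):
    """Equivalent-circuit estimate: tau = R_rec * C_m."""
    return r_rec_ohm * c_m_farad

# R_rec from Table 2 for the 10% device; C_m is an assumed value.
tau_rc = lifetime_from_rc(7973.0, 7.55e-6)      # ~60.2 ms
f_peak = 1.0 / (2.0 * math.pi * tau_rc)         # apex frequency of the arc, ~2.6 Hz
tau_peak = lifetime_from_peak(f_peak)           # recovers tau_rc by construction
```

With consistent fit parameters the two routes agree exactly; the ~4% spread quoted in the text reflects the fact that the apex frequency is read off the measured arc rather than the fitted circuit.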
This is consistent with the larger recombination resistance of the device (the diameter of the large semicircle), which indicates less carrier recombination inside the device. The longer carrier lifetime and larger recombination resistance should be induced by the reduction of defects as well as the improved band alignment between the ZnO and CuO layers by cobalt doping. To investigate the recombination phenomena further, we have characterized the capacitive spectra. Mott-Schottky analysis is commonly used to determine a semiconductor's doping density and the built-in potential. 84 The depletion width and the capacitance-voltage behavior are then expressed according to the Mott-Schottky equation as

1/C 2 = 2(V bi − V)/(q ε ε 0 A 2 N a ), with w = ε ε 0 A/C (4)

where C represents the depletion capacitance, V bi is the built-in potential, V is the applied bias, A is the device area (0.16 cm 2 ), ε is the active layer's dielectric constant (a dielectric constant of 18.1 was assumed for CuO 85 ), ε 0 is the permittivity of free space, q is the electronic charge, w is the depletion width, and N a is the background doping concentration. The built-in potential and doping density are then found by fitting eqn (4) to the linear portion of the C −2 versus bias voltage plot, where the slope gives the background doping density and the extrapolated intersection with the voltage axis gives the built-in potential (Fig. 14). 84 A modulating frequency of 1 kHz is used for the C-V measurements because, at high frequency, not all of the defect states can respond to the modulating signal, and hence they no longer contribute to the junction capacitance. 86 Therefore, low-frequency measurements provide a more reliable description of the carrier dynamics in the system, despite the role of traps and defects. The effect of cobalt doping on the built-in potential, depletion width, and background doping concentration are shown in .
In the case of the undoped sample, a narrow depletion width of 174 nm, a built-in potential of 0.514 V, and a background doping concentration of 3.41 × 10 16 cm −3 were observed. This value of the background concentration in the ZnO/CuO heterojunction is comparable to the result (3 × 10 16 cm −3 ) obtained by Chabane et al. 11 This shows that the high background doping concentration in our sample is consistent with the small depletion width, which will increase the probability of recombination. As the cobalt concentration is increased to 10%, the corresponding depletion width and built-in potential become 227 nm and 0.611 V, respectively, and the background doping concentration reduces to 2.37 × 10 16 cm −3 . This result indicates that the solar cell is still not fully depleted, which could be the reason for the low device performance arising from poor carrier collection. Nevertheless, this study demonstrates the effectiveness of cobalt doping in enhancing the depletion width and built-in potential. This improved performance can be seen in the EQE and J-V plots due to the broader depletion width provided by the low background doping density. Here, the net effect of these processes is also directly observed in the built-in voltage. The high initial background doping level evident in the solar cells is due to defects forming under the low-temperature growth conditions, because many defects (oxygen vacancies or interstitials) exist in nanostructured films grown by a chemical bath deposition process. Through cobalt doping, strong and stable Co-O complexes are formed, thereby reducing oxygen-related defects (as seen in the O 1s spectra in the XPS analysis); this results in the selective passivation of deep-level defects up to a certain doping level (in our case 10%). 53 A cross-over in the forward-bias (V > 0.5 V) region is observed for the device without any MoO 3 buffer layer.
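The Mott-Schottky extraction just described (slope of C −2 vs V gives N a , the voltage-axis intercept gives V bi , and w follows) is a straight-line fit. A minimal sketch under the same assumptions (A = 0.16 cm 2 , ε = 18.1), run here on synthetic C −2 data generated from the undoped-device values:

```python
import numpy as np

Q = 1.602e-19      # elementary charge [C]
EPS0 = 8.854e-12   # vacuum permittivity [F/m]
EPS_R = 18.1       # CuO dielectric constant assumed in the text
AREA = 0.16e-4     # device area, 0.16 cm^2 in m^2

def mott_schottky_fit(v_bias, inv_c_squared):
    """Fit the linear region of C^-2 vs V; return (N_a [m^-3], V_bi [V])."""
    slope, intercept = np.polyfit(v_bias, inv_c_squared, 1)
    n_a = 2.0 / (Q * EPS_R * EPS0 * AREA**2 * abs(slope))
    v_bi = -intercept / slope          # extrapolated voltage-axis intercept
    return n_a, v_bi

def depletion_width(n_a, v_bi, v=0.0):
    """Zero-bias depletion width: w = sqrt(2*eps*eps0*(V_bi - V) / (q*N_a))."""
    return np.sqrt(2.0 * EPS_R * EPS0 * (v_bi - v) / (Q * n_a))

# Synthetic data from the undoped-device numbers (N_a = 3.41e16 cm^-3, V_bi = 0.514 V).
n_true, vbi_true = 3.41e22, 0.514
v = np.linspace(-0.5, 0.3, 25)
inv_c2 = 2.0 * (vbi_true - v) / (Q * EPS_R * EPS0 * AREA**2 * n_true)
n_fit, vbi_fit = mott_schottky_fit(v, inv_c2)
w = depletion_width(n_fit, vbi_fit)   # ~174 nm, matching the quoted value
```

The recovered w of ~174 nm reproduces the quoted undoped depletion width, confirming that the three quoted numbers are mutually consistent.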
This type of cross-over is an indication of a Schottky barrier at the CuO/Au contact in opposition to the junction formed at the ZnO/CuO interface. 87 Such a Schottky barrier would obstruct the extraction of holes from the CuO layer, limiting both the dark current and the photocurrent and reducing the open-circuit voltage, 38 as is observed in Fig. 15(a). The incorporation of MoO 3 between the CuO and Au electrode reduces this cross-over effect, concurrently increasing all the parameters. It is to be noted that the PCE first increases and then decreases with increasing MoO 3 thickness, achieving its highest value of 2.11% when the MoO 3 thickness equals 20 nm (Table 3). A further increase in the MoO 3 thickness deteriorates the performance of the solar cells. It is evident that the thickness of the MoO 3 buffer layer can significantly affect the solar cell parameters J SC , V OC , and FF and thus the power conversion efficiency. The increase of J SC and FF might be attributed to smaller series resistances and lower recombination rates as the MoO 3 thickness increases from 0 to 20 nm, while the decrease of J SC and FF as the MoO 3 thickness further increases to 30 nm and 40 nm should be due to larger series resistances and higher recombination rates induced by thick MoO 3 layers. It is worth pointing out that V OC keeps increasing as the MoO 3 thickness is varied from 0 to 20 nm (Table 3). If the MoO 3 thickness increases beyond 20 nm, the V OC decreases gradually due to electron-hole recombination induced by charge accumulation at the interface between the MoO 3 and CuO layers. 88 This hypothesis is verified by our impedance analysis results (Table S4 †), where we observed that the charge transfer resistance decreases and the recombination resistance slightly increases with a 20 nm thick MoO 3 layer.
Furthermore, a decrease in the charge transfer capacitance is observed, which again confirms the reduced charge accumulation and increased charge extraction at the optimized MoO 3 thickness.

Effect of MoO 3 buffer layer

The effect of the MoO 3 buffer layer is also evident in the dark J-V characteristics (Fig. 15(b)). A reduction in the diode ideality factor to 2.50 is observed with a 20 nm thick MoO 3 layer, which means electron-hole recombination is reduced due to better ohmic contact. Besides, the smaller ideality factor (2.50) achieved for the MoO 3 -based devices also proves the better hole-selectivity, which results in decreased charge recombination loss at the CuO/Au interface. As the MoO 3 thickness increases further from 20 nm to 40 nm, the ideality factor rises from 2.50 to 3.09, attributed to charge accumulation, which leads to enhanced recombination. The optimum efficiency is obtained for the device with a 20 nm MoO 3 layer and the minimum ideality factor (2.50), indicating that carrier recombination is decreased to a minimum. Therefore, the variation of the circuit parameters could be attributed to charge accumulation at the CuO/Au Schottky-type contact or the CuO/thick-MoO 3 interface. The lower J 0 (9.39 × 10 −5 mA cm −2 ) of the 20 nm thick MoO 3 device compared to that of the device without MoO 3 affirms that reduced recombination is achieved in the former. This indicates an increased hole extraction efficiency due to the decrease in the height of the Schottky barrier. 38 The dark J-V characteristics show that the MoO 3 -based device possesses better diode behavior, with a lower leakage current density and a higher rectification ratio than the device without MoO 3 . This implies that the MoO 3 -based PV cells maintain a smaller reverse saturation current (J 0 ) and ideality factor (n), which are the significant parameters contributing to the suppression of electron-hole recombination.
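The ideality factor and J 0 quoted above follow from the slope and intercept of ln(J) vs V in the exponential region of the dark J-V curve, since J ≈ J 0 exp(qV/nkT) there. A minimal sketch, run on synthetic data generated from the quoted n = 2.50 and J 0 = 9.39 × 10 −5 mA cm −2 (the voltage window and temperature are assumptions):

```python
import numpy as np

K_B = 1.381e-23   # Boltzmann constant [J/K]
Q_E = 1.602e-19   # elementary charge [C]

def diode_fit(v, j_dark, t_kelvin=300.0):
    """Fit ln(J) vs V in the exponential region of the dark J-V curve.
    Returns the ideality factor n = q / (kT * slope) and J0 = exp(intercept)."""
    slope, intercept = np.polyfit(v, np.log(j_dark), 1)
    n = Q_E / (K_B * t_kelvin * slope)
    return n, np.exp(intercept)

# Synthetic exponential region built from the quoted values (J in mA/cm^2).
n_true, j0_true = 2.50, 9.39e-5
v = np.linspace(0.30, 0.60, 40)
j = j0_true * np.exp(Q_E * v / (n_true * K_B * 300.0))
n_fit, j0_fit = diode_fit(v, j)
```

In practice the fit window must exclude the series-resistance-dominated high-bias region and the shunt-dominated low-bias region, which is why only the exponential portion of Fig. 15(b) is used.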
When the MoO 3 buffer layer is thicker than 20 nm, charge extraction is hindered; the photo-generated charges recombine in the buffer layer and degrade the solar cell performance. This type of effect has also been discussed in reports of enhanced hole injection into organic hole-transport layers upon the insertion of a MoO 3 buffer layer. 89 Previous reports on the use of a MoO 3 layer in solar cell devices have attributed the increase in power conversion efficiency to a reduction in series resistance resulting from improved hole extraction from the p-type layer through the high work function of MoO 3 , 90,91 while others have ascribed it to a decrease in leakage current and a simultaneous increase in shunt resistance, identifying electron-blocking character as the vital contribution of MoO 3 . 92,93 In our present study, both effects are noticed; however, the enhancement in hole extraction due to the reduction of the back-contact Schottky barrier at the CuO/Au interface might play the dominant role. Fig. 16 shows the EQE spectra of the heterojunction PV cells with the structure FTO/ZnO/CuO/MoO 3 /Au. Fig. 16 also shows that the EQE was enhanced by the insertion of the MoO 3 layer, particularly for λ > 550 nm. In this study, the cell with a 20 nm thick MoO 3 layer revealed the highest EQE, with a maximum of 44% at 510 nm wavelength. The peak EQE with the MoO 3 layer is improved to 44% from 43% without the MoO 3 buffer layer, indicating that the charge collection efficiency is enhanced. The improvement in EQE, most significant for λ > 550 nm, can be attributed to better charge collection following the removal of the back Schottky barrier at the CuO/Au interface.
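The link between an EQE curve and the short-circuit current is the spectral integral J SC = q ∫ EQE(λ) Φ(λ) dλ over the incident photon flux Φ. A minimal sketch with a hypothetical flat 44% EQE and a flat toy flux (not the AM1.5 spectrum, so the resulting current is illustrative only):

```python
import numpy as np

Q_E = 1.602e-19  # elementary charge [C]

def jsc_from_eqe(wavelength_nm, eqe, photon_flux):
    """J_sc = q * integral of EQE(lambda) * photon_flux(lambda) d(lambda),
    evaluated with the trapezoid rule. photon_flux is in
    photons m^-2 s^-1 nm^-1; the result is in A m^-2."""
    y = eqe * photon_flux
    return Q_E * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wavelength_nm))

# Toy numbers: flat 44% EQE between 300 and 800 nm, flat flux of
# 1e18 photons m^-2 s^-1 nm^-1 (assumed for the sketch).
wl = np.linspace(300.0, 800.0, 501)
jsc = jsc_from_eqe(wl, np.full_like(wl, 0.44), np.full_like(wl, 1e18))
```

Dividing the result in A m −2 by 10 converts it to mA cm −2 ; with a real AM1.5 photon flux the same integral gives the J SC reported in the J-V tables.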
Conclusion

We investigated band alignments in nanostructured ZnO/CuO heterojunction solar cells based on XPS measurements and observed an enhancement in power conversion efficiency by altering the conduction and valence band offsets with cobalt-doped ZnO. We found that doping cobalt into the ZnO layer results in Zn 1−x Co x O, which lowers the bandgap because of the shifting of the conduction band minimum (CBM) and valence band maximum (VBM). We also observed that when the cobalt doping level was below 10%, a cliff structure is formed at the interface with type-II band alignment. When the cobalt concentration was above 10%, a spike structure is formed with type-I band alignment. We were able to control the CBOs in the range from −0.21 to 0.48 eV with cobalt doping in the ZnO/CuO heterostructures. The photovoltaic device with 10% cobalt-doped ZnO exhibited the best performance, with a PCE of 1.87%. This increased efficiency originated from improved optical properties, proper band offsets, excellent charge extraction efficiency, and suppressed charge recombination at the interface between CuO and 10% cobalt-doped ZnO, as revealed by optical, electrical, and impedance spectroscopy measurements. Further, we employed a low-resistance contact to p-CuO using molybdenum oxide (MoO 3 ) as the back-contact buffer layer to enhance the PV cell performance. We observed that the thickness of the MoO 3 layer had a significant impact on the performance of the ZnO/CuO solar cells. We were able to increase the efficiency of the solar cell up to 2.11% with a MoO 3 buffer layer thickness of 20 nm. Our study demonstrates the importance of optimizing the band alignment to enhance the optical and electrical properties of the ZnO layer and thereby the overall performance of oxide-based photovoltaic cells.

Conflicts of interest

There are no conflicts of interest to declare.
Formation and Dynamics of Transequatorial Loops

To study the dynamical evolution of trans-equatorial loops (TELs) using imaging and spectroscopy. We have used the images recorded by the Atmospheric Imaging Assembly and the Helioseismic and Magnetic Imager on-board the Solar Dynamics Observatory and spectroscopic observations taken from the Extreme-Ultraviolet Imaging Spectrometer on-board Hinode. The data from the AIA 193 Å channel show that TELs formed between AR 12230 and a newly emerging AR 12234 and evolved during December 10-14, 2014. The xt-plots for December 12, 2014, obtained using AIA 193 Å data, reveal signatures of inflow toward and outflow away from an X-region. High-cadence AIA images also show recurrent intensity enhancements in close proximity to the X-region (P2), which is observed to have higher intensities for spectral lines formed at log T[K] = 6.20 and voids at higher temperatures. The electron densities and temperatures in the X-region (and P2) are maintained steadily at log N_e = 8.5-8.7 cm^-3 and log T[K] = 6.20, respectively. Doppler velocities in the X-region show predominant redshifts of about 5-8 km/s when closer to the disk centre but blueshifts (along with some zero-velocity pixels) when away from the centre. The full-width-at-half-maximum (FWHM) maps reveal non-thermal velocities of about 27-30 km/s for the Fe XII, Fe XIII and Fe XV lines. However, the brightest pixels have non-thermal velocities of about 62 km/s for the Fe XII and Fe XIII lines. In contrast, the dark X-region in the Fe XV line has the highest non-thermal velocity (roughly 115 km/s). We conclude that the TELs are formed due to magnetic reconnection. We further note that the TELs themselves undergo magnetic reconnection, leading to the reformation of loops of the individual ARs. Moreover, this study, for the first time, provides measurements of plasma parameters in X-regions, thereby providing essential constraints for theoretical studies.
Introduction

It is well established that the solar corona is full of loop structures (see, e.g., Reale 2014, for a review) as well as diffuse emission (Del Zanna & Mason 2003; Tripathi et al. 2009; Viall & Klimchuk 2011; Subramanian et al. 2014). Among these are the trans-equatorial loops (TELs), which join ARs across the equator of the Sun. The theoretical existence of such loop structures was first suggested by Babcock (1961) as a consequence of the solar dynamo. However, such loops were first reported over a decade later, in 1974, using observations recorded by the X-Ray Telescope (XRT) on-board Skylab (Vaiana et al. 1974). Based on subsequent observations of the ARs associated with McMath plage numbers 12472 and 12474, Chase et al. (1976) and Svestka et al. (1977) suggested that the TELs were formed due to magnetic reconnection between two ARs. Using observations recorded by the Soft X-ray Telescope (SXT; Tsuneta et al. 1991) on-board Yohkoh, Tsuneta (1996) reported the formation of an X-type topology in TELs connecting two ARs in the two hemispheres. This has been followed by a number of studies (Fárník et al. 1999; Pevtsov 2000; Fárník et al. 2001; Fárník & Švestka 2002; Crooker et al. 2002; Pevtsov 2004; Chen et al. 2006, 2007; Wang et al. 2007; Shimojo et al. 2007; Yokoyama & Masuda 2009) reporting the formation, characteristics and evolution of TELs, using observations taken from SXT as well as the X-Ray Telescope (XRT; Golub et al. 2007) on-board Hinode (Kosugi et al. 2007). Early spectroscopic diagnostics of TELs were reported by Harra et al. (2003) and Brosius (2006) using observations recorded by the Coronal Diagnostic Spectrometer (CDS; Harrison et al. 1995) on-board the Solar and Heliospheric Observatory (SOHO; Domingo et al. 1995). However, we emphasize that these spectroscopic measurements were performed in and along off-limb TELs only.
X-shaped regions (cusp regions) are believed to be signatures of magnetic reconnection (see, e.g., Tsuneta et al. 1992; Forbes & Acton 1996; Tripathi et al. 2006, 2007). Therefore, measurement of plasma parameters in the X-shaped structure formed between the TELs may help us probe the physical properties of reconnection regions. To the best of our knowledge, there are no such measurements to date for TELs. In this paper, we perform a detailed study of a complete sequence of the formation and evolution of a set of TELs using observations taken from the Atmospheric Imaging Assembly (AIA; Lemen et al. 2012) and the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012a,b) on-board the Solar Dynamics Observatory (SDO; Boerner et al. 2012; Pesnell et al. 2012). This is accompanied by spectroscopic observations recorded with the Extreme-ultraviolet Imaging Spectrometer (EIS; Culhane et al. 2007) on-board Hinode. The rest of the paper is structured as follows. In §2, we discuss the observations used for this study. Data analysis and results are presented in §3. Finally, we summarize and discuss the results in §4.

Observations and data

In this work, we have primarily used the AIA and EIS observations. AIA provides near-simultaneous full-disk observations of the solar atmosphere in 7 EUV channels sensitive to different temperatures (O'Dwyer et al. 2010; Del Zanna et al. 2011; Boerner et al. 2012) with an approximate time cadence of 12 s and a pixel size of 0.6″. EIS provides spectroscopic observations in two wavelength bands, CCD A (170-210 Å) and CCD B (250-290 Å), using four slit widths of 1″, 2″, 4″ and 250″. We have used the AIA observations to study the long-term evolution of the TELs and the EIS observations to measure physical plasma parameters such as electron density, temperature, Doppler velocity and non-thermal width. A portion of the solar disk observed in the AIA 193 Å channel on December 12, 2014 at 19:35:30 UT is shown in panel A of Fig. 1. The over-plotted green box indicates the region of interest (ROI), which is displayed in panel B and considered for further study. The over-plotted white box in both panels A & B shows the EIS raster field-of-view (FOV). In panel B, we overplot two white dashed lines crossing each other to highlight the X-shaped structure. Fortuitously, the top part of the EIS raster FOV covers this X-shaped region. Also, the blue (positive) and black (negative) contours represent the line-of-sight (LOS) magnetic flux density of 500 G as observed at a near-simultaneous time (19:35:40 UT) by HMI. The same HMI image is shown in panel (C), where we clearly see two bipolar active regions (AR 12230 and AR 12234) on either side of the solar equator. We note that EIS rastered this region four times on December 12th, 2014, using the 2″ slit: the first raster during 10:16:00-11:17:00 UT and the remaining three between 19:00 and 21:00 UT. We denote these rasters as 'E1', 'E2', 'E3' and 'E4', respectively, where E2 is the closest to the disk centre. All these rasters are 60-step rasters that covered a FOV of 120″ × 512″. We have processed all the AIA and EIS observations with the standard software provided in Solarsoft (Freeland & Handy 1998).

Table 1. EIS spectral lines used for studying the plasma parameters at the reconnection region, as well as the adjacent loops, where λ 0 is the rest wavelength (Brown et al. 2008). The peak formation temperatures are taken from CHIANTI (Dere et al. 1996; Del Zanna et al. 2015) at one particular density. [Table columns: ion name, λ 0 (Å), log T peak .]

The EIS rasters included several spectral lines spread over a broad range of temperatures (log [T/K] = 5.80-6.35). Table 1 lists the spectral lines along with their laboratory wavelengths taken from Brown et al. (2008) and peak formation temperatures derived from CHIANTI (Dere et al. 1996; Del Zanna et al. 2015). The reference wavelengths for Doppler velocities are derived using the relatively cooler lines, Fe viii and Si vii, for CCD A and CCD B, respectively.
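Once a reference wavelength is fixed, the Doppler and non-thermal velocities are simple transformations of the fitted line centroid and width. A sketch for the Fe XII 195.119 Å line follows; the instrumental FWHM and the thermal/non-thermal speeds used in the round trip are assumed values for illustration, not the calibrated EIS numbers.

```python
import math

C_KMS = 2.998e5                 # speed of light [km/s]
FOUR_LN2 = 4.0 * math.log(2.0)  # Gaussian FWHM^2 = 4 ln2 * (width parameter)^2

def doppler_velocity(lambda_obs, lambda_rest):
    """LOS Doppler velocity; positive = redshift (motion away from observer)."""
    return C_KMS * (lambda_obs - lambda_rest) / lambda_rest

def nonthermal_velocity(fwhm_obs, fwhm_inst, lambda_rest, v_thermal):
    """Remove instrumental and thermal broadening (all widths in Angstrom):
    FWHM_obs^2 = FWHM_inst^2 + 4 ln2 (lambda/c)^2 (v_th^2 + v_nt^2)."""
    excess = (fwhm_obs**2 - fwhm_inst**2) / (FOUR_LN2 * (lambda_rest / C_KMS) ** 2)
    return math.sqrt(excess - v_thermal**2)

# Fe XII 195.119 A: a +5 km/s redshift is a centroid shift of only ~3.3 mA.
lam0 = 195.119
shift = 5.0 / C_KMS * lam0
v = doppler_velocity(lam0 + shift, lam0)

# Round trip for the width decomposition with assumed numbers:
fwhm_inst = 0.060                 # assumed instrumental FWHM [A]
v_th, v_nt_in = 22.0, 30.0        # illustrative thermal / non-thermal speeds [km/s]
fwhm_obs = math.sqrt(fwhm_inst**2
                     + FOUR_LN2 * (lam0 / C_KMS) ** 2 * (v_th**2 + v_nt_in**2))
v_nt = nonthermal_velocity(fwhm_obs, fwhm_inst, lam0, v_th)
```

The ~3.3 mÅ shift for 5 km/s illustrates why a careful reference-wavelength determination and the orbital-drift correction are essential before the velocity maps can be trusted.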
For this purpose, we have used the averaged spectra over all pixels belonging to rows 50-149 (from the bottom) of the EIS raster (highlighted by the white box in panels A and B of Fig. 1), because these represent the quiet Sun better. We have ignored the bottom rows 0-49 due to missing data. We have used the eis_auto_fit.pro package 1 , which rectifies the EIS spectral data by removing the errors due to orbital drift and slit tilt. An uncertainty of ∼5 km s −1 is expected on EIS velocity estimates 2 . We note that there are four line pairs, viz., Fe xi, Fe xii, Fe xiii and Fe xiv (labelled with a, b in Table 1), which are suitable for estimating the electron densities at four different temperatures (see, e.g., Young et al. 2007; Tripathi et al. 2010).

The other active region, AR 12234, located in the northern hemisphere, was first detected on 10th December 2014. To show the evolution of the structures better, note that we have differentially rotated all the images displayed here to 19:35:30 UT on 12th December 2014, which corresponds to the centre time of the EIS raster E2. Fig. 2 demonstrates that, as early as 10th December, the two ARs have individual loop structures along with a slight hint of faint TELs connecting the two ARs. These TELs become prominent on 12th December (panel C), along with an X-shaped structure that is observed between x = [140″, 260″] and y = [-80″, -20″] (see panels D, E and F). The X-shaped structure persists through 12th December. By 13th December, the ARs drift across the disk towards the western limb, with gradual disorientation of the loops. The very well-defined morphology of the TELs is no longer visible on or after 14th December.

Formation and Evolution of TELs

An animation (tel12dec.gif), created from AIA 193 Å channel images taken at a cadence of 30 minutes on 12th December, shows a continuous interaction amongst the loops belonging to the individual ARs as well as the TELs.
On the one hand, we identify epochs when loops belonging to the individual ARs interact and form TELs (e.g., refer to the frames at 13:00 hours in the animation). On the other hand, we also identify interactions among the TELs leading to the reformation of loops belonging to their parent ARs. To highlight this, we show in panel A of Fig. 3 loops belonging to the individual ARs (marked as 'I'), along with some TELs on the left of the FOV (marked as 'T'). In panel B, we see the 'I' loops coming close enough to merge and form new TELs on the right of the X-region. Similarly, we see these TELs interacting amongst themselves in panel C. Finally, in panel D, we see that such TELs have been annihilated to form more loops belonging to the two ARs.

Loop dynamics

To further understand the dynamics of the loops, we created xt-plots along two artificial 10-pixel wide slits named 'V' and 'H' (see panel A in Fig. 4). For this purpose, we have used the data taken on December 12, 2014, with AIA 193 Å at a cadence of 1 minute. Panels B and C of Fig. 4 show the xt-plots corresponding to the slits V and H, respectively. The horizontal black dashed lines trace the locus of the point P labelled in panel A, whereas the solid vertical black lines locate the four EIS raster phases, marked as E1, E2, E3 and E4. The xt-plot corresponding to slit V (panel B) demonstrates the inward movement of the structures from the very beginning, and they appear to merge soon after the raster E1, approximately between 12:00 and 13:00 UT. Such inward motion in the xt-plot along slit V suggests that loops belonging to the individual active regions are evolving and moving closer to each other. During the same interval, the xt-plot corresponding to slit H (panel C) reveals hints of outward motion, though not as systematic as the inward movement. The outward movement in the xt-plot for slit H suggests that the structures are moving apart.
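Plane-of-sky speeds in such xt-plots follow from the slopes of the slanted tracks, converted from arcsec per unit time to km/s; a minimal sketch, assuming the standard ~725 km per arcsec projected scale near disk centre (the drift distances and durations below are illustrative, not measured values):

```python
KM_PER_ARCSEC = 725.0  # approximate projected scale near solar disk centre

def track_speed_kms(delta_x_arcsec, delta_t_min):
    """Plane-of-sky speed of a slanted track in an xt-plot:
    spatial drift [arcsec] over elapsed time [minutes], returned in km/s."""
    return delta_x_arcsec * KM_PER_ARCSEC / (delta_t_min * 60.0)

# A track drifting ~30 arcsec over a ~6 h phase corresponds to ~1 km/s,
# the order of the inflow speeds read off slit V.
v_slow = track_speed_kms(30.0, 6 * 60)
v_fast = track_speed_kms(90.0, 6 * 60)   # ~3 km/s for a steeper track
```

This conversion is what turns the by-eye slopes of the dash-dotted lines into the inflow and outflow speeds quoted in the text.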
Combining the two xt-plots, we infer that loops belonging to the individual active regions are evolving: their tips are coming closer to each other, leading to magnetic reconnection and the formation of the TELs. We have highlighted the inward and outward motions with blue slanted (dash-dotted) lines in panels B and C. After 13:00 UT, the motions observed in the two xt-plots reverse direction until about 19:00 UT. The reversal of the movements suggests that during this period the TELs are evolving and moving towards each other. During this time, the TELs reconnect and lead to the reformation of the loops belonging to the individual active regions. Beyond 19:00 UT on December 12, 2014, the motion is similar to that observed during the first phase. We have used the over-plotted blue dash-dotted lines in panel B (estimated by eye) to obtain the inflow speeds during the first and third phases. The inflow speeds are 1 and 3 km s −1 , whereas the outflow speed for the second phase is 1.5 km s −1 . Due to the ambiguous patterns observed in the xt-plot shown in panel C, we did not measure speeds along slit H. Similar values of inflow and outflow speeds were also reported by Yokoyama et al. (2001) in an observation related to a flare.

Identification of the reconnection point with AIA images

As mentioned earlier, the X-region was rastered with EIS four times on 12th December. Since the raster takes time to scan through the FOV, we identify a region that was simultaneously observed by AIA and the EIS slits. For this purpose, we plot in Fig. 5 the AIA intensity maps in the 193 Å channel corresponding to 'E2' (panels A-F and H). However, the ROI plotted in these maps is between y = [-80″, 10″], so as to highlight the dynamics/features in the neighbourhood of the X-region.
We identify intermittent brightenings occurring between x = [217″, 222″] and y = [-34″, -29″], henceforth identified as 'P2' (marked with the yellow box). Coincidentally, this region is also covered by four EIS raster exposures (highlighted by the white vertical lines in panel C). In panel G (just beneath panel C), we plot the average light curve in this box at a cadence of 12 seconds between 18:55:07-20:14:55 UT. The bold black vertical lines represent the four exposures of the E2 raster, each 2″ wide, corresponding to the white lines marked in panel C. In addition, the exact duration of E2 is marked by the dashed black vertical lines in panel G. From the light curve, we see that three intensity peaks occur in quick succession at P2. This indicates that dynamic events are occurring, which are also captured by the simultaneous EIS raster observations. This gives us the opportunity to measure the plasma parameters in the P2 region. Similarly, for the E4 observations, there is a partial overlap of the EIS exposures with the region showing repeated intensity enhancements. However, no such spatial overlap of EIS exposures with the recurrent intensity enhancements is observed for E1 and E3.

Plasma Diagnostics with EIS

As stated earlier and shown in Fig. 1, the EIS spectrometer observed the X-shaped region four times on 12th December 2014 in raster mode. These are denoted as E1, E2, E3 and E4. We plot the EIS intensity maps obtained in different ionisation states of iron, from Fe viii to Fe xv, in Figs. 6 & 7 (as labelled). In these maps, we emphasize the region between y = [-120″, 10″], so as to show in detail the morphology of the loops near the X-region as well as the loops beneath and above it. The four columns in these two figures correspond to the four different EIS rasters. We have differentially rotated the FOVs of E2, E3 and E4 to the raster start time of E1, to facilitate the study of the time evolution of the same region.
The noticeable rightmost black stripes in E2, E3 and E4 are due to this rotation. We locate the X-shaped region in all the intensity maps with a blue box that encloses the point 'P' shown in panel A of Fig. 4. The intensity maps obtained at cooler temperatures, such as in Fe viii and Fe x, show a number of distinct loops criss-crossing each other (top and middle rows of Fig. 6). Similar structures are also seen in the intensity images obtained in Fe xi (bottom row of Fig. 6) and Fe xii (top row of Fig. 7), but with a significant amount of diffuse emission, similar to that observed by, e.g., Tripathi et al. (2009) and Subramanian et al. (2014). In the intensity images obtained at higher temperatures (exceeding log T [K] = 6.20), we notice a decrease in intensity in the X-shaped region. The region appears fully evacuated in the intensity maps of Fe xiv and Fe xv for all three rasters except E1 (bottom row of Fig. 7). It is worth noting that there is an abundance of bright loops in the Fe xv intensity maps beneath the X-region. To illustrate this better, we obtain light curves in five spectral lines of iron for all four EIS rasters and plot them in Fig. 8, as labelled. The top panels are the intensity maps in Fe xv and the bottom panels are the light curves. We obtain the light curves by averaging the intensities within the two horizontal white lines overplotted in the top panels. We have drawn the vertical blue and black lines in the top and bottom panels, respectively, to highlight the extent (in the x-direction) of the X-shaped region and the corresponding extent in the respective light curves. The light curves (shown in the bottom panels of Fig. 8) reveal that the X-shaped region is dimmer in all the spectral lines in all rasters except E1, where Fe xv is brighter than the other lines; even there, Fe xv shows a decrease in intensity where Fe x shows an enhancement.
A similar behaviour of the light curves is noted for Fe xiv (not shown here), albeit it is a weak line and has minor fluctuations in intensity as a function of position.

Density and Temperature diagnostics

Plasma diagnostics are conducted in several regions within the EIS raster FOV to discern the properties of the different types of loop structures captured therein. The average electron densities and temperatures have been estimated in the X-region as well as in the adjacent loops for all four rasters. These are denoted in Fig. 9 by 'P' (X-region), 'L' (TELs) and 'D' (hot loops beneath the X-region) on the Fe xii 195 Å intensity map of E2, rotated with respect to E1. In addition, for E2 we have a fourth region of interest, P2, which shows intermittent intensity enhancements, as shown in panel G of Fig. 5.

3.4.1.1. Assessment of background/foreground

An estimate of the background/foreground emission plays an important role in measuring plasma electron densities and temperatures (Del Zanna & Mason 2003; Tripathi et al. 2011). For this, we identify a region marked as 'B' in Fig. 9. The average intensity in 'B' is taken to be the background/foreground intensity. The green box in Fig. 9 is the zoomed-in ROI and corresponds to the ROI shown in Figs. 6 and 7. The averaged intensities for the locations P, P2, L, D and B are noted in Table 2. We note that the averaged intensities in Mg vii and Si vii in the 'B' region are larger than those at the locations considered for the estimates. A similar discrepancy is noted for the Fe xiv line. We attribute this to these being weak spectral lines and therefore discard these three lines from any plasma diagnostic studies. In addition, we note that for several locations in different rasters, the average intensities in 'B' exceed those in the other three regions. This renders most of the low-temperature lines, viz. Fe viii, Fe ix, Fe x and Fe xi, unsuitable for the EM-loci computation.
That leaves only a set of four lines: Fe xii 192 Å and 195 Å, Fe xiii and Fe xv. Therefore, we neglect the background/foreground correction for the EM-loci plots for all three (four) regions, 'P', 'L', and 'D' (and 'P2' in the case of E2).
Density diagnostics in the X-region
Using the average intensities in 'B' in Fig. 9 as the background/foreground values, we derive the density maps for the Fe xii and Fe xiii line pairs (refer to Fig. 10) for all four raster periods. Note that these maps show background-subtracted densities. The four columns represent the four different rasters as labelled. The overplotted white box in each map highlights the X-shaped structure (same location as the blue boxes in Figs. 6 & 7). It is further noted that these boxes enclose the point 'P' shown in panel A of Fig. 4. We do not derive density maps using Fe xi and Fe xiv, since these are noisy: one of the Fe xi lines, 180.401 Å, falls at the edge of CCD A, where the effective aperture area is small, and the Fe xiv lines are weak and have small signal-to-noise ratios. The average densities within the X-shaped regions (enclosed by white boxes) for all four rasters using the Fe xii lines (similar for Fe xiii) range between log N_e = 8.46 and 8.67 [cm−3] (also see Table 3). To check the consistency of our results, we compared these densities with those obtained from the averaged spectra within the white box before taking the ratios. The density estimates were similar to those listed in Table 3. In addition, the density at the identified brightening observed by the EIS slits, i.e., P2, is log N_e = 8.50 in Fe xii, which is consistent with the average values in the X-region.
Temperature diagnostics in the X-region
We estimate the temperature of the X-shaped region (shown by blue boxes in Figs. 6 & 7) using the EM-loci technique. For this purpose, we first obtain the averaged intensities in the region in all the spectral lines marked with 't' in Table 1.
However, we exclude Si vii, for reasons explained in §3.4.1.1. To compute the EM we have used the coronal abundances of Schmelz et al. (2012) and the standard ionization equilibrium given by CHIANTI (Dere et al. 1996; Del Zanna et al. 2015). The obtained EM-loci curves for all four rasters in the X-region are shown in Fig. 11. We have also plotted histograms of the number of crossings of the EM curves within a temperature bin of log T [K] = 0.1 in the respective panels. In all four plots, the left y-axis denotes the EM values, whereas the right y-axis represents the number of crossings in each temperature bin. The EM-loci curves along with the histograms suggest that the plasma within the X-shaped structure (enclosing the point P) during E2 is nearly isothermal, with a temperature within log T [K] = 6.10-6.20. In contrast, for E3 we note that the plasma is more multi-thermal in nature, with the histogram peaking between log T [K] = 6.10-6.30. The average temperatures in the X-region for all four rasters are noted in Table 3. For the P2 region, the EM-loci curves are shown in Fig. 12. They reveal that the peak formation temperature ranges between log T [K] = 6.20-6.30 (refer to Table 3), which is somewhat larger than that noted in the X-region.
Plasma density and temperature diagnostics in the adjacent loops
It is imperative to compare the electron densities and temperatures in the X-region (which contains the point P) with those in the adjacent loops. Therefore, in Fig. 9 we have identified two additional regions: one on the TELs (shown in blue and identified as 'L') and the other on the loops belonging to the active region in the southern hemisphere, beneath the X-region (shown in black, identified as 'D'). The background is the Fe xii 195 Å intensity map corresponding to E2. The average densities obtained at L are, in most cases, higher than those obtained at P. The former, in turn, are always lower than those in D (see Table 3).
We note that the densities in the TELs obtained here using the Fe xii lines are about an order of magnitude lower than those obtained by Liu et al. (2011) using the line ratios of Si x observed by CDS. In Fig. 13, we also plot the EM-loci curves obtained for locations L and D for the E2 raster period. The plots show that the plasma at location D is more multi-thermal than at location L. We further note that the temperatures for the TELs and for the loops corresponding to the active region in the southern hemisphere are somewhat higher (and more multi-thermal) than those obtained at P, i.e., within the X-shaped region (see Table 3). The temperatures obtained for locations L and D are in agreement with those obtained for TELs by Sheeley et al. (1975); Delannée & Aulanier (1999); Glover et al. (2003); Pevtsov (2004); Balasubramaniam et al. (2005). This temperature range is maintained throughout the four rasters.
Doppler velocities and Spectral Line Width
We obtain the Doppler velocity and line-width maps corresponding to the X-region for all four rasters in Fe xii, Fe xiii and Fe xv. The maps are displayed in Figs. 14 & 15. The four columns represent the four EIS rasters. The over-plotted white (black) boxes in the Doppler (line-width) maps correspond to the blue boxes in Figs. 6 & 7, enclosing the X-shaped structure. The Doppler velocity maps for Fe xii and Fe xiii are very similar. During the second raster, E2, when the region is closest to the disk center, the X-shaped region shows dominant redshifts of ∼5-8 km s−1. On the contrary, during rasters E1 and E3, the same region shows predominant blueshifts of ∼10 km s−1. For the raster E4, when the active region is farthest from the disk center, the LOS velocities are predominantly upflows (∼5-8 km s−1). At the higher temperature mapped by the Fe xv line, a significant fraction of the pixels is observed to have blueshifts (E1), redshifts (E2, E3) and velocities very close to zero (E4). The average Doppler velocities in the box region are noted in Table 3.
Taking into account an uncertainty of ∼5 km s−1 in EIS velocity measurements (see, e.g., Young et al. 2012), we note that the direction of the plasma flows along the LOS does not change. The line-width maps shown in Fig. 15 are essentially the Full Widths at Half Maximum (FWHM) obtained after subtracting the instrumental width (56 mÅ; Doschek et al. 2008). In all the maps, the loops at the bottom have relatively higher FWHMs, but are best identified in Fe xii. The FWHM obtained within the box is ∼0.03 Å for the Fe xii and Fe xiii lines for all four rasters (see Table 3), which is equivalent to a non-thermal velocity of ∼27 km s−1. To derive the non-thermal velocities, we have taken the temperature in the X-region to be around 1.6 MK (log T [K] = 6.20), as obtained using the EM-loci procedure in §3.4.1.3. However, there are a few individual pixels with FWHM as large as 0.068 Å, with an equivalent non-thermal velocity of ∼62 km s−1. The Fe xv line has an average FWHM of ∼0.048 Å (∼30 km s−1) in the X-shaped structure for all four rasters. There are pixels within the box where the FWHM is as large as 0.18 Å, which is equivalent to 115 km s−1 (assuming log T [K] = 6.2). The average Doppler velocity and non-thermal width in the P2 region are also noted in Table 3 and are seen to be fairly similar to those in the X-region (within the error limits). In addition, the other plasma parameters in P2 (averages) are similar to those in the X-region. Therefore, we conclude that the X-region well represents the reconnection region for all events.
Fig. 11. Emission Measure (EM) loci curves for the four EIS rasters for the X-region as shown in Fig. 10. Also plotted are the histograms representing the number of crossings within a given temperature bin. No background/foreground emission has been considered.
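The conversion from a residual Gaussian FWHM to an equivalent velocity quoted above follows the standard relation v = c ΔλFWHM / (λ √(4 ln 2)); a minimal sketch (using standard physical constants, not values taken from the paper's tables) reproduces the ∼27 km s−1 figure for 0.03 Å at Fe xii 195.12 Å, and also shows how subtracting the iron thermal width at 1.6 MK in quadrature would reduce the purely non-thermal component:

```python
import math

C = 299792.458                      # speed of light, km/s
K_B = 1.380649e-23                  # Boltzmann constant, J/K
M_FE = 55.845 * 1.66053906660e-27   # iron atomic mass, kg

def fwhm_to_velocity(fwhm_ang, lambda_ang):
    """Equivalent velocity of a Gaussian FWHM (instrumental width
    already removed): v = c * FWHM / (lambda * sqrt(4 ln 2))."""
    return C * fwhm_ang / (lambda_ang * math.sqrt(4.0 * math.log(2.0)))

def nonthermal_velocity(fwhm_ang, lambda_ang, t_kelvin, ion_mass=M_FE):
    """Subtract the thermal width (2kT/M) in quadrature to isolate
    the non-thermal component xi."""
    v_total_sq = fwhm_to_velocity(fwhm_ang, lambda_ang) ** 2
    v_thermal_sq = 2.0 * K_B * t_kelvin / ion_mass / 1e6  # km^2 s^-2
    return math.sqrt(max(v_total_sq - v_thermal_sq, 0.0))

# Fe xii 195.12 A, average FWHM 0.030 A in the X-region:
v_eq = fwhm_to_velocity(0.030, 195.12)            # ~27.7 km/s
xi = nonthermal_velocity(0.030, 195.12, 1.6e6)    # smaller, after thermal subtraction
print(f"equivalent: {v_eq:.1f} km/s, thermally corrected: {xi:.1f} km/s")
```

The same helper applied to the extreme pixels (0.068 Å at 195.12 Å, 0.18 Å at Fe xv 284.16 Å) returns values close to the ∼62 and ∼115 km s−1 quoted in the text.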
Summary and Conclusions
This study combines observations recorded by the AIA and the HMI on board SDO and by EIS on board Hinode to study the formation, dynamics and plasma parameters of transequatorial loops (TELs). The TELs were observed between the preexisting AR 12230 in the southern hemisphere and the newly emerging AR 12234 in the northern hemisphere. We have performed a comprehensive study of the evolution of the TELs using AIA observations. The physical plasma parameters, such as density, temperature, Doppler and non-thermal velocities, in the loops as well as in the interaction region between those loops are studied using EIS spectroscopic observations. The observations recorded by AIA reveal that, initially, loops belonging to the individual ARs evolve. These loops come closer to each other and form the X-shaped topology, leading to the formation of the TELs. This is highlighted in Fig. 3 and is suggestive of the occurrence of magnetic reconnection. The xt-plots in Fig. 4 show that the inflow speed for the reconnection is 1-2 km s−1 whereas the outflow speed is ∼2 km s−1. These values are comparable to those of Yokoyama et al. (2001); Liu et al. (2011) in a flaring TEL system (∼5 km s−1). However, we note that these are projected velocities measured in the plane of the sky and hence give lower limits on the estimates. The intensity maps corresponding to the four EIS raster periods are derived using various spectral lines formed between log T [K] = 5.80-6.35. At lower temperatures, corresponding to Fe viii to Fe x, the X-region is seen very clearly, with the adjacent loops being very bright. The intensity images of Fe xii (formed at log T [K] ≤ 6.20) show bright X-regions, whereas in Fe xv (log T [K] = 6.35) the X-region appears to be filled with plasma at E1 but completely void at E2 and partially filled at E3 and E4.
Such darkening at the reconnection site has been attributed to density diminution (Delannée & Aulanier 1999; Tripathi et al. 2007; Sun et al. 2015). This result is reinforced by plotting the light curves in Fig. 8. Yokoyama et al. (2001) reported similar voids in soft X-ray observations of magnetic reconnection events leading to flares. Within the X-region, for all four EIS raster periods, the electron densities are maintained steadily at log N_e = 8.46 to 8.67 [cm−3] for the Fe xii and Fe xiii lines. The EM-loci curves suggest that the plasma is very nearly isothermal at log T [K] = 6.20 (i.e., 1.6 MK) within the X-region, which is somewhat larger than the temperature reported by Liu et al. (2011) at the cusp region (1.3 MK). However, Sun et al. (2015) found higher temperatures, between 1-5 MK, at the magnetic reconnection site between TELs, but with an associated flare. Also, there are hints of multithermality of the plasma in the X-region at E3. High-cadence observations, at an interval of 12 seconds, further reveal intermittent brightenings occurring close to the X-region (marked as P2 in Fig. 5). Coincidentally, such brightenings occurring during the E2 phase of the EIS raster observations were captured exactly by four exposures. This gave us the opportunity to understand the plasma parameters at this brightening point. Density estimates at P2 show that it has an intermediate value for the Fe xii line. Further, the plasma at P2 has a slightly higher temperature, log T [K] = 6.20-6.30. We emphasize that we have not incorporated background/foreground emission for the temperature estimates, for reasons explained in §3.4.1.1. We have also compared the densities and temperatures obtained in the X-region with those obtained in the TELs and other AR loops. The average electron densities in the TELs (location L of Fig. 9) exceed those in the X-region for all four rasters. Similarly, the electron densities at location D are larger than those in the TELs. However, the densities at L are noted to be lower, by an order of magnitude, than those reported by Liu et al. (2011) using the Si x line of CDS. We also find that, just beneath the X-region, a set of loops persists at ∼2 MK, and that the TELs are at intermediate temperatures. The existence of such hot loops neighbouring the X-region has also been reported by Delannée & Aulanier (1999); Glover et al. (2003); Pevtsov (2004); Balasubramaniam et al. (2005), though in the presence of eruptive events like flares or coronal mass ejections. The peak formation temperature in the loops belonging to the individual ARs is similar to that obtained at P2. The Doppler velocity maps show a mixture of upflows and downflows for the comparatively cooler spectral lines in the X-region. Near the disk-center position, Fe xii, Fe xiii and Fe xv show strong downflows, but a mixture of zero velocities and blueshifts away from the center. These flows are ∼5-8 km s−1 upflows/downflows, depending on the raster period. We note that the loops emanating from the reconnection show similar magnitudes of Doppler velocities. With off-limb observations, Harra et al. (2003); Brosius (2006); Liu et al. (2011) have also reported similar bidirectional flows in such loops. The on-disk LOS velocities form the third mutually orthogonal component to the outflow speeds at the X-region noted above. In the X-region, the average FWHM is 0.03 Å, which translates to about 27 km s−1, with the maximum being ∼62 km s−1 (corresponding to the brightest pixels) for Fe xii and Fe xiii. However, for Fe xv, the X-region shows a FWHM of ∼0.05 Å, translating to ∼30 km s−1, whereas the maximum is 0.18 Å, equivalent to ∼115 km s−1. It is interesting to note that, in the case of Fe xv, the enhanced-FWHM region coincides with the dark X-region in the intensity map.
Fig. 12. EM loci curves for the P2-region shown in Fig. 9. Also plotted are the histograms representing the number of crossings within a given temperature bin. No background/foreground emission has been considered.
We note that the non-thermal velocities are obtained considering the temperature of the X-region (log T [K] = 6.20). The Doppler velocities and FWHM in the P2 region are also comparable to those in the X-region. We highlight that the temperatures used for these estimates are representative of the electron temperature of the plasma, which can be considerably different from the ion temperatures. This is particularly true in regions undergoing magnetic reconnection, where equilibrium conditions are no longer valid. Considering that the ion temperatures are generally larger than the electron temperatures, it is apparent that the non-thermal velocities are somewhat overestimated here (Seely et al. 1997; Tu et al. 1998; Landi 2007). The results obtained from the xt-plots derived using the AIA observations, combined with those from EIS, suggest that the TELs formed through the process of reconnection at the X-region between the two active regions. This study, for the first time, provides measurements of plasma parameters such as electron density, temperature, Doppler shifts as well as non-thermal velocities at the X-region. We interpret this as an example of homologous low-intensity magnetic reconnections occurring in TELs, where the energy released is predominantly used in plasma flows and as a kinetic energy source for the field lines snapping away and getting reoriented in some other direction. The intensity increment observed in the AIA 193 Å images is very small and is not accompanied by any flares or coronal mass ejections.
Table 2. Intensities averaged over the X-region ('P'), TELs ('L'), hot loops beneath the X-region ('D'), the bright region hosting recurrent brightenings ('P2') and the region considered for background/foreground assessment ('B'), respectively, in Fig. 9. The averaged intensities for Mg vii and Si vii are not noted here because the background averaged intensities exceed those in the region(s) under consideration for all four EIS observations.
Table 3. Plasma parameters for different locations in the FOV using different EIS spectral lines. These are computed over the white (enclosing the point 'P' of panel A in Fig. 4), blue (TELs, marked 'L') and black (loops at the bottom, marked 'D') boxes marked in Fig. 9. Similar computations are done for the 'P2' region, which undergoes recurrent intensity enhancements during E2 (as shown in Fig. 5). The density estimates consider background/foreground emission over the region marked 'B' in Fig. 9. No background/foreground emission treatment has been done for the temperature estimates.
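The EM-loci diagnostic used throughout the paper plots, for each line, the curve EM_i(T) = I_obs,i / G_i(T); a near-common crossing of the curves indicates isothermal plasma. The minimal sketch below illustrates the mechanics only: the Gaussian contribution functions, peak temperatures, intensities and the EM value are all hypothetical stand-ins for real CHIANTI G(T) and measured intensities, chosen so that the loci must cross at log T = 6.20.

```python
import numpy as np

def g_of_t(log_t, log_t_peak, width=0.15):
    # Hypothetical Gaussian contribution function; real G(T) come from CHIANTI.
    return 1e-24 * np.exp(-0.5 * ((log_t - log_t_peak) / width) ** 2)

log_t = np.linspace(5.9, 6.5, 601)
lines = {"Fe xii": 6.20, "Fe xiii": 6.25, "Fe xv": 6.35}   # illustrative peaks

# Synthesize intensities from a single isothermal component at
# log T = 6.20 with EM = 1e27 (illustrative), so the loci cross there.
em_true, log_t_true = 1e27, 6.20
intensities = {name: em_true * g_of_t(log_t_true, peak) for name, peak in lines.items()}

# EM-loci: EM_i(T) = I_i / G_i(T). The isothermal temperature is where the
# curves (nearly) intersect, i.e. where their spread in log space is minimal.
loci = np.vstack([intensities[name] / g_of_t(log_t, peak) for name, peak in lines.items()])
spread = np.std(np.log10(loci), axis=0)
log_t_best = log_t[np.argmin(spread)]
print(f"best isothermal fit: log T = {log_t_best:.2f}")   # -> log T = 6.20
```

With real data the curves never cross exactly at one point; the histograms of pairwise crossings per temperature bin shown in Figs. 11-13 are one way of quantifying how close to isothermal the plasma is.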
Increased Intraocular Pressure after Extensive Conjunctival Removal: A Case Report
A 50-year-old woman, who had undergone extensive removal of conjunctiva on the right eye for cosmetic purposes at a local clinic 8 months prior to presentation, was referred for uncontrolled intraocular pressure (IOP) elevation (up to 38 mmHg) despite maximal medical treatment. The superior and inferior conjunctival and episcleral vessels were severely engorged and the nasal and temporal bulbar conjunctival areas were covered with an avascular epithelium. Gonioscopic examination revealed an open angle with Schlemm's canal filled with blood to 360 degrees in the right eye. Brain and orbital magnetic resonance imaging and angiography results were normal. With the maximum tolerable anti-glaucoma medications, the IOP gradually decreased to 25 mmHg over 4 months of treatment. Extensive removal of conjunctiva and Tenon's capsule, leaving bare sclera, may lead to an elevation of the episcleral venous pressure because intrascleral and episcleral veins may no longer drain properly due to a lack of connection to Tenon's capsule and the conjunctival vasculature. This rare case suggests one possible mechanism of secondary glaucoma following ocular surgery.
Episcleral venous pressure (EVP) has been reported to be elevated in patients with carotid cavernous sinus fistulae, thyroid ophthalmopathy, Sturge-Weber syndrome, other general venous static conditions, and, rarely, in the absence of any identifiable cause [1]. Elevated EVP increases intraocular pressure (IOP) because IOP is determined by the balance between aqueous production rates and the EVP [2-6]. Thus, the cause and extent of EVP elevation need to be determined. We recently treated a patient with intractably elevated EVP and IOP after extensive removal of bulbar conjunctiva.
A deficiency of conjunctival vasculature in this patient may have been the cause of the pathogenic EVP elevation, or may have at least aggravated the condition.
Case Report
A 50-year-old Asian woman, who had undergone conjunctival and Tenon's capsule removal surgery on the right eye for cosmetic purposes at a local clinic 8 months prior to presentation, was referred to our University-associated tertiary-care eye center due to uncontrolled elevated IOP. She did not have any other significant past medical or social history. She did not take any systemic medications. Before conjunctival removal surgery, she had complained of redness in the right eye for 3 years, although she did not have ocular pain or irritation. The left eye did not show redness. Before surgery, IOP measurement was performed at two different visits. Goldmann applanation tonometry revealed that IOPs were 19 and 13 mmHg at the first visit, and 19 and 12 mmHg at the second visit, in the right and left eye, respectively. The optic disc and visual field (VF) were apparently normal. Average retinal nerve fiber layer (RNFL) thickness, determined by Stratus optical coherence tomography (OCT), revealed mild asymmetry: the average RNFL thickness was 91.77 microns in the right eye and 103.22 microns in the left (Fig. 1A). Her prior surgery included extensive removal of nasal and temporal bulbar conjunctiva and Tenon's capsule on exposed areas of the sclera. Upper and lower triangular layers of conjunctiva and Tenon's capsule, from horizontal incisions at the nasal and temporal canthal areas, were excised. The superior and inferior conjunctiva, which were covered by the eyelid, were not removed. The surgical area was left as bare sclera, and the sclera healed with a thin overlying epithelium. After conjunctival removal surgery, the patient was treated with fluorometholone (0.1%) for 2 months. The IOP began to increase in the right eye 1 week after conjunctival removal surgery. The IOP ranged from 30 to 35 mmHg, despite prescription of maximum tolerated medications. An OCT scan performed 4 months postoperatively revealed a reduction in the average RNFL thickness in the right eye from 91.77 to 78.09 microns, while the left eye demonstrated minimal change, from 103.23 to 100.86 microns (Fig. 1B). The patient was then referred to our clinic.
(© 2013 The Korean Ophthalmological Society. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.)
At first examination in our clinic, the IOP in the right eye was 38 mmHg with three anti-glaucoma medications (dorzolamide-timolol, latanoprost 0.005%, and brimonidine tartrate 0.15%). The superior and inferior conjunctival and episcleral vessels were severely engorged (Fig. 2A and 2B), and the nasal and temporal bulbar conjunctival areas were covered with an avascular epithelium, within which a few nasal and temporal episcleral vessels were also engorged (Fig. 2C). Gonioscopic examination revealed an open angle, with Schlemm's canal filled with blood to 360 degrees in the right eye (Fig. 2D). The left eye was normal. Ocular examination revealed no evidence of bruit, chemosis, proptosis, or extraocular muscle limitation, suggesting no apparent causes for EVP elevation in either eye. Brain and orbital magnetic resonance imaging and angiography results were normal. The VF was also normal 8 months after surgery. We prescribed timolol 0.5%/brinzolamide 1% combination twice a day, apraclonidine 0.5% three times a day, travoprost 0.004% once a day, and oral acetazolamide (250 mg four times a day). After 4 months, the IOP gradually decreased to approximately 25 mmHg, and the oral acetazolamide was tapered. Current treatment includes three topical anti-glaucoma medications, and the IOP remains approximately 25 mmHg. We will consider surgical treatment if a VF defect develops.
Discussion
EVP elevation is one cause of IOP elevation. EVP elevation may result from a carotid cavernous sinus fistula, thyroid ophthalmopathy, or Sturge-Weber syndrome. However, in some patients no definite cause of EVP elevation can be determined [1]. We suspect that the current patient had signs of mild EVP elevation in the right eye prior to surgery. There was evidence that the IOP in her right eye was relatively higher than the left (19 mmHg vs.
12-13 mmHg on two separate measurements): her right eye was red (perhaps attributable to episcleral venous engorgement) and the average RNFL thickness was relatively thinner in her right eye than in her left. These data may indicate that either chronic or mild IOP elevation, caused by a rise in EVP, and subsequent slight glaucomatous structural loss in the right eye, may have occurred prior to surgery. Aqueous humor drainage is achieved principally by the conventional trabecular meshwork, which is connected to Schlemm's canal. Aqueous outflow drains from Schlemm's canal to the aqueous vein via collector channels, and then to the episcleral and conjunctival veins [5]. Thus, extensive removal of the conjunctiva and Tenon's capsule layers may result in loss of the conjunctival veins that contribute to aqueous drainage. Therefore, in the present case, the aqueous humor, which had drained preoperatively through the nasal and temporal conjunctival veins along with the episcleral veins, was forced to exit through the residual episcleral veins. Overloading of such veins with aqueous humor may aggravate any pre-existing elevation in EVP. Alternatively, the observed increase in IOP may be attributable to other causes, including postoperative inflammation or steroid use. However, no anterior chamber reaction occurred after surgery, and immediate IOP elevation after surgery is not a typical response to steroid treatment. More importantly, the presence of blood in Schlemm's canal throughout the full 360 degrees of the angle, and the engorged superior and inferior episcleral vessels, are highly suggestive of EVP elevation. In summary, extensive removal of limbus-based conjunctiva and Tenon's capsule, leaving bare sclera, likely led to an elevation of the EVP because the intrascleral and episcleral veins could no longer drain properly, due to the lack of connection to Tenon's capsule and the conjunctival vasculature. In a retrograde manner, the pressure in the collector channels and the IOP were increased in this eye.
Although this particular eye could have been predisposed to such a reaction (signs of a mildly elevated EVP were retrospectively found to have been present prior to surgery), this rare case suggests one possible mechanism of secondary glaucoma following ocular surgery.
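The balance invoked in this report between aqueous production and EVP is the Goldmann equation, IOP = F/C + EVP (aqueous flow F, outflow facility C, ignoring uveoscleral outflow). A minimal numerical sketch, using typical textbook values for F and C (illustrative assumptions, not measurements from this patient), shows how a rise in EVP alone translates one-for-one into pressures of the magnitude reported:

```python
def goldmann_iop(aqueous_flow_ul_min, outflow_facility_ul_min_mmhg, evp_mmhg):
    """Goldmann equation, neglecting uveoscleral outflow: IOP = F / C + EVP."""
    return aqueous_flow_ul_min / outflow_facility_ul_min_mmhg + evp_mmhg

# Illustrative textbook values: F = 2.5 uL/min, C = 0.25 uL/min/mmHg.
# A normal-range EVP of ~9 mmHg gives a normal IOP; raising only the EVP
# to ~28 mmHg reproduces an IOP in the high 30s, as seen at presentation.
print(goldmann_iop(2.5, 0.25, 9))    # 19.0 mmHg
print(goldmann_iop(2.5, 0.25, 28))   # 38.0 mmHg
```

The sketch also makes clear why aqueous suppressants (timolol, brinzolamide, acetazolamide) lower but cannot normalize the IOP here: reducing F shrinks only the F/C term, while the elevated EVP sets a floor on the achievable pressure.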
Women's Decision-Making about PrEP for HIV Prevention in Drug Treatment Contexts
Despite pre-exposure prophylaxis's (PrEP) efficacy for HIV prevention, uptake has been low among women with substance use disorders (SUDs) and has been attributed to women's lack of awareness. In semistructured interviews with 20 women with SUD and 15 key stakeholders at drug treatment centers, we assessed PrEP awareness and health-related decision-making. Women often misestimated their own HIV risk and were not aware of PrEP as a personally relevant option. Although women possessed key decision-making skills, behavior was ultimately shaped by their level of motivation to engage in HIV prevention. Motivation was challenged by competing priorities, minimization of perceived risk, and anticipated stigma. Providers were familiar but lacked experience with PrEP and were concerned about women's abilities to action plan in early recovery. HIV prevention for women with SUD should focus on immediately intervenable targets such as making PrEP meaningful to women and pursuing long-term systemic changes in policy and culture. Efforts can be facilitated by partnering with drug treatment centers to reach women and implement PrEP interventions.
Introduction
Women with substance use disorders (SUDs) experience high HIV risk by virtue of substance use behaviors (including injecting) and overlapping sex and drug use networks. They are more likely than women without SUD to interact with the criminal justice system (CJS) [1], become involved in sex work [2,3], and experience physical and sexual violence [4], each of which independently increases HIV risk [5-7]. Gender-specific social and structural barriers to health-care and service engagement are often overlooked in HIV prevention interventions [8,9]. Public health campaigns to reduce HIV risk have focused on promoting condom use [10] and safe injection practices [11], but these methods are often not fully user-controlled and may not be feasible in the context of reduced autonomy and exposure to intimate partner violence (IPV). In contrast, pre-exposure prophylaxis (PrEP) is potentially a partner-independent, woman-controlled tool. Clinical trials [12-14] and post hoc analyses [15] have demonstrated that PrEP effectively prevents HIV in high-risk groups of heterosexual women and people who inject drugs when adherence is optimized. There are multiple additional ongoing and planned PrEP demonstration and implementation projects relevant to women with SUD [16]. Yet, PrEP remains underutilized among women [17-19]. A recent review contends that interventions to increase PrEP uptake for people who use drugs would be more effective if they were based on an adapted information-motivation-behavioral skills (IMB) theoretical framework [20]. The IMB model of PrEP uptake asserts that, to the extent that an eligible person is informed, motivated, and has the necessary behavioral skills to initiate PrEP, they will successfully overcome obstacles to do so. We applied the IMB model to assess potential barriers and facilitators to PrEP uptake and other forms of HIV prevention among women with SUD in treatment settings. By conducting an in-depth analysis of individual-, social-, and structural-level barriers to PrEP uptake among women with SUD, we sought to advance the public conversation about PrEP for women with SUD beyond merely increasing awareness to targeting contextually relevant barriers [21-23]. Whereas few women are aware of PrEP as a personal option for HIV prevention, we expanded the scope to consider how women with SUD make HIV prevention and health-related decisions in general, thereby informing future PrEP interventions by anticipating potential areas of decisional conflict.
Setting
The parent study, known as OPTIONS, was designed to inform, develop, and test a patient-centered decision aid about PrEP for women with SUD (registered on Clinicaltrials.gov as NCT03651453). This study was conducted at the largest drug treatment center in a mid-sized city in New England, with nearly 5000 patients on methadone annually, approximately one-third of whom are women. A full array of medications for opioid use disorder and behavioral therapies is offered across multiple sites. HIV testing is available as "opt-out" on initial intake, and PrEP is available through onsite medical providers for people who meet CDC-recommended clinical criteria [24].
Participant Recruitment
Women with SUD were recruited through brochures and fliers at drug treatment facilities. Treatment center staff could refer participants through a HIPAA-secure Qualtrics link. A dedicated research assistant and study coordinator screened referred participants via a private study phone line for the following inclusion criteria: self-identification as female (cis- or trans-), age ≥ 18 years, self-reported HIV-uninfected or status unknown, and receiving treatment at the collaborating drug treatment center. Participants were excluded if they were experiencing symptoms of physiological withdrawal that could interfere with informed consent. Stakeholders were recruited by a trained research assistant onsite at the partnering drug treatment center. Treatment center staff in any professional capacity, including physicians, nurses, social workers, counselors, case managers, medical assistants, and administrators, were eligible to participate.
Interview Procedures
A semistructured interview protocol, based on the Ottawa Decisional Needs Assessment [25], incorporated questions about HIV prevention needs, PrEP awareness, the perceived role of substance use in HIV risk, opportunities for HIV prevention interventions in drug treatment settings, and basic participant characteristics.
The interview focused on key domains relevant to decisional needs, including factors contributing to decisional conflict, knowledge, values, and resources for support. Women were asked to reflect on options to protect themselves from HIV, hepatitis C, and other sexually transmitted infections, and on decisional conflict in terms of how they weigh the pros and cons of each option (see Supplementary Appendix for topic guide). Interviews were conducted by 2 trained research assistants at research offices, in private rooms at treatment centers, or over the phone and lasted approximately an hour. All interviews were audiorecorded. Participants were compensated with a $20 gift card. Stakeholders did not receive cash compensation for participation. Analysis Recorded interviews were transcribed using a HIPAA-compliant transcription service and imported into Dedoose. Data were independently coded by 2 authors using predetermined nodes, which were generated based on the IMB model for PrEP uptake. 20 Through a dynamic process, findings were discussed in team coding meetings and further nodes were added or expanded to generate a hypothesized framework of health-related decision-making among women with SUD (Figure 1).

What Do We Already Know about This Topic? Women with substance use disorders (SUD) have multiple overlapping risk factors for HIV and stand to benefit from HIV prevention tools like pre-exposure prophylaxis (PrEP), but thus far uptake in this population has been suboptimal. Prior cross-sectional and qualitative surveys have revealed that women with SUD have low awareness but high acceptability of PrEP.

How Does Your Research Contribute to the Field? To probe deeper into women's low PrEP uptake and health-related decision-making, we conducted qualitative interviews with women and key stakeholders in drug treatment programs. We found important HIV risk misperceptions and competing priorities that contribute to lack of health-care engagement and would make PrEP engagement more challenging.

What Are Your Research's Implications toward Theory, Practice, or Policy? For PrEP to be successfully implemented in drug treatment for women with SUD, it must be made contextually relevant by addressing key motivating factors and reframing estimations of personal HIV risk.

Information was assessed in terms of knowledge about HIV and PrEP. Motivation was defined in terms of beliefs about HIV, HIV risk, PrEP, health (more generally), and health-care providers in relation to trust and perceived stigma. Motivation was also evaluated in the context of competing priorities, including individual-level (substance use and cravings, mental health), social (violence exposure, commercial sex work [CSW], parenting), and structural (basic subsistence needs, criminal justice involvement) priorities. Skills included action planning or clarification of steps to achieve a specific goal, critical thinking or objective analysis and evaluation of an issue, and impulse control or the ability to resist an urge or impulsive behavior. The main behavioral outcome of interest was use of PrEP, but since so few women were on PrEP, we also assessed engagement in other HIV prevention activities including HIV testing, condom negotiation with partners, safe injecting practices, and engagement in drug treatment or other medical/psychiatric care. Salient themes with exemplary quotes are presented here, organized according to the IMB model. Ethical Approval and Informed Consent This study was approved by the Yale University IRB and the Operations Management Team at the APT Foundation, Inc. All eligible participants who wished to enroll were asked to complete written informed consent and a release of information. Only participants who provided informed consent completed the survey and provided data for analysis. Participant Characteristics We interviewed 20 women with SUD aged between 25 and 62 years (Table 1, Panel A).
Eight women reported supplemental security income as their primary source of income and none were employed at the time of interview. Most were unstably housed: 12 women rented an apartment or room, 2 were staying in shelters, and the rest reported being homeless. Half of the women reported experiencing some form of physical or sexual violence in their lifetimes. Twelve women with SUD were people who injected drugs (PWIDs) and 17 had used opioids. Fourteen women identified as mothers. We interviewed 15 stakeholders (Table 1, Panel B), all of whom had direct patient contact and experience working in addiction services (ranging between 1 and 23 years). Table 2 depicts each theme with exemplary excerpts, described further below. Information Participants generally understood basic principles of HIV transmission and treatment. Nine women reported that they underwent HIV testing frequently. Some women talked about how knowing their sexual partner and their testing history was important for self-protection and harm reduction. Seven women reported that they were abstinent. Women with friends or family with HIV were highly knowledgeable about treatment and prognosis, but a recurring worry expressed across multiple interviews was that general awareness about HIV had waned in recent years. Most women interviewed had not heard of PrEP. Among the 7 who had, 1 had learned about PrEP through a research study while the rest had learned through word-of-mouth. Some providers felt concerned about the low level of PrEP awareness among their clients and recommended increasing visibility through TV commercials, pamphlets, and targeted marketing ads. Although many providers had heard of PrEP, most expressed an interest in receiving more training to better counsel clients. Some saw themselves more as "gateway providers" who could refer patients out to specialists for PrEP counseling and initiation.
One provider said she raised PrEP with her male or transgender clients who had sex with men but had not considered it for heterosexual women. Motivation Women's motivation to engage in health-promoting behaviors was influenced by multiple competing priorities and certain preformed beliefs. One counselor commented: "Helping people make decisions can be tricky . . . you really can't ignore the systems that people are involved with and their histories as far as the impact of trauma, poverty, mental health and how all of those intersect and really can make things difficult for people." Women often had to make calculated tradeoffs between basic subsistence needs (food, housing, income, transportation), responsibilities of motherhood, meeting the demands of addiction, and coping with trauma. In this way, health-promoting behaviors including HIV prevention (and potentially, PrEP) were sometimes deprioritized. Competing structural factors. Both women and providers noted that basic survival needs took priority over HIV prevention or health. One woman described how overcoming homelessness allowed her to gain autonomy in sexual decision-making: "I went through years back and forth of homelessness . . . livin' in shelters . . . sleepin' in streets, sleepin' with people just to have a place to stay . . . [now] if I decide to do something we gotta have condoms on, lights on, I wanna see it, I wanna smell it, I wanna look at it." One provider noted that his clients were being pushed out of urban centers by gentrification, thereby isolating them from resources "and having them connected to HIV prevention is almost impossible when they can't even get to the clinic daily because their transportation is so irregular." Beyond transportation, most women's concerns about PrEP were logistical, including identifying and accessing a PrEP provider, having insurance to adequately cover PrEP, and being able to afford the cost of copays and associated doctor visits for follow-up care. 
Stigma and fear of criminalization discouraged women with SUD from seeking treatment and accessing health-related resources, which further contributed to isolation and generated additional logistical challenges. This isolation would likely extend to PrEP care engagement. In discussing their perceived HIV risk and prior health-care engagement, 10 women raised their prior interactions with the criminal justice system (CJS). Seven had been previously arrested or incarcerated for drug-related offenses or prostitution, and one was on probation for prescription tampering. Preventative services and drug treatment were felt to be lacking in prisons and jails, and the punitive handling of addiction by the CJS created undue burden (frequent court dates, urine drug screens, law enforcement mandates). Justice-involved women felt stigmatized and restricted: "I'm a licensed EMT. I can't get a job anywhere cause of my criminal record." Unemployment and financial insecurity forced some women to rely on partners for basic subsistence needs. Providers noted that, for many women, partners controlled living arrangements, household finances, and any outside communication. This may have extended to health-care engagement. Competing social factors. Women described interpersonal relationships that often detracted from health-promoting behaviors, including using condoms for HIV prevention. This suggests the importance of a woman-controlled, partner-independent HIV prevention tool like PrEP. Many of the women interviewed had lifetime experiences of physical and sexual violence that forced them to choose between personal health and safety. One woman described how condom negotiation precipitated violence: "My ex . . . I know he slept around . . . when I'd say something to him, like, 'Wear protection' or something, he'd give me a beating thinking I didn't trust him." Another woman shared that her partner initiated her to injecting heroin and often secondarily injected her without her consent.
Four women who engaged in CSW tried to use condoms but felt pressured to comply with client requests because men paid more for condomless sex. Violence from commercial partners was common in CSW: "With the prostitution stuff . . . first you gotta not be murdered, and then you can worry about condoms."

Table 2 (excerpt). Themes with exemplary excerpts:

"What I would want from my healthcare provider was not to feel like I'm dirty or like I'm just some scumbag. I'll be treated like everybody else whether I'm a drug addict or not . . . once you become a drug addict on paper, God forbid you ever need help again. You're faking. You're drug-seeking . . . I won't go to the doctor because of those things." (Key informant, 25-39 years old)

" . . . A lot of our clients would rather go without their methadone than tell the doctor treating them that they're on methadone . . . In general, I think a lot of our clients do not trust healthcare professionals. I think they trust their medical judgement, but they don't trust that they'll be treated with respect." (Counselor)

Behavioral skills: Action planning
"The way that I managed to not contract [HIV] was, there was a needle truck that was in the area that I was using in, and they were there on certain days at certain times, and I made sure to get my ass to that truck." (Key informant, 25-39 years old)
"One of the difficult things I think for patients, sometimes, with something like PrEP, is the idea of planning ahead, that it's not necessarily in the moment . . . Whereas I think if you're offering something around, oh, I can get you situated in terms of your housing. That's immediate." (Clinician)

Behavioral skills: Critical thinking skills and health literacy
"I think some people may feel overwhelmed thinking about [HIV prevention]. They may not really know how to protect themselves. They may not really know who to ask, as far as providers, what kind of information they could receive." (Social worker)
"I find it hard to make a decision if I don't know all my options out there . . . I think the more options you're given, the better decision you can make . . . I'm lucky that I moved here and have the options that I do now to get on my feet, to get off the drugs, and hopefully I'll be a productive member of society one day, but you have to be able to have the options and education to do that." (Key informant, 25-39 years)

Behavioral skills: Impulse control
"Once you're smoking, your mindset is not right, so you're liable to have sex with somebody for money, just do the things that you wouldn't necessarily do . . . Mostly anybody that I got high with is dead from AIDS or getting shot or ODing. It's crazy." (Key informant, 40-60 years)

Abbreviation: PrEP, pre-exposure prophylaxis.

Pregnancy prevention was a much stronger behavioral motivator for condom use than HIV prevention, as one clinician explained: "Typically the women I've treated have been more focused on, 'Is my partner gonna be wearing a condom?' Or, 'I don't wanna become pregnant' rather than, 'Oh maybe I'm at risk of HIV.'" Until dual HIV and pregnancy prevention modalities are developed, PrEP will not be able to address women's needs for pregnancy prevention. Providers felt that social connections empowered clients to make healthier decisions. Five stakeholders suggested community outreach and peer support for HIV prevention: "I feel like the community does need more education on how to treat people who have an addiction or HIV, and how to give that information in a way that doesn't make them feel condemned or shameful, like in a way that motivates them to change . . . If there were peer support that might help." Women were also interested in peer-driven knowledge: "I'm gonna tell everybody I know about this medicine [PrEP] . . . information like this changes people's life." Competing individual-level factors. Women's HIV risk was primarily driven by substance use. Many women provided vivid descriptions of how getting high was, at times, the "sole and primary focus" of their lives.
The need to avoid withdrawal superseded all other priorities, including personal health and safety: "I didn't really think about [HIV risk] because it didn't really matter. I needed to get high. I was getting high regardless, even if you told me you had AIDS. If I was sick and needed to get high and you had a needle that I had to use, I'd still use it." Women experiencing active withdrawal also took more risks while engaging in CSW: "If your body's sick from something, heroin or whatever, you're not thinking about does so-and-so have a condom . . . You're just gonna go, I need this money to get myself well. I'll worry about that risk later." PrEP programs need to consider that women at highest risk for HIV (and most in need of PrEP) might require extra support to engage. Many women experienced lifetime trauma and 2 women discussed using drugs to cope with trauma. Providers noted that trauma reduced autonomy, self-efficacy, and self-esteem, which made it difficult for women to advocate for themselves with partners or health-care providers, including using condoms, safe injecting techniques, or PrEP to prevent HIV. Although many women were broadly aware of HIV, most did not feel they were personally at risk despite past unprotected sex with unknown or multiple partners, engaging in transactional sex, having sex while intoxicated, or sharing injecting equipment. Two women specifically cited concerns about HIV as a reason for never sharing needles. Providers felt that women with SUD were less concerned about HIV than other health issues: "They're more focused on the pregnancy prevention and thinking about condoms more than anything else . . . I don't hear a lot of talk of [HIV] . . . there's a lot more talk about Hep C just because it's so prevalent in our population." 
Many women believed that sharing needles or having unprotected sex was safer if it was with a known partner, even if the partner was engaging in high-risk injecting or had HIV: "I thought, you know, it's just me and him. He's clean. I've been with him for such and such time . . . I'm not gonna catch HIV or anything like that from him because if I was going to catch anything, I would have already caught it." Providers tried to help women reshape their perceptions of risk by challenging statements of denial or minimization of risk. Five providers were concerned about some clients' false sense of security or "invincibility" if they had averted HIV despite high-risk behaviors, subscribing to the notion that "it won't happen to me" or underestimating the risk of their own behaviors. Some providers felt that risk misperception was not due to a deficit of knowledge but rather rationalization: "If you try and get them to do something different, you're taking away that choice . . . there's some function to the behavior that's not immediately obvious to other people." Many observed that clients regretted their risky decisions, but developed thought patterns, such as "if I don't get tested then I don't have HIV" or "if I don't think about [HIV risk] then it's not real and it's not gonna happen." Most women who had not heard about PrEP were receptive to the idea and felt that it would be useful for women with SUD: "At the end of the day it comes down to just staying protected . . . We're not gonna stop sharing needles . . . but knowing you can take a pill every day-using protection when we can-if we do have to use somebody's needle, then we know we're still gonna be okay." Other women expressed enthusiasm about PrEP in terms of it preventing HIV but needed more information before deciding if it was right for them. Four women and 1 provider cited concerns about risk compensation.
One woman stated that the fear of HIV made her take "more precautions about other things and [HIV is] certainly not the only STD to be worried about." PrEP stigma was also a concern, both internalized (the implications of taking PrEP in terms of the kind of person they were) and externalized (what others and partners would think of them for taking it). Providers expressed concerns that PrEP might lead to partner retaliation and IPV exposure risk. Since PrEP is currently available only by prescription, we assessed women's interactions with systems of care. Reluctance to interact with medical systems was largely due to perceived stigma and past experiences. Five women mentioned negative past experiences with health-care providers but 2 said they would still listen to the recommendations of a trusted provider. Many stakeholders also discussed how incorrect assumptions (ie, PrEP being mistaken for HIV treatment) perpetuated stigma and resulted in negative experiences for clients. In contrast, as 1 clinical supervisor elaborated: "Having people encounter as many positive experiences with healthcare as possible, I think is really important too in helping make those decisions easier." Behavioral Skills Many women demonstrated skills to plan out health-promoting actions that are necessary for engagement in PrEP and other HIV prevention services, such as managing doctor's appointments, navigating medical insurance, obtaining sterile injecting equipment, and completing HIV testing. One provider noted that even in active addiction, some women manage to practice safe behaviors: "There are people that are emphatic about well, whether I'm using or not, I'm gonna keep myself safe." All 8 women who had ever tested for HIV reported being frequent testers. Women who injected drugs described obtaining sterile injecting equipment from doctor's offices, needle syringe programs, pharmacies, and family members with diabetic supplies.
Some counselors tried to work on building condom negotiation skills but recognized that power dynamics were often unfavorable. Four women said they did not like condoms because they detracted from excitement and diminished pleasure. Planning ahead for sex was seen as "no fun" and a provider concluded: "For some women and men, I think the idea of being intimate or having sex involves spontaneity . . . The idea of having a lot of forethought and planning . . . somewhat takes the fun out of the whole situation." Other women did think planning ahead was a part of healthy relationships, and one woman told her partner, "I'm not gonna be intimate with you until you actually go to the doctor, get the [HIV testing] paperwork, just like I did." This is particularly relevant to PrEP which, though requiring action planning to obtain and adhere to a daily medication, does not require planning for sex. Women also demonstrated critical thinking skills when asked to describe their decision-making processes for various health-related behaviors, such as whether to start a new medication, which is particularly relevant to PrEP initiation. For example: "I think about the pros and the cons. I read the paperwork that comes along with the medication, and I look at the side effects. Then I see how my body adjusts. If I don't like the side effects, then I'm gonna go in [to] whoever prescribed it to me. If [side effects are] too much to deal with, then I'm gonna say, 'Is there another drug you can give me?' or, 'Maybe I don't need it.'" Both providers and patients mentioned information overload as a barrier to health-related decision-making that was especially challenging for women with low health literacy. Eight women listed coping with side effects as the primary concern about starting a new medication, and women weighted side effects in terms of perceived severity.
For example, headaches might be manageable whereas side effects "like going through chemo" would be intolerable for any medication, though not necessarily relevant to PrEP. Women were heterogeneous in their ability to action plan and control impulses. Six providers expressed concerns about clients realistically engaging in long-term goal-directed behavior, especially during early stages of recovery. Women also identified that lack of impulse control contributed to their risk behaviors related to substance use (Table 2). Although some PrEP providers may see this as a reason to defer PrEP in women with SUD, 2 providers suggested incentivizing PrEP with immediate rewards to increase uptake: "One of the difficult things I think for patients, sometimes, with something like PrEP, is the idea of planning ahead, that it's not necessarily in the moment . . . Whereas I think if you're offering something around, oh, I can get you situated in terms of your housing. That's immediate." Discussion In this qualitative study, we explored why and how women with SUD, a key target population for HIV prevention, have low awareness but high potential acceptability of PrEP and other HIV prevention tools. Qualitative findings illustrate how a combination of information, motivation, and behavioral skills are necessary to engage in health-promoting behaviors in general and PrEP specifically (Figure 1). Decision-making practices around health were driven by competing priorities, health beliefs, and health attitudes. This deep dive into women's decision-making processes and choice heuristics is critical to developing and implementing effective multilevel interventions to increase PrEP uptake among women with SUD. Although stakeholders acknowledged that limited direct-to-consumer marketing and lack of inclusive messaging affected women's PrEP awareness, other issues shaped women's health-related decision-making more broadly.
Women consistently underestimated personal risk for HIV so that PrEP, when pitched as an HIV "risk reduction tool," was not personally relevant. Risk misperception among women with SUD seemed to have resulted from rationalization and minimization of risk rather than from knowledge gaps. Women tended to appraise HIV risk in ways that ultimately supported the conclusion they desired, a pattern described in other at-risk populations 26 that simplifies complex HIV risk estimation into rules that conflate familiarity with trust and safety (eg, known partners are safe partners, monogamous sex is safe sex) or ascribe absolute predictive value to social indicators (eg, people who are married or monogamous are not at risk for HIV). In our study, women selectively focused on partner familiarity while minimizing partners' risk. This unconscious cognitive bias is particularly problematic for women because current risk assessments and PrEP eligibility criteria require women to appraise their partners' behaviors (eg, whether they also have sex with men or inject drugs). 27 One benefit of PrEP over other HIV prevention tools (like condoms) is that it is effective regardless of personal or partners' type of risk behaviors. Some women described personal "invincibility" or deliberately avoided HIV testing despite high-risk exposures. For many women, this compounded impulsivity and difficulties with long-term planning, which has been observed in other studies of individuals with SUD 28 and people with behavioral addictions like gambling. 29 Mindfulness-based interventions and goal management training may improve executive function and realign risk perceptions among people with SUD. 30,31 Given the high prevalence of violence, trauma, and posttraumatic stress disorder (PTSD) among women with SUD, 1,8,9 2 upcoming trials are adapting mindfulness-based interventions for women with SUD with a history of trauma.
32,33 These interventions may also be effective at increasing PrEP uptake or engagement in harm reduction programs, though further research is needed. Women with SUD in this study often struggled to meet basic subsistence needs such as housing, transportation, medical care, and a source of stable legal income. These competing priorities may have decreased motivation to engage in HIV prevention. Other large studies, including HPTN 064, 34 have shown how poverty, food insecurity, and ongoing substance use contribute to disparate HIV incidence rates. For PrEP to be meaningful to the women who need it most, it needs to be part of a program (not simply a drug) that includes wraparound services that improve the quality of their daily lives, like housing, employment assistance, and vocational training. Other social determinants of health played a key role in women's health behaviors related to PrEP and HIV prevention. The prevalence of lifetime gender-based violence exposure among US women is 36% 35 and 2 to 5 times higher among women with SUD. 4 Women with SUD experience excess mortality due to violence compared to age-matched peers and to men who use drugs. 6 Previous studies have shown a direct correlation between violence and HIV risk, 1,2 the confluence of which among women with SUD is known as the substance abuse, violence, and AIDS syndemic. 4,36 Women with SUD have high rates of PTSD, 1 which can reduce autonomy, self-efficacy, and self-esteem. Moreover, women with SUD, particularly those who exchange sex, often have limited social capital to negotiate condom use or advocate for personal health and safety. Consistent with findings from other studies, women engaging in sex work in this study reported financial incentives for unprotected sex. 9 Economic dependence on partners is a strong and consistent predictor of condomless sex. 37,38 In contrast, PrEP is a user-controlled tool that does not depend on favorable power dynamics.
Other approaches to increase women's empowerment include microfinance interventions, 34 which have focused on building economic skills and generating an independent source of income. Moving forward, strategies to increase PrEP uptake for women with SUD include incorporating contextually relevant messaging. For instance, PrEP messaging campaigns need to be mindful of stigma, 39 which interferes with service engagement and increases HIV risk. Negative stereotypes about women with SUD reinforce internalized stigma and foster poor self-efficacy, which only further reduces women's agency to engage in health-promoting behaviors. Perceived stigma from partners, the local community, and society at large makes women less likely to initiate PrEP and deters them from engaging with health-care systems. Effective strategies to integrate HIV prevention into drug treatment programs must take these realities into account so that women do not have to choose between seeking SUD treatment and other priorities, such as childcare. Providers in our study were concerned about overwhelming women if they introduced PrEP during early recovery and treatment engagement. Overwhelming messaging may compound women's mistrust in providers or health systems, rendering providers potentially less effective messengers about PrEP than peer (or "socially concordant") educators. Extant preliminary data on PrEP peer navigators come from men who have sex with men, and further research is needed on PrEP peer navigators for women. 40 Providers also identified that women's lack of action planning and impulse control could be major impediments to health-care engagement and medication adherence. This is especially relevant to PrEP uptake and adherence when most current formulations of PrEP are delivered as a once-daily medication. Because of lower concentrations of tenofovir in vaginal or cervical (as opposed to rectal) tissues, women require higher levels of PrEP adherence to achieve similar protective benefit.
Underestimations of women's potential to adhere may bias providers against providing PrEP to women with SUD. Similar biases are pervasive in HIV treatment, leading clinicians to defer antiretroviral therapy for people with SUD living with HIV and resulting in increased HIV-related morbidity and mortality. 41 Yet studies from HIV treatment have shown that people with SUD are able to adhere to medications and successfully achieve similar clinical outcomes as people without SUD when appropriate support, including drug treatment, is provided. 42 The same should extend to PrEP. This study is the first, to our knowledge, to qualitatively assess PrEP awareness among women with SUD and consider drug treatment centers as potential sites for PrEP outreach and dissemination for women. Findings may not be generalizable to other geographic settings, although qualitative studies generally aim instead for depth of experience. Some of our participants were older than most patients on PrEP and had had prolonged experience with SUD treatment. Because participants were aware that these interviews were part of an HIV prevention research project, there may have been some selection or reporting bias. All participants were either in treatment or affiliated with a single drug treatment provider and may differ from other women with SUD who are not currently in drug treatment. Conclusion Pre-exposure prophylaxis is a highly effective evidence-based HIV prevention tool for women with SUD who may lack social capital to negotiate condoms. HIV prevention is not solved with a PrEP prescription alone; housing support, mental health care, domestic violence resources, and accessible childcare are needed in addition to PrEP to comprehensively address the multiple contextual factors that increase women's risk for HIV and prevent engagement in prevention services. Fully integrating PrEP into drug treatment settings is key for reaching women with SUD.
Acknowledgments We appreciate the participation, guidance, and leadership of APT program staff, particularly Michelle Healy, Scott Farnum, Declan Barry, and Kim DiMeola. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. Funding The author(s) received no financial support for the research, authorship, and/or publication of this article. Supplemental Material Supplemental material for this article is available online.
Reconsidering the role of glycaemic control in cardiovascular disease risk in type 2 diabetes: A 21st century assessment Abstract It is well known that the multiple factors contributing to the pathogenesis of type 2 diabetes (T2D) confer an increased risk of developing cardiovascular disease (CVD). Although the relationship between hyperglycaemia and increased microvascular risk is well established, the relative contribution of hyperglycaemia to macrovascular events has been strongly debated, particularly owing to the failure of attempts to reduce CVD risk through normalizing glycaemia with traditional therapies in high‐risk populations. The debate has been further fuelled by the relatively recent discovery of the cardioprotective properties of glucagon‐like peptide‐1 receptor agonists and sodium‐glucose cotransporter‐2 inhibitors. Further, as guidelines now recommend individualizing glycaemic targets, highlighting the importance of achieving glycated haemoglobin (HbA1c) goals safely, the previously observed negative influences of intensive therapy on CVD risk might not present if trials were repeated using current‐day treatments and individualized HbA1c goals. Emerging longitudinal data illuminate the overall effect of excess glucose, the impacts of magnitude and duration of hyperglycaemia on disease progression and risk of CVD complications, and the importance of glycaemic control at or early after diagnosis of T2D for prevention of complications. Herein, we review the role of glucose as a modifiable cardiovascular (CV) risk factor, the role of microvascular disease in predicting macrovascular risk, and the deleterious impact of therapeutic inertia on CVD risk. We reconcile new and old data to offer a current perspective, highlighting the importance of effective, early treatment in reducing latent CV risk, and the timely use of appropriate therapy individualized to each patient's needs. 
| INTRODUCTION The increased risk of cardiovascular disease (CVD) in people with diabetes is well established, 1 and is now estimated to be between 1.6- and 2.3-fold. 2 The availability of effective cholesterol and blood pressure treatments facilitated a shift in the management approach for type 2 diabetes (T2D) from the glucocentric focus of decades ago, to one of CVD risk reduction. 3 Indeed, one study reported that targeting multiple risk factors in people with T2D reduced the risk of CVD and microvascular events by approximately 50%. 4 This resulted in the widely adopted "ABC" (glycated haemoglobin [HbA1c], blood pressure, and cholesterol) approach to T2D treatment, which coincided with the identification of metabolic syndrome as a cluster of glucose intolerance, hypertension, dyslipidaemia, and central obesity, with insulin resistance as the source of pathogenesis, 5 all of which increase the risk for developing T2D and atherosclerotic CVD (ASCVD). 6,7 More recently, the exceptional findings of cardiovascular outcome trials (CVOTs) demonstrating unequivocal reductions in CVD risk with the newer sodium-glucose cotransporter-2 (SGLT2) inhibitors [8][9][10][11][12] and glucagon-like peptide-1 receptor agonists (GLP-1RAs) [13][14][15][16][17] have perhaps overshadowed the place of glucose control in the ABC management of T2D, reinvigorating the debate on whether glucose control itself matters in the efforts to minimize cardiovascular (CV) risk. Despite the advances in treatment approaches and the availability of newer classes of therapy, data from the National Health and Nutrition Examination Survey have revealed that the proportion of people with T2D who achieved HbA1c <48 mmol/mol has not improved, and has actually declined over time, from 57.4% for 2007 to 2010, to 50.5% for 2015 to 2018.
18 Other estimates (2006-2017) suggest that the global glycaemic control target achievement rate is currently only 42.8%. 19 This is despite guidelines from the American Diabetes Association (ADA), 20 the American Association of Clinical Endocrinologists, 21 and the American College of Physicians (ACP), 22 which universally recommend achievement and maintenance of glucose control to reduce the risk of long-term complications. Therapeutic inertia-defined as "the failure to initiate or intensify therapy in a timely manner according to evidence-based clinical guidelines in individuals who are likely to benefit from such intensification" 23 -remains commonplace, resulting in years of unnecessary exposure to hyperglycaemia. With more devastating consequences, between 2007 and 2017, CVD affected approximately one-third of people with T2D across the globe, and caused one-half of all deaths, 24 a figure that is projected to increase in tandem with the increasing prevalence of diabetes. 25 There is therefore a need to evaluate the risk that uncontrolled glycaemia poses for CVD in people living with T2D and to mitigate this risk moving forward. This review aims to reconcile the debate on the role of glycaemic control in mitigating CVD risk, with a focus on chronic ambulatory care. We consider the effect of therapeutic inertia on CVD risk through reviewing the evidence that supports hyperglycaemia as a key risk modifier for the macrovascular complications associated with T2D. We compare the results of landmark T2D trials with those of the newer CVOTs in consideration of the improved outcomes with newer T2D therapies and the evolution of guidelines for the treatment of T2D.
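The targets above are quoted in IFCC units (mmol/mol), while much of the older literature cited later uses NGSP percent. As a practical aside (not part of the review itself), the two scales are related by the published NGSP-IFCC master equation; a minimal Python sketch of the conversion:

```python
# Conversions between IFCC (mmol/mol) and NGSP (%) HbA1c units,
# using the published NGSP-IFCC master equation:
#   NGSP(%) = 0.09148 * IFCC(mmol/mol) + 2.152

def ifcc_to_ngsp(mmol_per_mol: float) -> float:
    """Convert an IFCC HbA1c value (mmol/mol) to NGSP percent."""
    return 0.09148 * mmol_per_mol + 2.152

def ngsp_to_ifcc(percent: float) -> float:
    """Convert an NGSP HbA1c value (%) to IFCC mmol/mol."""
    return (percent - 2.152) / 0.09148

# The 48 mmol/mol threshold cited above corresponds to roughly 6.5%,
# and 53 mmol/mol to roughly 7.0%.
print(round(ifcc_to_ngsp(48), 1))  # 6.5
print(round(ifcc_to_ngsp(53), 1))  # 7.0
```

This makes it easier to compare the mmol/mol targets used here with percent-based targets such as the ACCORD goal of below 6%.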
In particular, the potential effect of early control of blood glucose in reducing macrovascular risk is discussed, and current recommendations for translating such insights into improvements in patient care to reduce the overall burden of diabetes-related complications are highlighted (Table 1). | THE CONSEQUENCES OF THERAPEUTIC INERTIA ARE WELL KNOWN, BUT WHY IS IT STILL SO PREVALENT? Findings from a systematic review revealed that the median time to treatment intensification ranges from 0.3 to over 7.2 years, increasing with the number of antihyperglycaemic agents used. 23 Initiation of injectable therapy is particularly challenging, with intensification to insulin therapy typically being delayed by more than 7 years, 26 and started only when HbA1c is 75 mmol/mol or above, dramatically decreasing the likelihood of achieving glycaemic targets. 27 Longitudinal studies have highlighted the increased CVD risk with increasing glycaemia, 28 including within the normoglycaemic range. 29 Coupling this fact with the demonstration that delaying treatment intensification by just 1 year in people with T2D and HbA1c 53 mmol/mol or above increases the risk of myocardial infarction (MI), stroke and heart failure (HF) at 5.3 years by 67%, 51% and 64%, respectively, 30 it is worthwhile to consider the potential reasons for therapeutic inertia. Multiple reasons have been cited as contributing to therapeutic inertia 26 ; these are categorized as either patient-level factors (eg, perceptions about medication use and side effects), provider-level factors (eg, competing demands, discomfort or lack of familiarity with new medications, delays in guideline adoption), or health system factors (eg, cost and access). We believe that debate around the evidence and differing interpretations of existing evidence (eg, by different guidelines and professional societies) may contribute to therapeutic inertia within the broader society, and we explore this further here.
TABLE 1 Key take-home messages and clinical perspective: (1) Microvascular disease predicts macrovascular disease; achieving and maintaining glycaemic control plays a critical role in reducing microvascular and macrovascular complications for people with T2D. (2) Separately, and independent of glycaemic control, agents from the SGLT2 inhibitor and GLP-1RA classes have been shown to reduce CV risk in individuals with established CVD or at high risk of CVD. Although the UK Prospective Diabetes Study (UKPDS) definitively linked intensive glucose control with a reduction in CVD risk and mortality in people with T2D, 31,32 the delayed macrovascular risk benefit was not observed until after 10 years, at which time the between-group glycaemic differences had been lost (Figure 1A,B). 32 These results were similar to the "legacy effect" (metabolic memory) observed in the Diabetes Control and Complications Trial (DCCT), whereas the ADVANCE, ACCORD and VADT studies did not demonstrate a macrovascular benefit of intensive glycaemic control. [34][35][36] However, importantly, unlike the UKPDS, the ADVANCE, ACCORD and VADT studies enrolled people with poorly controlled T2D of long duration who had established CVD or additional risk factors. [34][35][36] Furthermore, these intensive treatment studies extensively employed pharmacological approaches associated with increased risk of hypoglycaemia (eg, sulphonylureas; insulin) and weight gain (sulphonylureas; insulin; thiazolidinediones), and aimed for what some might consider nonphysiological glycaemic targets (eg, HbA1c < 42 mmol/mol) based on the therapies available at the time. To illustrate, hypoglycaemic episodes requiring medical assistance occurred in 10.5% of individuals in the intensive treatment group of the ACCORD study, 37 and in 21% of those in the VADT, 38 with severe hypoglycaemia being associated with increased occurrence of CV events in the subsequent 3 months. Significant weight gain also occurred in these studies, with nearly 28% of participants in the intensive treatment group of the ACCORD study gaining more than 10 kg over the course of the study.
37 This level of hypoglycaemia and weight gain would not be accepted in current clinical trial design or care and likely reflects a mismatch between therapeutic goals and therapeutic modalities available at the time of the study. While primary care-focused guidelines have appropriately emphasized blood pressure and lipid management for CV risk reduction in people with T2D, the interpretation of these treat-to-target trials has paradoxically led to clinical recommendations for less stringent glycaemic targets, and for deintensification of therapy. The ACP, for example, in 2018 issued its updated recommendations that "Clinicians should aim to achieve an HbA1c level between 53 and 64 mmol/mol in most patients with type 2 diabetes" and that "Clinicians should consider deintensifying pharmacologic therapy in patients with type 2 diabetes who achieve HbA1c levels less than 48 mmol/mol". 22 These conflict with recommendations from diabetes societies (eg, the ADA, ADA-European Association for the Study of Diabetes [EASD] Consensus) 39 that promote individualized targets, but generally recommend an HbA1c target of lower than 53 mmol/mol in most adults with diabetes, with consideration of lower goals for those in whom they can be achieved safely without significant hypoglycaemia or adverse effects of treatment, and without deintensification of treatment when lower targets can be safely achieved. Such discrepancies in recommendations probably contribute to a level of uncertainty in the management of care and introduce a dimension of therapeutic inertia at the societal level. It is important to reconcile these conflicting messages and highlight the importance of both glycaemic control and therapeutic approach to achieve the optimal treatment for each individual with T2D. 
Adding to the debate on the relevance of glucose control, recent CVOTs have shown unequivocal benefits of both SGLT2 inhibitors and GLP-1RAs in reducing CVD events in people with T2D who have, or are at high risk for, ASCVD. Notably, these CVOTs did not have intensive glycaemic targets and were conducted on background treatment that comprised current standards of care. Improvements in the risk of composite major adverse CV events (MACE), hospitalization for heart failure (HHF), and renal outcomes 8,9 with SGLT2 inhibitors were observed in as little as 3 months. [10][11][12] Similarly, GLP-1RAs proved particularly effective in reducing ASCVD outcomes, including MI and stroke. [13][14][15][16][17] While these predominantly high-risk populations are not representative of many patients with T2D who have multiple CVD risk factors without established CVD, a small number of CVOTs (DECLARE-TIMI 58: 59% without CVD 10 ; REWIND: 69% without CVD 15 ) demonstrated benefits of SGLT2 inhibitors and GLP-1RAs even in those without established ASCVD. The benefits of some agents have been shown to be independent of baseline glycaemic status, 40 potentially fuelling the debate on whether glycaemic control is necessary in the quest to reduce CVD risk. | THE PRESENCE OF MICROVASCULAR DISEASE PREDICTS CVD RISK Although it is widely acknowledged that blood glucose control lowers microvascular risk, 31,32,34,36,41 because of the complex relationship between glycaemia and macrovascular events, the potential benefit of blood glucose control on macrovascular risk remains an area of great debate. Although the temporal relationship between microvascular disease and macrovascular risk remains poorly understood, clinical data strongly support the role of microvascular disease in predicting macrovascular risk. A bidirectional interaction between the macro- and microvasculature is known to exist, which maintains a deleterious relationship between diabetes and the circulatory system.
For example, increased large artery stiffness accentuates pulse waves, causing microvascular damage. 42,43 Similarly, abnormalities in microvascular structure and function may increase the risk of macrovascular events. 42 Also, neovascularization of the vasa vasorum is increased in people with versus without diabetes, which precedes endothelial dysfunction and increases plaque inflammation. 44,45 The presence of microvascular complications predicts CVD and coronary artery disease death in individuals with T2D but without CVD. [46][47][48] Diabetic retinopathy has been associated with an excess risk of HF 49 and CVD, 50 and is proposed to be caused by microangiopathy. 58 | RECONCILING OLD AND NEW DATA-GLUCOSE IS A MODIFIABLE FACTOR FOR CVD RISK Risk factors for CVD are numerous, with multiple classic factors-age, sex, obesity, dyslipidaemia, hypertension-and also more recently identified factors-oxidative stress, epigenetics, inflammation, and endothelial dysfunction-being linked with T2D. 59 Together with metabolic syndrome, these factors are known to increase CVD risk. 60,61 Further, it has been shown that people with versus without T2D have higher atheroma volume, greater atherosclerotic plaque burden, and impaired compensatory positive remodelling of arteries. 62 Although the mechanisms for these changes have not been completely characterized, many studies support a role for hyperglycaemia. Results of a large retrospective analysis of the US Veterans Affairs Healthcare System showed a linear relationship between increased CVD mortality and mean HbA1c levels higher than 53 mmol/mol versus HbA1c levels of 42 to 52 mmol/mol. 
63 Similarly, a study of demographically adjusted models showed that, compared with an HbA1c lower than 31 mmol/mol, an HbA1c level higher than 36 mmol/mol was associated with an increased risk of ASCVD, an HbA1c level higher than 44 mmol/mol with an increased risk of chronic kidney disease, and an HbA1c level of 53 mmol/mol with an increased risk of HF, suggesting that a significant gradient of risk exists across HbA1c levels well below the diagnostic cutoff for diabetes. 29 The above observations concur with physiological findings that moderate hyperglycaemia (11.1 mmol/L) impairs endothelial cell function, thus augmenting vasoconstriction and promoting inflammation, thrombosis 64 and vascular damage [65][66][67] (Figure 3). Hyperglycaemia also directly affects both the microvasculature and macrovasculature by causing phenotypic switching of vascular smooth muscle cells to an activated state 68 or to foam cells, 69 resulting in an increased inflammatory response, 70 B-cell activation, and epigenetic changes that persist even after return to normoglycaemia. 71 In the heart, hyperglycaemia can cause vascular changes independently of atherosclerosis, resulting in the accumulation of advanced glycation end-products which, together with proinflammatory cytokines and chemokines, recruit leukocytes to the vascular endothelium, causing fibrosis. 72 Postprandial hyperglycaemic excursions also augment oxidative stress, systemic inflammation, and endothelial dysfunction, all of which contribute to atherosclerosis and CVD risk. 73,74 While not designed to study the value of glucose control in reducing CVD risk, 8,75-77 mediation analyses of GLP-1RA studies suggest that the lower CVD risk with this class tracks with their glycaemic effect (possibly in addition to other associated factors). 78,79 For example, an exploratory analysis of the LEADER trial suggested that mean HbA1c is a potential mediator of the CV protective effect of liraglutide. 
78 Likewise, an analysis of the REWIND study reported that 50% to 60% of the reduction in stroke risk with weekly dulaglutide 1.5 mg was related to glucose reduction. 80 Further, a meta-analysis of dipeptidyl peptidase-4 inhibitor, GLP-1RA, and SGLT2 inhibitor CVOTs demonstrated a significant association between HbA1c and MACE risk, 81 predicting a 33% reduction in MACE if all CVOTs achieved an HbA1c reduction of 9.8 mmol/mol. The authors noted that the only CVOT to achieve an HbA1c reduction of this magnitude was SUSTAIN-6 (9 mmol/mol), which had an associated 26% reduction in MACE risk in a population composed largely (83%) of individuals with established CVD. 14,81 This consideration contrasts with those in another recently published article, in which the authors concluded that because of the benefits shown by some of these agents in people without diabetes, the MACE benefits of GLP-1RAs and SGLT2 inhibitors are "exclusive of their glucose-reducing actions", 82 querying whether glucose reduction perhaps prevents early atherosclerosis, but not the final processes leading to CVD events. Parallels can be drawn with the landmark trials. Despite initial reports of a lack of benefit of intensive glycaemic control in ACCORD, post hoc analyses have produced findings more consistent with the UKPDS, in that participants without prior CVD or those with baseline HbA1c less than 64 mmol/mol who received intensive treatment had fewer CV events than those receiving standard therapy. 35 Of those who received intensive treatment, only those with mean baseline HbA1c above 69 mmol/mol were found to have a higher mortality risk, 83 with a higher mean on-treatment HbA1c being associated with increased mortality. 84 Several meta-analyses have shown an association between glucose control and reduction in CVD events.
Two meta-analyses that included data from the UKPDS, ACCORD, ADVANCE, and VADT trials showed that intensive versus conventional therapy reduced the risk of MACE by 9%, nonfatal MI by 15%, 85 and CVD by 10%. 86 Another two meta-analyses that included data from the UKPDS, ACCORD, ADVANCE, VADT and PROACTIVE trials reported that a decrease in mean HbA1c of 9.9 mmol/mol with intensive therapy reduced the likelihood of CVD events by 11%, MI by 14% to 17%, and coronary artery disease by 15%. 87,88 Other findings suggested that intensive glucose control was associated with a 10% to 15% reduction in nonfatal MI. 89 | THE LEGACY EFFECT: A MATTER OF TIMING AND EARLY GLYCAEMIC CONTROL The opportunity to meaningfully reduce CVD risk during the early stages of T2D fits with the findings of the UKPDS, which reported a benefit of intensive treatment on CVD endpoints in individuals with newly diagnosed T2D who were younger (mean age 53 years) than participants of other trials. 90 Also supportive of this hypothesis, findings of a meta-analysis suggested that the benefit of intensive glycaemic control on macrovascular risk was particularly prominent in younger people with shorter duration of diabetes. 91 In further agreement, the use of intensive glycaemic control in military veterans (mean age 60.4 years) with T2D diagnosed a mean of 11.5 years earlier, 40% of whom had a prior CVD event, did not improve the rates of MACE, death, or microvascular complications (except for progression of albuminuria). 34 Related to this, even the presence of prediabetes is known to be associated with substantial CVD risk. 29 Although results of the Diabetes Prevention Program/Diabetes Prevention Program Outcomes Study showed that metformin decreased coronary artery calcification in men with prediabetes, 92 recently published findings confirmed that the use of metformin did not reduce the occurrence of nonfatal MI, stroke, or CV death.
93 Results of the VA-IMPACT study, which was also designed to assess whether metformin can reduce mortality and CVD morbidity in people with prediabetes and established ASCVD, are awaited. 94 The importance of early blood glucose control on the risk of later complications is highlighted by several longitudinal studies. The Diabetes and Aging study 95 showed that early glycaemic control (HbA1c < vs. >) was associated with a lower risk of microvascular complications, macrovascular complications, and mortality, which persisted for 7 years. [FIGURE 3 Contributory mechanisms of hyperglycaemia to vascular and kidney disease. ASCVD, atherosclerotic cardiovascular disease; NO, nitric oxide; SMC, smooth muscle cells.] Newer data from a control-matched cohort of individuals with T2D from the Swedish National Diabetes Register 96 revealed that among five risk factors (elevated HbA1c, elevated low-density lipoprotein cholesterol, albuminuria, smoking, and elevated blood pressure), an HbA1c level outside of the target range was consistently the most important risk marker/predictor for stroke and acute MI, although it was not a predictor for death or HHF. Follow-up of the DCCT demonstrated that for the same average HbA1c over 20 years, reaching goal early versus late was associated with a 33% reduction in the risk of CVD and a 52% reduction in estimated glomerular filtration rate worsening. 97 In reviewing DCCT/EDIC and UKPDS data, the same group concluded that the concepts of metabolic memory for type 1 diabetes and the legacy effect for T2D are likely to be biologically similar, endorsing use of early intensive therapy to maintain normal glycaemia for as long as possible to limit the risk of complications. 98 A unique challenge in T2D management is the high rate of natural progression of disease, even despite therapy.
This is highlighted by follow-up of UKPDS participants, which showed that maintenance of target glycaemic levels declined markedly over 9 years, with only 24% of those who received sulphonylurea monotherapy achieving a fasting plasma glucose (FPG) level lower than 7.8 mmol/L, and 24% achieving HbA1c lower than 53 mmol/mol. 99 Whether higher-efficacy therapies such as GLP-1RAs can affect the natural course of T2D is not known, although it is plausible that higher-efficacy approaches, and approaches that dually support glucose and weight reduction, will help alter the natural course of T2D and prolong control. 100 6 | IMPROVING EARLY GLYCAEMIC CONTROL: A ROLE FOR COMBINATION THERAPY? Whereas traditional approaches have used stepwise, sequential addition of therapy, recent data suggest that the use of early combination therapy may achieve and sustain glycaemic control more effectively. Indeed the recently updated ADA Standards of Care 101 support the use of initial combination therapy for either more rapid attainment of glycaemic goals 102,103 or longer durability of glycaemic effect, 104 recommending that "initial combination therapy should be considered in patients presenting with HbA1c values 15.9 mmol/mol above target." 101 Furthermore, results from the VERIFY study confirmed that initial metformin/vildagliptin combination therapy in people with newly diagnosed T2D resulted in better long-term glycaemic control than metformin monotherapy 105 (a 49% reduction in the time to initial treatment failure) and also reduced the risk of time to secondary treatment failure by 26%. 105,106 Although the trial was not powered to assess CV outcomes, early combination therapy was associated with a numerically longer time to first adjudicated macrovascular event than metformin monotherapy. 105 However, it is important to note that, in this study, 40% of people who received metformin monotherapy had no treatment failure after 5 years. 
As such, it is possible that initiation of dual treatment in this population could represent overtreatment. That said, the use of combination therapy later in the course of diabetes has been shown to impact the durability of glycaemic effect. The results of the DUAL VIII study showed that after failure of oral therapy, treatment with the basal insulin/GLP-1RA fixed-ratio combination IDegLira was associated with longer time to treatment intensification versus insulin glargine 100 U/mL (median >2 vs. 1 year) with fewer participants requiring treatment intensification over 104 weeks (37% vs. 66%). 104 Further analysis confirmed greater reduction in HbA1c with lower hypoglycaemia rates for the fixed-ratio combination compared with basal insulin alone. 107 | HYPOGLYCAEMIA/GLYCAEMIC VARIABILITY AS A MODIFIABLE CVD RISK FACTOR The ACCORD study was the first to identify increased mortality associated with intensive glycaemic goals of lower than 6% in high-risk patients with T2D. 35 Although severe hypoglycaemia was associated with increased risk of mortality, a post hoc analysis showed that the risk of death was in fact lower for those who received intensive versus conventional therapy, 108 which could reflect the increased risk of hypoglycaemia in older adults with diabetes. Further, another analysis of the ACCORD study demonstrated that the risk of mortality in the subset of individuals who received intensive control increased linearly with HbA1c (from 42 to 75 mmol/mol) and was highest in those unable to achieve target glycaemia (HbA1c < 53 mmol/mol). 84 Separate analyses have confirmed that intensive therapy increases the risk of severe hypoglycaemia by two- to threefold [85][86][87][88][89] and a meta-analysis showed a correlation between risk of severe hypoglycaemia and CVD death with intensive therapy.
87 Collectively, these results suggest that intensive glucose control may reduce CVD events in people with T2D, but this needs to be balanced against CV events associated with severe hypoglycaemia. It is pertinent to note that these trials were conducted some time ago using intensive treatment modalities that do not reflect the current standard of care. Increases in hypoglycaemia rates are not observed with SGLT2 inhibitors and GLP-1RAs; [10][11][12]109 furthermore, guidelines now recommend individualizing glycaemic targets, highlighting the importance of achieving HbA1c goals safely. 20 As such, these negative influences on CVD risk might not be present if trials were repeated using current-day treatments. How glycaemic control is monitored and assessed also has considerable potential to guide advancement of therapy and more fully address the relationship between glycaemia and CV risk. Although HbA1c reflects the average glucose concentrations over a 3-month period, it does not account for day-to-day glucose variability, which is proposed to be more deleterious to CV health than the average change in HbA1c over time. 82,110 Indeed, data from the FinnDiane study in people with type 1 diabetes showed that HbA1c variability rather than mean HbA1c better predicted CVD events. 111 Similarly, the ALLHAT study 112 revealed that increased visit-to-visit variability of FPG is associated with increased mortality risk in individuals without CVD. High variability in HbA1c has been shown to be associated with increased risk of MACE and all-cause mortality, even in individuals with no history of diabetes or CVD. 113 Moreover, it is known that HbA1c is contributed to by both FPG and postprandial glucose (PPG). 114 Epidemiological studies suggest that PPG is an independent risk factor for CVD and MI, in people both without 115 and with diabetes. 
116 In people with T2D, PPG (but not FPG) has been shown to be a predictor of CVD-related mortality. 117 Hypoglycaemia with blood glucose lower than 3.9 mmol/L (time below range; TBR) should be limited to less than 4% (1 hour) of the day, and with blood glucose lower than 3.0 mmol/L to less than 1% (15 minutes) of the day. Hyperglycaemia with blood glucose higher than 10 mmol/L or higher than 13.9 mmol/L should be limited to less than 25% (6 hours) and less than 5% (1 hour 12 minutes), respectively, of each day. Because of the increased risk of hypoglycaemia in older adults, the updated version of the guidelines recommends lowering the time-in-range (TIR) target from greater than 70% to greater than 50% and reducing TBR to less than 1% at blood glucose levels lower than 3.9 mmol/L to place greater emphasis on reducing hypoglycaemia and less emphasis on maintaining target glucose levels. 122 Metrics from continuous glucose monitoring, including TIR, time above range, and time below range, can facilitate the safe achievement of glycaemic control and mitigation of the risks associated with hypoglycaemia. Several studies have determined that a decrease in TIR is strongly associated with an increased risk of microvascular complications, including microalbuminuria and retinopathy, 123 peripheral neuropathy, 124 as well as an increased risk of macrovascular disease. 125 Furthermore, lower TIR has been shown to be associated with an increased risk of all-cause and CVD mortality among patients with T2D. 126 PEER REVIEW The peer review history for this article is available at https://publons.com/publon/10.1111/dom.14830. DATA AVAILABILITY STATEMENT Not applicable.
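The time-in-range bands described above lend themselves to straightforward computation from continuous glucose monitoring data. As an illustrative sketch (not from the review; readings and thresholds in mmol/L follow the consensus bands quoted above), the fraction of readings in each band can be computed as:

```python
# Illustrative sketch: computing consensus CGM metrics from glucose readings
# in mmol/L. Bands follow the targets described in the text:
# TIR 3.9-10.0 mmol/L, TBR <3.9 (level 2: <3.0), TAR >10.0 mmol/L.

def cgm_metrics(readings_mmol_l):
    """Return the fraction of readings in each consensus glucose band."""
    n = len(readings_mmol_l)
    if n == 0:
        raise ValueError("no readings")
    tir = sum(3.9 <= g <= 10.0 for g in readings_mmol_l) / n
    tbr = sum(g < 3.9 for g in readings_mmol_l) / n
    tbr_level2 = sum(g < 3.0 for g in readings_mmol_l) / n
    tar = sum(g > 10.0 for g in readings_mmol_l) / n
    return {"TIR": tir, "TBR": tbr, "TBR<3.0": tbr_level2, "TAR": tar}

# Hypothetical day of readings (values invented for illustration):
day = [5.2, 6.8, 9.1, 11.4, 7.5, 3.6, 4.4, 8.0, 10.5, 6.1]
metrics = cgm_metrics(day)
print(metrics["TIR"])  # 0.7 -> meets the >70% TIR target for most adults
```

With evenly spaced readings, these fractions translate directly into the hours-per-day figures quoted in the text (eg, a TBR limit of 4% corresponds to roughly 1 hour per day).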
The Resident-intruder Paradigm: A Standardized Test for Aggression, Violence and Social Stress This video publication explains in detail the experimental protocol of the resident-intruder paradigm in rats. This test is a standardized method to measure offensive aggression and defensive behavior in a semi natural setting. The most important behavioral elements performed by the resident and the intruder are demonstrated in the video and illustrated using artistic drawings. The use of the resident intruder paradigm for acute and chronic social stress experiments is explained as well. Finally, some brief tests and criteria are presented to distinguish aggression from its more violent and pathological forms. Introduction Aggressive behavior belongs to the natural behavioral repertoire of virtually all animal species. From a biological point of view, aggressive behavior can be considered as a highly functional form of social communication aimed at active control of the social environment. It is characterized by a set of species-typical behaviors performed in close interaction with the opponent. Overt aggression and physical conflicts are potentially harmful not only for the victim but for the aggressor as well. Therefore, throughout the animal kingdom, mechanisms have been developed to minimize and control physical aggression in order to prevent its potentially adverse consequences. Such mechanisms include, for example, threatening behavior that often predicts and may thereby prevent physical attacks. Other mechanisms to keep aggression in control are taboos, ritualization, submission, reconciliation and appeasement. This holds in particular for offensive aggression, which is a form of aggressive behavior characterized by initiative of the offender and a range of introductory, often threatening, behavioral displays before attempting to reach the back and neck as non-vulnerable targets for the consummatory aggressive attack bites. 
Despite such highly adaptive control mechanisms, examples exist of functional aggression turning into violence, which can thus be defined as an injurious form of offensive aggression that is out of control and out of context; it is a pathological form of offensive behavior that is no longer subject to inhibitory control mechanisms and that has no additive functional value to normal aggressive behavior in social communication 8 . Violence thus differs both quantitatively and qualitatively from normal adaptive offensiveness. This may include bite attacks targeted at vulnerable body parts such as the throat, belly and paws, which are normally off limits 5,13,20,24 . Defensive aggression is the form of aggressive behavior performed in response to an attack by another individual. It is distinctly different from offense in terms of its behavioral expression and inhibitory controls 5 . Note that extreme forms of defensive behavior can have violent characteristics. Much of the preclinical aggression research is conducted in territorial male resident rats or mice confronting an intruder conspecific. This so-called resident-intruder paradigm allows the spontaneous and natural expression of both offensive aggression and defensive behavior in laboratory rodents in a semi-natural laboratory setting. By recording the frequencies, durations, latencies and temporal and sequential patterns of all the observed behavioral acts and postures in the combatants during these confrontations, a detailed quantitative picture (ethogram) of offensive (resident) and defensive (intruder) aggression is obtained. For extensive descriptions of the various behaviors see 3,12,18 . The paradigm is based on the fact that an adult male rat will establish a territory when given sufficient living space. Territoriality is strongly enhanced in the presence of females and/or sexual experience 1 . As a consequence of territoriality, the resident will attack unfamiliar males intruding in its home cage.
Hence, offensive aggression can be studied by using the resident as the experimental animal. To determine the violent nature of aggression, one can assess whether the offense is out of context and no longer under inhibitory control by using different types of intruders, such as females or anesthetized males, or a novel environment. A detailed quantitative analysis of the offensive behavioral repertoire is required to reveal to what extent the observed aggression is out of control. The intruders in the resident-intruder paradigm will show defensive behavior in response to the offensive attacks by the resident. The paradigm therefore also allows one to study defensive behavior and social stress by using the intruder as the experimental animal. A form of chronic social stress can be created by repeatedly using the experimental animal as intruder, or by housing it in the cage (territory) of the resident, separated by a wire mesh screen 4 . Like any kind of stress paradigm, the resident-intruder paradigm is not free from ethical concerns. We therefore want to present a number of ethical considerations. Aggression, violence and social stress are serious problems in our human society. A report of the World Health Organization shows that interpersonal violence is not only a major source of death worldwide, but also a major source of serious health problems in the surviving victims of aggression 19 . Hence, there is a need to understand these behaviors in terms of their underlying causal mechanisms and modulating factors. Animal models are essential to obtain experimental support for the causal nature of physiological and environmental factors. From a biological point of view, aggression is a natural, biologically functional form of social behavior aimed at the establishment of a territory, social dominance and defense of resources.
The resident-intruder paradigm brings this natural form of behavior into a laboratory setting that allows for controlled studies of both the aggressor and the victim. An issue of ethical concern is the question to what extent the animal's welfare is compromised when exposed to this paradigm. Several studies show that engaging in offensive aggressive behaviors and winning a fight are highly rewarding and/or reinforcing 11 . From that perspective, there is no suffering in the resident. However, an aggressive interaction requires an opponent as well. Defensive behavior and submission belong to the natural repertoire to cope with dominance. Defeat and subjugation trigger an adaptive behavioral and physiological response aimed at adopting a subordinate position in a social group. From that point of view, the initial response during the defeat will lead to a well-adapted animal that does not necessarily suffer 15 . Only repeated exposure to a dominant animal, combined with social isolation after the defeat, may lead to a condition that goes beyond the adaptive capacity of the animal. This makes the paradigm suitable as an animal model for stress pathology with a high ecological validity 17 . Although the stress of social defeat is mainly of a psychosocial nature, physical harm and injury may occur. In a normal (non-violent) social interaction, this physical harm is limited. Biting occurs mainly at the back and flanks of the opponent, an area of the body with a thick and tough skin 5,6 . Biting is in fact brief nipping of the skin, leaving behind small imprints of the incisors. This type of skin damage does not require any veterinary care. However, biologically functional aggression may change into a pathological, violent form of aggression which is out of control and out of context. In these situations, more serious wounds, inflicted in particular at vulnerable body regions (belly, throat and paws), may occur 14 .
To be clinically relevant, experimental model systems for violent aggressive behavior need to be valid, and this poses the central ethical dilemma of this type of aggression research, namely harm and injury. Two countervailing principles govern this research: face validity is achieved when the behavior is potentially harmful and injurious, yet, at the same time, every ethical research guideline emphasizes the reduction and avoidance of the risk of being harmed or injured. Each research question and protocol needs to probe how much harm and injury is necessary or acceptable to generate scientifically valid information that can be translated into concerns of the public health system. When research on violence is the main aim of the experiment, it is self-evident that great care should be taken of the victim in terms of wound care or even euthanasia. The presence of serious wounding at vulnerable body regions should be the humane endpoint for the intruder. When social stress is the main aim of the experiment, the interaction should be stopped when the resident shows signs of violence causing serious bite wounds at vulnerable body parts. After all, the psychosocial nature of the stress paradigm should not be mixed with the stress of severe physical injury. When residents show these signs of violence they should be excluded from the experiment.
The Experimental Setup
1. Use for each resident a cage with a floor space of about half a square meter. Offensive aggression is a highly active form of behavior that requires sufficient space for its full expression. The cage should be made of sanitizable material.
2. House each resident with a female for at least one week before the start of the experiments, which will facilitate the development of territoriality. At the same time this will prevent social isolation, which is known to be stressful for social animals and may lead to reduced welfare and aberrant forms of social behavior.
3.
Use companion females that are sterilized by ligation of the oviducts. In this way, the female stays hormonally intact and will be regularly receptive without becoming pregnant and developing maternal aggression.
Procedure
1. House the resident male and the companion female together in the resident cage for at least one week prior to testing.
2. Do not clean the bedding of the cage during that initial week or prior to later testing, since territoriality is strongly based on the presence of olfactory cues. These cues are important both for the resident in establishing its own territory and for the intruder to know that it is in the home cage of the resident. Please note that this deviation from the general animal caretaking procedures may require special permission from the authorities.
3. Remove the companion female from the residential cage one hour before the test.
4. Introduce an unfamiliar male into the home cage of the resident at the start of the test. Preferably, the intruder should be slightly smaller than the resident and should not have been used in previous interactions with the same resident.
5. Record the behavior of the resident, preferably using a light-sensitive video camera.
6. A test duration of 10 min is usually sufficient for the expression of the full offensive behavioral repertoire. For the purpose of standardization, one may consider continuing to record for ten minutes after the first attack.
7. After completion of the test, remove the intruder male from the cage and reunite the resident male with its companion female.
8. Although aggression may occur at all times of day, it is best to test only during the dark phase, the rats' main activity phase.
9. Testing can be performed once or twice per day. The level of aggressive behavior often increases across the first couple of tests but generally stabilizes after three to four tests.
1. In principle, any strain of rats can be used as residents.
However, strains may differ considerably in their absolute level of aggressive behavior. Moreover, there may be considerable individual variation within strains.
2. Standardize intruders as much as possible in terms of strain, age and weight. Use rats of a non-aggressive strain that are slightly lower in body weight than the resident male.
3. Determine in the resident male the duration and frequency of the following behavioral parameters:
Representative Results
There is a considerable variation between strains and within strains in the level of offensive aggressive behavior. This is demonstrated in Figure 1, which shows the frequency distribution of the offensive aggression score in a laboratory-bred but originally feral strain of rats (Wild Type Groningen strain (WTG)) (Figure 1a) and a more common strain of laboratory rats (Wistar, Figure 1b). In the WTG strain, about one third of the animals is extremely aggressive whereas another third shows little or no aggression. This is in contrast to the frequency distribution of the Wistar strain, in which the highly aggressive phenotype is absent and about fifty percent of the animals can be considered low- or non-aggressive 16 . Figure 2 shows the distribution of different behavioral categories in the resident-intruder paradigm with the WTG strain as residents (Figure 2a) and the Wistar rats as intruders (Figure 2b). Shown is the average composition of offensive behavior in the WTG resident rat and the average composition of defensive behavior in the Wistar intruder, in terms of the relative amount of time spent on the various behaviors. Figure 3 shows an example of the use of the resident-intruder paradigm in behavioral pharmacology. The selective serotonin-1A receptor agonist alnespirone induces a dose-dependent reduction in offensive aggression, which is accompanied by a dose-dependent increase in social exploration.
The absence of any significant effects on non-social exploration and inactivity supports the view that the behavioral effects of this compound are specific for offensive aggression 9 . In some individuals offensive aggression may escalate into a violent form of aggression. The distinction between high levels of aggression and violence is illustrated in Figure 4. Despite the fact that there is no statistical difference in offense score, the violent form of aggression is characterized by a very short attack latency, attack of an anesthetized male or a female, serious wounding and a very low threat/attack ratio 8 .
Discussion
The resident-intruder paradigm can be used to study offensive aggression, defensive behavior, violence and social stress in rats and, with some small modifications, in other rodent species as well. When studying aggression, in principle all rat strains can be used. However, strains are not equally suitable. Depending on the exact purpose of the experiment, some specific characteristics of the animals should be considered. It is important to note that there are large strain differences in the level and intensity of offensive aggression shown as a resident. Figure 1 shows the frequency distribution of offensive aggression in an originally feral rat strain (Figure 1a) and in a standard laboratory Wistar strain (Figure 1b). The two strains differ considerably in the number of animals that will show aggressive behavior at all. Moreover, there is a large difference in the absolute scale of offense. The feral rat strain ranges from zero up to 80% offense in our standard 10 min test, whereas the Wistar strain has a maximum of 25% offense; the highly aggressive phenotype is absent in this latter strain 16 . When the resident-intruder paradigm is used to study defensive behavior and social stress in the intruder, one needs these highly aggressive phenotypes as residents. After all, the resident has to reliably defeat any intruder entering his territory.
Of course, the most aggressive individuals should be chosen, and one should realize that even in a highly aggressive strain, not all individuals will be suitable for this purpose. True adult males of at least four months of age 10 should be used, and one may consider using former breeder males. It is recommended to give the resident males some further experience with intruders during the days before the actual start of the social stress experiment. Any strain can be used as intruder. However, to guarantee clear winning by the resident and defeat of the intruder, we advise using intruders with a slightly lower body weight than the resident male. Because olfactory cues are important in social communication and territoriality, cleaning the cage of the resident prior to testing will be a serious confounder. It is recommended to videotape and record the full behavioral repertoire of the experimental animal during the test. This allows an unbiased analysis of the results, i.e., when one behavior goes up, another is likely to go down. For example, the results depicted in Figure 3 show that a reduction in offense following drug treatment is accompanied by an increase in social exploration and not by immobility. This provides evidence that the drug-induced reduction in offense is not due to some kind of sedative or motor-inactivity effect. The total offense score is an index of the intensity of aggression, and the latency of the first attack and the number of attacking animals can be used as measures of the readiness to attack 22 . For social stress experiments one should have a clear criterion for social defeat. When the intruder adopts a submissive posture (see above) and stays in this posture even when the resident moves away, this is a reliable criterion for social defeat. Notice that housing conditions of the intruders are extremely important in social stress research.
First, the social stress should not be administered in the same room where the non-stressed controls are housed. Control animals witnessing (social) stress in other individuals may experience major stress themselves
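The quantitative measures this protocol relies on — the total offense score (percent of test time spent on offensive acts), the latency of the first attack as an index of attack readiness, and the threat/attack ratio used as one indicator of violence — can be computed mechanically from a scored ethogram. A minimal illustrative sketch follows; the behavior labels, the event-log format, and all function names are our own assumptions, not part of the published protocol:

```python
# Illustrative sketch: summary measures from a scored resident-intruder
# ethogram. Behavior names and the event-log format are hypothetical.

OFFENSIVE = {"lateral threat", "keep down", "clinch attack", "chase"}
TEST_DURATION_S = 600  # the standard 10-min test described in the protocol

def summarize(events):
    """events: list of (behavior, start_s, end_s) tuples, sorted by start."""
    # total time spent on offensive acts
    offense_time = sum(end - start
                       for behavior, start, end in events
                       if behavior in OFFENSIVE)
    attacks = [start for behavior, start, _ in events
               if behavior == "clinch attack"]
    threats = [start for behavior, start, _ in events
               if behavior == "lateral threat"]
    return {
        # intensity of aggression: percent of test time spent on offense
        "offense_pct": 100.0 * offense_time / TEST_DURATION_S,
        # readiness to attack: latency of the first attack (None if no attack)
        "attack_latency_s": attacks[0] if attacks else None,
        # a very low threat/attack ratio is one indicator of violence
        "threat_attack_ratio": len(threats) / len(attacks) if attacks else None,
    }

events = [("lateral threat", 40, 55), ("clinch attack", 60, 63),
          ("chase", 63, 70), ("clinch attack", 75, 78)]
print(summarize(events))
```

In practice such a summary would be computed per animal from video-scored event logs and then compared across tests or treatment groups.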
2016-08-09T08:50:54.084Z
2013-07-04T00:00:00.000
{ "year": 2013, "sha1": "c810cb601f164d02f61855f1b5ec6cfe6f35632f", "oa_license": "CCBYNCND", "oa_url": "https://www.jove.com/pdf/4367/the-resident-intruder-paradigm-standardized-test-for-aggression", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "f9b61171d113f45e2b050b7f60497fbb0c2cc15d", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
262128309
pes2o/s2orc
v3-fos-license
A qualitative study on hope in Iranian end stage renal disease patients undergoing hemodialysis Background End Stage Renal Disease (ESRD) patients undergoing hemodialysis are faced with serious problems in their lives. Hope, as a multifaceted factor, plays a critical role in these patients' lives. Given the multifaceted process of hope, this study aimed to describe hope and identify the challenges, strategies, and outcomes of hope in Iranian ESRD patients undergoing hemodialysis. Methods This is a qualitative study using content analysis. The participants were selected using purposive sampling. The data were collected using deep, semi-structured interviews with 14 participants; data collection continued until data saturation was reached. Graneheim and Lundman's content analysis approach was used to analyze the data. Results Five main categories and twenty-two subcategories emerged; the categories consisted of (1) Hope described as a particular event to happen, (2) Opportunities and threats to achieve hope, (3) Negative emotions as barriers to achieve hope, (4) Positive coping strategies to achieve hope, and (5) Growth and excellence as the outcomes of hope. Conclusions Based on the findings, ESRD patients undergoing hemodialysis described hope as a positive feeling of expectation and desire for a special thing to happen. They faced threats and opportunities to achieve hope, which exposed them to negative emotions as barriers to hope. Thus, they made use of positive coping strategies to achieve hope. Moreover, hope led to growth and excellence. By becoming aware of how patients define hope and of the strategies they use to achieve it, and by teaching these strategies, physicians and nurses working in hemodialysis wards can enhance hope in patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12882-023-03336-6.
Background End Stage Renal Disease (ESRD) is a chronic and life-threatening disease, the prevalence of which is increasing around the world [1,2]. ESRD is accompanied by disabilities, which reduce the patients' quality of life [3]. Additionally, this disease can lead to uncertainty, social activity disorders, and disruption of social relationships [4]. Thus, coping strategies should be used in this area. Hope, as a coping strategy, can be used to reduce these limitations [5]. ESRD patients undergoing Hemodialysis (HD) experience better outcomes if they are hopeful [6,7]. Hope is a process and source of compatibility and adaptation with chronic diseases [8]. It is a multidimensional, universal, and dynamic construct that helps the patient to adjust to the treatment of various diseases. In fact, hope is a cognitive process, in which individuals seek opportunities, pathways, and probable and specific goals [9,10]. Hope can help individuals to overcome the darkness resulting from the illness and have the ability to battle through adverse events [8]. It is a complicated process of thoughts, emotions, and functions that change over time and facilitates the achievement of goals and outcomes in the future [11]. Evidence has also indicated that hope is related to excellence, acceptance of compatibility [11], depression, mental health, religious beliefs [12], and spirituality [13]. Furthermore, social support and relationships with others have been found to be effective in the facilitation and maintenance of hope [14]. Moreover, more hopeful individuals benefit from higher satisfaction with their lives [15].
In a study, it was indicated that family function predicted hope in hemodialysis patients [16]. As mentioned above, most of the studies on hope in hemodialysis patients have been conducted quantitatively [11][12][13], and these studies could not fully explain the concept of hope with quantitative research. This means that the definition of hope remains unknown in hemodialysis patients, especially those living in Iran. Moreover, the challenges and concerns of hemodialysis patients and the strategies that these patients use to improve their hope are not well known around the world and in the context of Iran. In addition, with the existing views, it is difficult to explain the hemodialysis patients' challenges, concerns, and the strategies they use to improve their hope, especially in Middle East countries such as Iran. Therefore, conducting qualitative research is essential for knowing and explaining this concept and presenting a specific view of it [17]. Qualitative methods help healthcare workers to describe and understand the concept and obtain in-depth and rich insights [18] about hope. Based on the researchers' experience in the field of Iranian ESRD and hemodialysis patients, the number of these patients has increased during the last two decades. Therefore, exploring hope and the factors associated with it using a qualitative approach improves evidence-based practice in these patients. Thus, this qualitative study aimed to describe hope and identify the challenges, strategies, and outcomes of hope in Iranian ESRD patients undergoing hemodialysis.
Study design This research was conducted using a qualitative design and content analysis approach. Content analysis focuses on the lived experience, interpretations, and meanings encountered by individuals. Content analysis is usually used in the design of studies the goal of which is to explain a phenomenon. This type of design, unlike the directed method, is often suitable when the existing theories or research literature about the phenomenon under study are limited. In this case, researchers avoid using preconceived categories and instead arrange for the categories to emerge from the data [19]. Study setting The study was performed in hemodialysis centers in Nemazi and Shahid Dr. Faghihi hospitals affiliated with Shiraz University of Medical Sciences in Fars province in Southern Iran. The study participants included 14 males and females over 18 years of age selected through purposive sampling. The inclusion criteria for the patients were speaking and understanding the Persian language, experience of dialysis for at least 6 months, and age over 18 years. Patients in the acute phase of ESRD were excluded from the study.
Data collection The study data were collected through semi-structured interviews. Observation and field notes were utilized as well. The total data collection procedure lasted for two months. The interviews were performed in the Persian language by the first author, who was experienced in qualitative studies. The interviews started with the following open questions: "How do you describe hope in your life?", "How does hemodialysis and renal failure affect your hope?", "What factors are facilitators and barriers of hope in your life?", "What circumstances or conditions help you to increase or decrease your hope?", "What would be the outcomes of achieving hope?", and "How does achieving hope affect your life?" Then, probing questions were asked, and the patients' explanations were followed up (see supplementary file). These face-to-face interviews were individually conducted at the patients' bedside during hemodialysis. They lasted for 30-50 min. Interviews were done with 14 hemodialysis patients. Two patients were interviewed twice. Therefore, 16 interviews were used for data analysis. All the patients who met the inclusion criteria and were invited to the study accepted to participate. Data collection continued until we reached data saturation. All the interviews conducted with the participants were recorded and transcribed verbatim immediately after the end of the interview sessions. In fact, data analysis was done in parallel with data collection by all three researchers. For this purpose, the interview of each participant was first analyzed, and then the next interview was conducted.
Data analysis The data were analyzed based on the 5 steps of Graneheim and Lundman's method [20], including "writing down the entire interview immediately after conducting each interview", "reading the entire interview text to gain a general understanding of its content", "determining semantic units and primary codes", "categorizing similar primary codes into more comprehensive categories", and "determining latent content in the data". Thus, semantic units (words and paragraphs that included aspects related to each other) were first formed from the participants' speech. Then, codes were extracted from the meaning units. The codes were placed in subcategories, and similar subcategories were placed in one category (as shown in Table 1). MAXQDA 10.0 software was used to organize the data. Rigor In order to confirm the results and determine the study rigor, we used Lincoln and Guba's (1985) four criteria, namely credibility, dependability, confirmability, and transferability [21]. To gain data credibility, the researchers listened to the interviews carefully, immersed themselves in the data, selected a variety of patients, and performed the interviews where the participants felt comfortable. To ensure the confirmability of the data, the second and third authors of this study reviewed the extracted notes and codes; an external researcher who was familiar with qualitative studies was also employed. To verify the dependability and transferability of the findings, we recorded and reported the steps and process of the research step by step as accurately as possible. In terms of age and gender, the participants were deliberately selected to be diverse, which, in addition to dependability, also helped the transferability of the findings.
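The coding workflow described above — meaning units assigned primary codes, codes grouped into subcategories, and subcategories into categories — can be mimicked in a few lines of code. The toy codebook below is a small hypothetical excerpt for illustration only; the study's actual coding was done in MAXQDA 10.0:

```python
# Toy illustration of the coding step in qualitative content analysis:
# each primary code maps to a (category, subcategory) pair, and coded
# meaning units can then be tallied per category. The codebook entries
# here are a hypothetical excerpt, not the study's actual MAXQDA project.
from collections import defaultdict

codebook = {
    # code -> (category, subcategory)
    "thinking about positive events": (
        "Hope as particular things to happen", "Having positive thoughts"),
    "being recipient of kidney transplantation": (
        "Hope as particular things to happen",
        "Expectation and desire for future things"),
    "praying": (
        "Positive coping strategies to achieve hope",
        "Connection to transcendence"),
}

def tally(coded_units):
    """coded_units: list of (participant_id, code); returns counts per category."""
    counts = defaultdict(int)
    for _pid, code in coded_units:
        category, _subcategory = codebook[code]
        counts[category] += 1
    return dict(counts)

units = [("p11", "thinking about positive events"),
         ("p3", "being recipient of kidney transplantation"),
         ("p9", "praying")]
print(tally(units))
```

A dedicated package such as MAXQDA additionally tracks the source segment, memo, and coder for each coded unit; the sketch only shows the category bookkeeping.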
Results This study was conducted on 14 hemodialysis patients. There were both male and female patients, with a mean age of 32.5 years. Other demographic characteristics of the participating patients are summarized in Table 2. During the process of content analysis, five categories, namely hope described as particular events to happen, opportunities and threats to achieve hope, negative emotions as barriers to achieve hope, positive coping strategies to achieve hope, and growth and excellence as outcomes of hope, were extracted (Table 1). Hope described as particular things to happen The hemodialysis patients described hope as having positive thoughts; expectation and desire for future events; having goal-oriented thoughts; developing strategies to achieve goals; being motivated to expend effort to achieve goals; and having family support in the future. Having positive thoughts Having positive thoughts was reported by two participants (p 11, p 6). "To me, hope means thinking about positive happenings (p 11), thinking about future, ignoring the renal failure and seeing the future clearly… I describe hope as having good dreams (p 6)". Expectation and desire for future events Expectation and desire for future events was expressed by participant 3. "I describe hope as being healthy after hemodialysis, coming back home after hemodialysis, being recipient of kidney transplantation, being a healthy person, and having a good life without any disease complication (p 3)". Having goal-oriented thoughts, developing strategies to achieve goals, and being motivated to expend effort to achieve goals The patients described hope as having goal-oriented thoughts, developing strategies to achieve goals, and being motivated to expend effort to achieve goals. In this regard, participants 4, 6, 5 and 14 said, "To me hope means having goals in the life (p 4)". Having family support in future Having family support in future was described by the hemodialysis patients as the meaning of hope.
Opportunities and threats to achieve hope Perception of opportunities and threats to achieve hope was extracted from the participants' experiences. On one hand, the patients faced such threats as physical challenges, challenges in social interactions, educational failure, economic challenges, occupational restrictions, and familial conflicts in achieving hope. On the other hand, they pointed to the negative and positive effects of the disease and hemodialysis. Furthermore, spirituality alongside the positive impacts of treatment provided a great opportunity for facing the threats.
Table 1. Categories, subcategories, and meaning units (primary codes) of the factors related to hope in the patients under hemodialysis
Hope as particular things to happen
- Having positive thoughts: Thinking about positive events, thinking about future and ignoring the renal failure, seeing the future clearly, having good dreams
- Expectation and desire for future things: Being healthy after hemodialysis, coming back home after hemodialysis, being recipient of kidney transplantation, being a healthy person, and having a good life without any disease complication
- Having goal-oriented thoughts, developing strategies to achieve goals, and being motivated to expend effort to achieve goals: Having goals in the life, thinking about buying a house, getting a job in future, waiting for the children to go to the university, being successful in the family functions, job, social relationships and life in spite of renal failure and undergoing hemodialysis
- Having family support in future: Having a family that understands, loves, cares for and accepts the patients with all of their disease limitations, and living with the husband and children at the same home
Opportunities and threats to achieve hope
- Physical challenges: Changes in appearance like turning pale, swelling under the eyes, dark spots on the face, and skin color changes may affect the patients' hope
- Challenges in social interactions: Dialysis ward environment, presence of other patients, seeing other patients' problems while communicating with them, restrictions for choosing a place to travel, and challenging trips may affect the patients' hope
- Educational failure: Inability to continue education and physical and mental problems in the classroom may be correlated to hope
- Economic challenges: Losing one's properties for treating the disease, cost of referral to the hemodialysis center and transplantation, financial problems for following up the treatment process
- Occupational limitations: Job restrictions for patients, obligation to choose specific jobs, quitting one's job due to the disease, and impatience for finishing work may affect the patients' hope
- Familial conflicts: Challenges in choosing a spouse, being rejected by the family, and inability to fulfill one's role in the family affect the hemodialysis patients' hope
- Spirituality: Belief in a supreme power named God as the source of hope, belief in the purposefulness of the world and creation of living things, belief in God as the creator of life, belief in the fact that being thankful leads to blessing in life, and considering the disease as a divine destiny may increase the patients' hope
Negative emotions as barriers to achieve hope
- Depression: Impatience in doing personal and social activities, being intolerant and unmotivated, considering nothing as important in life, being isolated, feeling alone, crying, and a tendency to die are barriers to achieving hope
- Sadness: Being sad due to physical and social limitations is a barrier to achieving hope
- Uncertainty: Lack of confidence in being ready for transplantation and transplant rejection are barriers to achieving hope
- Anger and hate: Being nervous, getting angry at even simple stimulants, showing aggressive behavior, and hatred of dialysis are barriers to achieving hope
Positive coping strategies to achieve hope
- Positive solution-oriented strategies: Modification of goals and using positive and solution-oriented strategies to achieve hope
- Staying motivated: Making attempts, listening to music, grooming oneself, banishing negative emotions, and taking part in recreational activities to achieve hope
- Positive psychological constructs: Positive thinking, positive expectations, cheering oneself up, comforting oneself, and being thankful to achieve hope
- Supportive exchanges: Emotional, informational, instrumental, and spiritual supports to achieve hope
- Connection to transcendence: Talking to God, praying, hope in God, and complaining to God to achieve hope
Growth and excellence as outcomes of hope
- Improvement of well-being: Optimism, happiness, vitality, being energetic, gaining tranquility, health, building better social relationships with others, promotion of self-efficacy, and acceptance of the disease as outcomes of hope
- Finding the meaning of life: Purposefulness in life, finding order in life, following goals in life, being interested in life, recognition of oneself and God, self-care, adherence to treatment, understanding life, and love for continuation of life as outcomes of hope
Physical challenges The hemodialysis patients have physical challenges including skin and nail changes, fatigue, weakness, bone pain, difficulty sleeping, muscle cramps, nausea, drowsiness, etc. that affect the patients' hope. Patients have facial changes such as paleness, swelling under the eyes, spots on the face, changes in skin color and, in other words, a sickly face. In addition, the smell of urea can come out of their mouths. Sometimes, the accumulation of toxic substances in the body causes nausea and anorexia in the patient. In this regard, two participants (p 14, p 1) expressed: "When my kidney failed, I turned pale; there was swelling and edema under my eyes down to my cheeks, and I had pain all over my body. From then on, I haven't been able to sleep or sit; I suffer from shortness of breath; these may affect my hope" … (p 14); "We undergo dialysis 3 times a week. This causes our physical strength to decrease. I became very weak and do not drink water
and liquids as much as we want. I feel nauseous and I have no appetite…; I think these impact my hope" (p 1).

Challenges in social interactions
Hemodialysis clients are limited in their social activities because they depend on the hemodialysis machine 2 or 3 times a week. Moreover, because of ESRD, they have no physical energy and are not interested in communicating with others or participating in social activities. Based on the patients' reports, these challenges in social interactions may influence the patients' hope. Statements of the participants in this regard are presented below: "We have restrictions on commuting; for example, there must be a dialysis center in the place we intend to visit, so that it is close to us in terms of location…" (p 10); "We don't have as much free time as we want to be with friends and relatives. It means that due to our condition and dialysis, we cannot be together and achieve hope; that's why our communication has decreased…" (p 2).

Educational failure
Inability to continue their education and studies and physical and mental problems in the classroom are threats to the lives of ESRD patients undergoing hemodialysis and affect the patients' hope. They stated: "I was accepted into the university. I was not able to study. The disease ruined all my thoughts; I had no motivation and did not have patience for this work. I had no energy…, so how can I be hopeful?" (p 4); "I really could not go to class. You know, I had dialysis in the morning three days a week. I had to go to class in the evening, and I really could not sit in class. Every time I went, I was so sleepy and dull that I can't tell…" (p 13).
Economic challenges
The cost of going to and from the dialysis center and financial problems in following the treatment process are some of the economic challenges. Two participants (p 8, p 3) stated: "I sold my house and shop. In these 4 years that I have had ESRD, I spent all my family's money on treatment and doctor's fees…. These led to loss of hope during these years." (p 8); "I was at work for 12 years when I was transplanted, but when I got dialysis, I said I can't come, and I am unemployed now…." (p 3).

Occupational restrictions
Being limited in work, having to engage in certain tasks, being unable to work, giving up work at the onset of illness, and being impatient to finish work threatened the hemodialysis patients' hope. Two participants' expressions (p 12, p 11) are as follows: "We cannot choose any job. Many of us do not go to work after we get dialysis…" (p 12); "We are really having trouble finding a job. We are all involved in the hospital and tests, but our conditions do not allow us to find a job…" (p 11).

Familial conflicts
These patients have difficulty choosing a spouse. They face family rejection, family disputes, and inability to fulfill family roles. These may affect the patients' hope.

Spirituality
Five participants (p 5, p 7, p 9, p 11, p 13) believed in a supreme power called God as the source of hope. They believed in the purposefulness of the creation of living things, were thankful to God, and believed that the disease had come from God and that God would help them. In this regard, two participants (p 5, p 9) mentioned: "God wanted me to become sick. I know that the disease has come from God.
God wanted to tell me to be careful. I think the disease is an examination sent by God…" (p 5); "I know that every problem and pain that happen to me are from God. I say, God, I will accept whatever you give me. I say, God, you are right. If you have faith in God, you will find faith in yourself, and you will find purpose and hope in your life…" (p 9).

Negative emotions as barriers to achieve hope
Depression, sadness, uncertainty, anger, and hate were the main concerns of the patients and were barriers to achieving hope. These patients were isolated, felt lonely, cried, and talked about a tendency towards death. They were also unhappy with their physical and social limitations. Moreover, they were not certain about the future and disease complications. Some patients were nervous, lost their temper easily, and showed aggressive behaviors. They said that these negative emotions were the barriers to hope. Besides, some of them hated the treatment process.

Depression
These patients showed signs and symptoms of depression. They felt tired and lacked energy. They lost interest and pleasure in most or all normal activities. The patients also reported tearfulness and hopelessness. They felt worthless and had frequent thoughts of death. Two participants (p 13, p 10) stated: "The disease ruined my motivation and left me bored. I got tired of the disease, treatment, and its side effects…" (p 13); "The disease had a bad effect on my mood. I will not live anymore. There is no hope for me to survive. I should get a kidney and be transplanted. I do not want to undergo dialysis anymore. In the end, I am seeing death…" (p 10).

Sadness
Patients were upset about starting dialysis to treat the disease, not being with their family members during the week, and not being able to fulfill some of their tasks and roles for their children.
A participant said in this regard: "On days of dialysis, I can't go to a party with my family. Well, they certainly like us to be together, but I cannot. I get upset and sad… you know, it leads to loss of my hope" (p 12).

Uncertainty
Seven participants mentioned their uncertainty about whether they could get a transplant, as well as the complications associated with a transplant. One of them said: "I am undergoing dialysis. I am also a transplant candidate. I see people who were transplanted, and it did not work… Now they are undergoing dialysis. I tell myself that this problem should not happen to me. This uncertainty is frustrating… and leads to loss of my hope" (p 1).

Anger and hate
Patients have become nervous since the onset of the disease, very quickly become frantic at even a simple shock, and show aggressive behavior. In addition, hatred towards dialysis and the equipment of the dialysis department was evident as another negative emotional reaction. One of them said: "I hate dialysis and the equipment of the dialysis department. When I want to go for dialysis, I get very upset that day…" (p 3).

Positive coping strategies to achieve hope
Positive coping strategies refer to the behaviors shown by eight participants for solving their concerns in the process of hope. In fact, the participants used positive and solution-oriented strategies, stayed motivated, utilized positive psychological constructs, had supportive exchanges, and connected to transcendence to cope with negative emotions and achieve hope.

Using positive and solution-oriented strategies
The patients with ESRD tried to have purpose in their lives, modify their goals, and make use of positive and solution-oriented strategies to cope with their negative emotions and remain hopeful. In this regard, one of the participants stated: "I am counting the days to reach my goal. When a person has a goal in his/her life, he/she becomes hopeful. When there is hope in life, life becomes orderly…" (p 2).
Staying motivated
The patients with ESRD made genuine attempts to stay motivated, using constructive approaches such as making an effort, listening to music, grooming themselves, avoiding negative memories, and taking part in recreational activities to overcome negative emotions and achieve hope. In this regard, two participants stated: "When I think about my kidney, to avoid getting depressed, I stand in front of a mirror. I comb my hair, change my clothes, and wear perfume…" (p 7); "When I go to a recreational place with my family or friends, I get a lot of energy… my motivation and hope to continue life increase…" (p 9).

Using positive psychological constructs
To cope with negative emotions and achieve hope, the participants used positive psychological constructs including positive thinking, having positive expectations, cheering themselves up, comforting themselves, and being thankful. Statements of the participants in this regard are presented below: "See! I always try to see the glass half full. I tell myself that the disease is there, and I have to think positively about it…" (p 6); "I compare myself with patients with other diseases, such as cancer and thalassemia. Thalassemia patients have many appearance and physical problems. I console myself and say, thank God, there is dialysis. And we hope…" (p 14).

Supportive exchanges
Holistic support was found to be a strong strategy for coping with the disease and achieving hope. This included emotional, informational, instrumental, and spiritual support. Two patients' expressions (p 9, p 11) are as follows: "In this world, you can only count on your family… Your family gives you hope… Your family always stays by your side…" (p 9); "Whenever I have a question, I ask the hemodialysis center; there is also a counseling room here… I ask a lot of questions, and this helps me to reduce my problems…" (p 11).
Connection to transcendence
Connection to transcendence was another strategy used by the patients to achieve hope. This was done through talking to God, praying, having hope in God, and complaining to God. In this regard, three participants (p 9, p 5, p 10) stated: "I asked God to heal me. I easily talk to God…" (p 9); "I am a person who prays and even attends religious ceremonies. Well, this gives me peace. You know that God always cares about His servants… With this disease, my relationship with God has increased…" (p 5); "Now that I have this disease, I am closer to God… There is not a day that I do not wake up and call the name of God…" (p 10).

Growth and excellence as outcomes of hope
Using the above-mentioned positive coping strategies to improve hope and eliminate negative emotions led to growth and excellence. In other words, hope resulted in the improvement of well-being and finding meaning in life.

Improvement of well-being
Utilization of positive coping strategies led to optimism, happiness, vitality, being full of energy, tranquility, health, better social relationships with others, promotion of self-efficacy, and disease acceptance. In other words, using these strategies improved the patients' well-being.
Finding meaning in life
Utilization of positive coping strategies helped the patients with ESRD find meaning in their lives. In other words, they became purposeful, found order in their lives, and pursued their goals. They were interested in life, recognized themselves, achieved faith in God, understood life, and were able to care for themselves. In this regard, two participants (p 1, p 7) stated: "Maybe I didn't think about life very deeply before I got sick… but now that I'm sick and I'm having a hard time, I appreciate life more. I just understand how sweet life can be despite the hardships. Life seems to have a different meaning to me than when I was healthy…" (p 1); "I know that a person should be comfortable and hopeful for himself… these are necessary to endure problems; I don't know, maybe this disease will bother us more than it does now… but I am hopeful about the future and a happy ending. It makes life sweeter for us…" (p 7).

Discussion
The results of this study indicate that hope in hemodialysis patients means having positive thoughts, expectations, and desires for future events and goal-oriented thoughts; developing strategies to achieve goals; being motivated to expend effort to achieve goals; and having family support in the future. In a qualitative study, hope was reported as "lively, active, and, above all, a personal choice". Hope was the desired possibility; however, it requires effort to be achieved [22].
The findings of the present study showed that threats to achieving hope in hemodialysis patients consisted of physical challenges, challenges in social interactions, educational failure, economic challenges, occupational restrictions, and familial conflicts. Although no qualitative study has reported the factors that affect hope in hemodialysis patients, one qualitative study showed that mental well-being and depression were associated with hope in patients who have undergone dialysis [23]. The results of other studies indicate that hemodialysis patients face many physical, social, and financial complications due to the nature of their illness and treatment [24,25]. In qualitative studies conducted in Iran, fatigue, physical disability, social limitations, and financial and employment problems were some of the challenges based on the experiences of hemodialysis patients. It seems that the continuation of the disease and of treatment with dialysis, together with changes in lifestyle such as financial problems, unemployment, and changes in family duties and roles, may affect the hemodialysis patients' hope.

Despite these threats, the patients studied revealed opportunities, such as spirituality, which were effective in the process of hope. Five participants believed in a supreme power called God as the source of hope. In general, Muslim patients consider diseases a divine destiny that removes their sins. In line with the current study's results, other studies demonstrated that spiritual experiences increased hope and had a positive effect on the reduction of stress [7,26,27]. Another study reported that hope was associated with spirituality in patients with chronic kidney disease [9]. It seems that belief in God makes patients see God's hand in all matters; as a result, they tolerate the discomforts of life better, protect themselves against accidents, and become more patient.
The findings of this study showed that the patients who were undergoing hemodialysis encountered negative emotions, including depression, sadness, uncertainty, anger, and hate, as barriers to achieving hope. A qualitative study on end-of-life patients reported that "limitations imposed by illness, feelings of anguish and helplessness and poor communication with clinicians" were barriers to hope [22]. A quantitative study indicated that, in chronic diseases, hope was negatively associated with depression and anxiety. Moreover, hope can change the relationship between uncertainty and depression and anxiety symptoms [28]. A review of the literature also revealed that hemodialysis patients experienced stress in their lives [7,26,29,30]. In general, the physical and social limitations of hemodialysis patients, along with fear, anxiety, uncertainty, limitations on recreational activities, and dialysis treatment, make depression inevitable [31,32]. Fear of the future and uncertainty seem to be due to the fluctuating conditions that patients experience during treatment, as well as complications of the disease, and may affect the patients' hope.
Positive coping strategies to achieve hope were another finding of the study. Hemodialysis patients used positive solution-oriented strategies, positive psychological constructs, supportive exchanges, connection to transcendence, and motivation to achieve hope. In line with our study, qualitative research on end-of-life patients reported that "supportive others, positive thinking and sense of humor, connection with nature, faith in religion and science, and a sense of compassion with others and altruism" were facilitators of hope [22]. The results of other studies indicate a positive relationship between hope and optimism and a positive coping style [27,33]. As a result, people who have a positive coping style are more likely to develop cognitive optimism and make positive behavioral efforts to cope with illness, which is useful in creating positive psychological concepts such as hope [34].

Supportive exchanges were also strategies used for overcoming negative emotions and developing hope in hemodialysis patients. In various studies, the role of social support and its effect on improving hope have been proven, which is consistent with the present study [13,14,35]. Moreover, researchers have reported that hemodialysis patients who benefit from greater social support are more hopeful [12]. It was also shown that if a person receives support from a network and a peer, s/he can cope with difficult situations of the illness, which in turn raises the level of hope [14].
In the current study, growth and excellence were the other prominent findings related to hope in patients under hemodialysis. In other words, hope resulted in the promotion of the patients' well-being, which helped them find meaning in their lives. In line with the current research, various studies have shown that the use of positive coping strategies by patients with ESRD under hemodialysis leads to happiness, vitality, disease acceptance, and well-being [36,37]. Another study reported that hope was associated with better quality of life in hemodialysis patients [7]. When patients are hopeful, they can use their energy to rebuild their health and well-being [38]. The results of another study on hope among chronic patients indicated that hope led to the improvement of health, facilitation of compatibility, promotion of quality of life, and improvement of self-esteem [39]. In the present study, using positive coping strategies in response to negative emotions resulted in the meaningfulness of life. In other words, the patients became purposeful and found order in their lives. They pursued their goals and made attempts to reach their ideal lives. Evidence has shown that hope is the key to spiritual well-being and encourages individuals to move and act in their lives [27,40].

Limitations
We used qualitative content analysis to examine the patients' experiences of hope. Using a design such as phenomenology could help to better understand the patients' lived experience of this concept. In addition, the lack of similar studies inside the country limited the possibility of comparing the findings with local studies. For this reason, repeating similar studies can help clarify the concept and its effective factors.
Strengths
To the best of our knowledge, this is the first study aiming to describe hope and identify the challenges, concerns, strategies, and outcomes associated with hope in Iranian ESRD patients undergoing hemodialysis. In-depth interviews with maximum diversity and the use of purposeful sampling were the main strengths of this study. The qualitative approach can also help us describe hope and deeply understand the factors related to hope in patients with ESRD who are under hemodialysis, which can be fundamental for designing nursing interventions.

Conclusion
The results of this study indicate that hope in hemodialysis patients means having positive thoughts, expectations, and desires for future events and goal-oriented thoughts; developing strategies to achieve goals; being motivated to expend effort to achieve goals; and having family support in the future. The study findings indicated that the patients with ESRD who were undergoing hemodialysis faced threats and opportunities to achieve hope. These caused them to develop negative emotions as barriers to achieving hope. In response to these concerns, they made use of positive and solution-oriented strategies, stayed motivated, used positive psychological constructs, had supportive exchanges, and resorted to transcendence to achieve hope. Being hopeful eventually led to growth and excellence. Therefore, educational programs should be held with a focus on the use of active and constructive coping styles to increase hope. Healthcare providers should support the patients in acquiring coping strategies that, in turn, create hope and motivation to cope with stress, and should emphasize active coping in the daily lives of these patients; the supportive role of the family in remaining hopeful and dealing effectively with the disease and its treatment should also be emphasized.
Table 2. Demographic characteristics of hemodialysis patients who participated in the research (N = 14)

In this regard, two participants (p 14, p 7) stated: "Hope gives you positive energy. It gives you a feeling of happiness and increases your energy. When you are hopeful, you laugh; you think about good things; you have good dreams in your mind, positive dreams for the future, and good feelings; and you feel relaxed…" (p 14); "Hope keeps you physically and mentally healthy. It reduces the disease symptoms and pain. Hope leads to living, earning money, working, everything… I'm sure that I'll be successful in my life and in my job in the future…" (p 7).
Effective Sentence Scoring Method using Bidirectional Language Model for Speech Recognition

In automatic speech recognition, many studies have shown performance improvements using language models (LMs). Recent studies have tried to use bidirectional LMs (biLMs) instead of conventional unidirectional LMs (uniLMs) for rescoring the N-best list decoded from the acoustic model. In spite of their theoretical benefits, the biLMs have not given notable improvements compared to the uniLMs in their experiments. This is because their biLMs do not consider the interaction between the two directions. In this paper, we propose a novel sentence scoring method that considers the interaction between the past and the future words on the biLM. Our experimental results on the LibriSpeech corpus show that the biLM with the proposed sentence scoring outperforms the uniLM for N-best list rescoring, consistently and significantly in all experimental conditions. The analysis of WERs by word position demonstrates that the biLM is more robust than the uniLM, especially when a recognized sentence is short or a misrecognized word is at the beginning of the sentence.

Introduction
A language model (LM) is an essential component in recent automatic speech recognition (ASR) systems. Since the LM captures the possibility of any word sequence, it can help to distinguish between words with similar sounds. Conventionally, LMs are used to predict the probability of the next word given its preceding words. Many state-of-the-art speech recognition systems have achieved performance improvements with these unidirectional LMs (uniLMs), including n-gram LMs [1] and recurrent neural network LMs [2]. Recently, bidirectional LMs (biLMs) have achieved significant success in many applications of natural language processing [3, 4, ?]. In speech recognition, there have also been several studies that use biLMs to capture the full context rather than just the previous words [5,6].
Even though bidirectional networks are superior to unidirectional ones in many applications, from phoneme classification [7] to acoustic modeling [8], biLMs for ASR did not show their excellence compared to uniLMs when the LMs were applied to rescoring. This is because there is no interaction between the past and future words in their biLMs, although the words on both sides are used to predict the current word. Namely, the forward and backward representations are not fused in their models, which may limit the biLM's potential. Furthermore, because their biLM architectures are restricted to one encoding layer, the capacities of the LMs are insufficient to model complex patterns of human language. Similar issues have been addressed in recent studies on pre-training language representation models [3,4].

In this paper, we propose a novel sentence scoring method that reflects the interactions between the past and the future words on the biLM. The main idea of our sentence scoring is to replace a word with a special token <M> in a given sentence, and then make the biLM predict the original word of the masked position using only its surrounding words. In our model, the past and the future representations interact with each other while computing the probability of the masked word. It is important to make the prediction task non-trivial by hiding the meaning of the word while leaving only its positional information; otherwise, the model simply copies the exposed words. The score of a sentence is obtained by computing the probability of each word, one at a time, in the given sentence, and then aggregating all the probabilities. This sentence score is used to rescore the N-best list of the speech recognition outputs, following the previous studies on biLMs for ASR [5,6].
Experiments on the 1000-hour LibriSpeech ASR task [9] demonstrate that the proposed scoring method with the biLM is considerably better than the traditional scoring method with the uniLM for rescoring the N-best sentence list. Our biLM achieves a 22.2% relative improvement in word error rate (WER) over the baseline recognition system on the test-clean set, while the uniLM shows a 16.3% relative improvement. Moreover, an additional analysis of WERs by word position shows that the biLM is more robust than the uniLM, especially when a recognized sentence is short or a misrecognized word is in the earlier part of the sentence. To the best of our knowledge, this is the first study in which a biLM significantly and consistently outperforms a uniLM for ASR.

The rest of this paper is organized as follows: Section 2 introduces our sentence scoring method, including the biLM and uniLM we use here and how to rescore the N-best list using the LMs. Section 3 describes the experimental setups for the baseline recognition system and the language model parameters. We present the results of the experiments and their discussion in Section 4, and draw the conclusion in Section 5.

Methodology
In this section, we present the sentence scoring method, which uses a bidirectional language model (LM) for rescoring the N-best list. First, we review the key operations of the self-attention network (SAN), which is the base architecture of our LMs. We then outline the architecture of the bidirectional SANLM (biSANLM) and how to train it. In Section 2.3, we introduce the procedure of the sentence scoring, which is the core development in this paper. Section 2.4 describes the unidirectional LM consisting of the SAN (uniSANLM), which is used as a comparison model.

Self-Attention Network
This section briefly overviews the key operations of the self-attention network (SAN), known as the Transformer [10].
While recurrent neural networks (RNNs) appear to be a natural choice for language modeling, SANs have recently shown competitive performance on sequence modeling with a slight trade-off between speed and accuracy [11,12]. Note that the main interest in this paper is the comparison between the biLM and the uniLM, and we choose the SAN as the LMs' common architecture, primarily for speed. Self-attention, often called intra-attention, is an attention mechanism that computes the representation of a single sequence by relating all positions to one another. This computation can be done by using the scaled dot-product attention:

Attention(Q, K, V) = softmax(Q K^T / √d_k) V,

where Q, K, V are the query, key, and value matrices, respectively, which are generated from the input sequence X ∈ R^(n×d) with the number of words n and the input dimension d. To make the model aware of the word position in a sentence, we use the position embedding that is added to the sentence matrix of the input. To leverage the capacity of the SAN, multi-head self-attention is applied:

MultiHead(Q, K, V) = Concat(head_1, …, head_h) W^O, with head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V),

where W_i^Q, W_i^K, W_i^V ∈ R^(d×d_k) and W^O ∈ R^(d×d) are the parameter matrices for projections with the number of heads h, and d_k = d/h is used for reducing the computational cost. In addition, the position-wise feed-forward network, layer normalization, residual connections, and dropout are also used in the SAN module for effective training. Figure 1 shows the self-attention network with the scaled dot-product attention; the detailed formulas are the same as in the original Transformer [10].

Bidirectional SANLM
We now explain the architecture of the biSANLM, which is used for scoring a given sentence in the next section. As shown in Figure 1a, our biSANLM architecture is similar to the encoder of the Transformer [10], except for having additional layers such as the softmax layer with the linear projection. Let X_S ∈ R^(n×d) be a sentence matrix, which is the input embeddings of the LM.
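For concreteness, the scaled dot-product attention above, together with the optional key-query masking of Figure 1b, can be sketched in NumPy as follows. This is an illustrative sketch under stated assumptions, not the implementation used in the experiments:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, causal=False):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n, n) key-query attention
    if causal:
        # optional masking (Figure 1b): block attention to future words by
        # pushing upper-triangle scores to -inf before the softmax
        scores = np.where(np.tril(np.ones_like(scores)) > 0, scores, -np.inf)
    return softmax(scores, axis=-1) @ V  # (n, d_k)

# toy self-attention: n = 4 positions, d_k = 8, with Q = K = V = X
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(X, X, X)
out_causal = scaled_dot_product_attention(X, X, X, causal=True)
```

With `causal=True`, the first position can attend only to itself, so its output row equals the corresponding value row; this is the masking the uniSANLM relies on in Section 2.4.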
In order to make the model aware of the order of the words in the sequence, we add absolute position embeddings X_P ∈ R^(n×d) to the input embeddings at the bottom of the encoder, and thus we have X = X_S + X_P as the input of the LM. On top of this input, we can build an encoder by stacking as many SAN layers as we want. The output sequence of the highest SAN layer, Y_L ∈ R^(n×d) with the number of layers L, is used to predict the probabilities of the words through the softmax layer with the linear projection.

To train our biSANLM, we consider the masked language modeling (MLM) objective of BERT [4]. Specifically, the biSANLM learns to predict the original word in the masked position in the input sentence, which is also known as the Cloze task [13]. However, our training approach has many differences from that of BERT, because our purpose in training a bidirectional LM is scoring a sentence rather than fine-tuning the model for another task [4]. First, we randomly sample 15% of the words from the sentence as in BERT, but always replace them with <M> tokens, unlike in BERT [4]. Second, our training instance has a single sentence (maximum 128 words) instead of multiple sentences. Lastly, the maximum number of masked tokens in a training instance is limited to the small number of four, because our instance has only one sentence and too much loss of information is unhelpful for training the model. Note that we make the training instances have multiple masked tokens <M> for efficient training, while we make each inference instance have only one <M> for the sentence scoring.

Our Sentence Scoring Method
This section introduces our sentence scoring method, the core development in this paper. The basic principle of our sentence scoring method is to mask one word in a given sentence and then compute the probability of the original word at the masked position using the biSANLM.
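The training-instance masking described above can be sketched as follows. The 15% sampling rate, the fixed <M> replacement, and the cap of four masks per instance follow the text; the function name and example sentence are illustrative assumptions:

```python
import random

MASK_TOKEN = "<M>"
MASK_RATE = 0.15   # fraction of words sampled for masking
MAX_MASKS = 4      # cap on masked tokens per training instance

def make_training_instance(words, rng):
    # Always replace the sampled words with <M> (no random-keep as in BERT),
    # remembering the originals as prediction targets.
    n_masks = min(MAX_MASKS, max(1, round(MASK_RATE * len(words))))
    positions = sorted(rng.sample(range(len(words)), n_masks))
    masked = list(words)
    targets = {p: masked[p] for p in positions}
    for p in positions:
        masked[p] = MASK_TOKEN
    return masked, targets

sentence = "move the vat over the hot fire".split()
masked, targets = make_training_instance(sentence, random.Random(7))
```

For a seven-word sentence this yields a single masked token, while long sentences near the 128-word maximum are capped at four masks.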
Because the whole sentence with the masked word is taken by the biSANLM as an input, both past and future representations can be fused without making the task trivial. Our sentence scoring method follows this procedure: First, we create a set of instances from a given sentence by replacing each word with the predefined token <M>, one at a time. For example, if the sentence has seven words, we create seven instances as below:

• A given sentence: move the vat over the hot fire
• Instance 1: <M> the vat over the hot fire
• Instance 2: move <M> vat over the hot fire
• Instance 3: move the <M> over the hot fire
• Instance 4: move the vat <M> the hot fire
• Instance 5: move the vat over <M> hot fire
• Instance 6: move the vat over the <M> fire
• Instance 7: move the vat over the hot <M>

After the creation, our biSANLM takes each instance and computes the probability of the original word in the masked position, as shown in Figure 2a. By aggregating all the probabilities of the words from all instances, the score of the given sentence is obtained. In this work, we consider the sum of all log-likelihoods of a masked word in each input sentence as the sentence score of the biSANLM. Following the previous works on bidirectional language models for speech recognition [5,6], we use our sentence score for rescoring the N-best hypotheses. For simplicity, we linearly interpolate the scores obtained by the acoustic model (AM) and the language model (LM):

score(s) = (1 − λ) score_AM(s) + λ score_LM(s),

where λ is the interpolation weight, which is determined empirically on development data. Although it is not straightforward to compare the perplexity of the biSANLM with that of the traditional (unidirectional) LM, the log-likelihood of the masked word can still be used as the score for the N-best list rescoring.

Unidirectional SANLM
This section outlines the unidirectional SANLM (uniSANLM), which is a comparison model for our biSANLM. For a fair comparison, our uniSANLM and biSANLM have almost the same architecture, including the summation of sentence embeddings and position embeddings, the number of SAN layers, and the softmax layer. However, the uniLM has one additional operation of masking the key-query attention, which is an optional operation in the scaled dot-product attention shown in Figure 1b.
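The per-word masking, log-likelihood summation, and AM/LM interpolation of Section 2.3 can be sketched as below. The uniform stub stands in for a trained biSANLM, and the exact form of the linear interpolation is an assumption; both are purely illustrative:

```python
import math

MASK_TOKEN = "<M>"

def masked_copies(words):
    # One inference instance per position, with exactly one <M> each.
    return [(i, words[:i] + [MASK_TOKEN] + words[i + 1:]) for i in range(len(words))]

def sentence_score(words, log_prob):
    # Sum of log-likelihoods of each original word at its masked position.
    return sum(log_prob(instance, i, words[i]) for i, instance in masked_copies(words))

def interpolate(score_am, score_lm, lam):
    # Linear interpolation of AM and LM scores; lam is tuned on dev data.
    return (1.0 - lam) * score_am + lam * score_lm

# stub "biLM": uniform over a 10-word vocabulary (placeholder, not a real model)
def uniform_log_prob(instance, position, word):
    return math.log(1.0 / 10)

sentence = "move the vat over the hot fire".split()
copies = masked_copies(sentence)                    # 7 instances for 7 words
score = sentence_score(sentence, uniform_log_prob)
```

In practice, `log_prob` would run the biSANLM on each masked instance and read off the softmax log-probability of the original word, and the interpolated score would be used to re-rank the 100-best hypotheses.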
This masking operation prevents words from attending to future words by zeroing out the upper triangle of the key-query attention, as in the decoder of the Transformer [10]. The uniSANLM is trained with the next-word prediction task, as in traditional LMs. Figure 2b shows an example of the uniSANLM predicting the next word using only its preceding words. Following the scoring method in Section 2.3, we take the sum of the log-likelihoods of the next word over an input sentence as the sentence score of the uniSANLM. We also use Equation 3 to combine the AM with the uniSANLM.

Experimental Setups

We evaluate the proposed approach on the LibriSpeech ASR task [9]. The 960 hours of training data are used to train an acoustic model, which is our baseline recognition system. We obtain the 100-best hypothesis list for each audio sample in the development and test data using the acoustic model, and then use the biSANLM and the uniSANLM to rescore these 100-best lists, following the previous studies on biLMs for ASR [5,6]. The details of the baseline acoustic model and language model settings are explained in the following sections.

Baseline Acoustic Model

In this study, we use the attention-based seq2seq model Listen, Attend and Spell (LAS) [8] as our baseline acoustic model, with some differences. First, there are additional bottleneck fully connected (FC) layers between every pair of bidirectional long short-term memory (BLSTM) layers. Second, the number of time steps is halved by simply subsampling the hidden states at even-numbered time steps before the FC layer, instead of concatenating every two hidden states. Third, LAS is trained with an additional CTC objective function, because the left-to-right constraint of CTC helps LAS learn alignments between speech-text pairs [14]. The details of our acoustic model follow the default settings provided in the ESPnet toolkit v0.2.0 [15]. For the input features, we use an 80-band mel-scale spectrogram derived from the speech signal.
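The even-step subsampling in the encoder can be sketched as follows; the rounding behavior for odd-length sequences is our assumption:

```python
def subsample_even_steps(hidden_states):
    """Halve the time resolution by keeping only even-indexed time
    steps (0, 2, 4, ...), instead of concatenating adjacent pairs."""
    return hidden_states[::2]

def pyramidal_lengths(n_frames, subsample_after=(2, 3), n_layers=5):
    """Sequence length after each encoder layer when the length is
    halved after the given layers, matching a 5-layer encoder with
    subsampling after the second and third layers."""
    lengths, t = [], n_frames
    for layer in range(1, n_layers + 1):
        if layer in subsample_after:
            t = (t + 1) // 2  # number of even-indexed steps
        lengths.append(t)
    return lengths
```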
The encoder consists of a 5-layer pyramidal BLSTM with subsampling after the second and third layers. The decoder is comprised of a 2-layer LSTM with a location-aware attention mechanism [16]. The target sequence is processed into 5K case-insensitive sub-word units created via unigram byte-pair encoding [17]. All the LSTM and FC layers have 1024 hidden units each. Our model is trained for 10 epochs using the Adadelta optimizer [18] with a learning rate of 1e-8. Using this baseline acoustic model, we obtain the 100-best decoded sentences for each input through the hybrid CTC-attention scoring method [14], and these 100-best lists are used for rescoring. Table 1 shows the word error rates (WERs) obtained from the baseline model and the oracle WERs, which are the best possible error rates of the 100-best lists on the LibriSpeech tasks.

Language Model Setups

The model parameters of our language models (LMs) are as follows: L = 3 layers, d = 512 dimensions for the model and the embeddings, and h = 8 attention heads. 2048 hidden units are used in the position-wise feedforward layers. We use trainable positional embeddings with supported sequence lengths of up to 128 tokens. We use a GELU activation [19] rather than the standard ReLU, following [20,4]. The weight matrix of the softmax layer is shared with the word embedding table. The word vocabulary is used in three sizes: the 10k, 20k, and 40k most frequent words. For a fair comparison, our biSANLM and uniSANLM have the same architecture and parameters except for the vocabulary size. We train the LMs on the 1.5G normalized text-only data of the official LibriSpeech corpus. We use the Adam optimizer [21] with a learning rate of 1e-4, β1 = 0.9, and β2 = 0.999. We use a dropout probability of 0.1 on all layers. The batch size is set to 128 for biSANLMs and 64 for uniSANLMs, and all the LMs are trained for 1M iterations. We confirmed that all our LMs converge before the 1M training steps.
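Returning to the uniSANLM's key-query masking described earlier: in practice the strict upper triangle of the score matrix is suppressed before the softmax (commonly by setting those scores to −∞) so that the resulting attention weights on future positions are exactly 0. A NumPy sketch of this common implementation:

```python
import numpy as np

def causal_attention_weights(scores):
    """Apply the uniSANLM mask to a (query x key) score matrix: the
    strict upper triangle is set to -inf before the softmax, so no
    position attends to future positions."""
    n = scores.shape[-1]
    future = np.triu(np.ones((n, n), dtype=bool), k=1)
    masked = np.where(future, -np.inf, scores)
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```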
Results and Discussion

In this section, we compare the uniSANLM and biSANLM for N-best rescoring on all test sets of the LibriSpeech ASR corpus, in which the test sets are classified as 'clean' or 'other' based on their difficulty. We first prepare 100-best hypotheses using our acoustic model (AM), which is referred to as the "baseline" recognition system in our experiments. For rescoring the 100-best list, the baseline AM is linearly interpolated with one of our language models as in Equation 3. The interpolation weight is set to the value that achieves the best performance on the development sets. We find that λ = 0.2 and 0.3 are the best weights for the dev-clean and dev-other sets, respectively. Considering that the dev-other set is more difficult for the acoustic model to recognize, it is reasonable that the interpolation weight is larger for dev-other (λ = 0.3) than for dev-clean (λ = 0.2).

Table 2 shows the rescoring results of the biSANLMs and the uniSANLMs for different test sets and vocabulary sizes |V|. The WER results show that the biSANLM with our approach is consistently and significantly better than the uniSANLM, regardless of the test set and the vocabulary size. To see where the word errors occur, we analyze the positions of the misrecognized words. Figure 3 shows the total number of misrecognized words for each model according to position in the final hypotheses. It can be seen that the biSANLM is more robust than the uniSANLM at earlier positions (< 30) of a sentence. At later positions (> 30) of a long sentence, however, the gap between the two LMs is reduced. This shows that, although the uniSANLM also performs well, our biSANLM is particularly effective when a recognized sentence is short or a misrecognized word is at the beginning of the sentence. In this analysis, we use |V| = 40k for both LMs; the choice of vocabulary size does not affect the tendency.
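The empirical choice of the interpolation weight on the development sets can be sketched as a simple grid search; the data layout (per-utterance lists of (AM, LM) score pairs and word-error counts) is our own framing, not from the paper:

```python
def tune_interpolation_weight(nbest_scores, nbest_errors, lambdas):
    """Pick the lambda minimizing total dev-set word errors.
    nbest_scores[i] is a list of (am, lm) score pairs for utterance i;
    nbest_errors[i][j] is the word-error count of hypothesis j."""
    best_lam, best_err = None, float("inf")
    for lam in lambdas:
        total = 0
        for hyps, errs in zip(nbest_scores, nbest_errors):
            combined = [(1 - lam) * am + lam * lm for am, lm in hyps]
            total += errs[combined.index(max(combined))]
        if total < best_err:
            best_lam, best_err = lam, total
    return best_lam
```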
Finally, we conduct linear interpolation of the two LMs for further improvements:

score_LM(W) = α · score_biLM(W) + (1 − α) · score_uniLM(W),    (4)

where score_LM is used in Equation 3. We find that α = 1 shows the best performance on all test sets, which means only the biSANLM is used for interpolation (log-linear interpolation of the two LMs shows the same phenomenon). Contrary to our initial expectation, the biSANLM and the uniSANLM do not complement each other in our experiments. Consequently, all experimental results demonstrate that our sentence scoring method using the biSANLM is almost strictly better than the traditional method using the uniSANLM for N-best list rescoring. As far as we know, this is the first study in which a bidirectional language model significantly and consistently outperforms a unidirectional language model for speech recognition.

Conclusion

In this paper, we proposed a novel sentence scoring method that uses a biLM for rescoring in ASR. We used the biLM to predict the probability of a masked word, thereby enabling the model to capture interactions between past and future words. Experimental results on the LibriSpeech ASR tasks demonstrated that the proposed sentence scoring with our biLM significantly and consistently outperforms the conventional uniLM for rescoring the N-best list. In addition, we confirmed that the biLM is more robust than the uniLM, especially when a recognized sentence is short or the earlier part of the sentence is misrecognized.
Biomedical Event Trigger Identification Using Bidirectional Recurrent Neural Network Based Models

Biomedical events describe complex interactions between various biomedical entities. An event trigger is a word or a phrase that typically signifies the occurrence of an event. Event trigger identification is an important first step in all event extraction methods. However, many current approaches either rely on complex hand-crafted features or consider features only within a window. In this paper we propose a method that takes advantage of a recurrent neural network (RNN) to extract higher-level features present across the sentence. The hidden state representation of the RNN, along with word and entity type embeddings as features, thus avoids relying on complex hand-crafted features generated using various NLP toolkits. Our experiments achieve a state-of-the-art F1-score on the Multi Level Event Extraction (MLEE) corpus. We have also performed a category-wise analysis of the results and discussed the importance of various features in the trigger identification task.

Introduction

Biomedical events play an important role in advancing biomedical research in many ways. Applications include pathway curation and the development of domain-specific semantic search engines (Ananiadou et al., 2015). To gain traction among researchers, many challenges such as BioNLP'09 (Kim et al., 2009), BioNLP'11 (Kim et al., 2011), and BioNLP'13 (Nédellec et al., 2013) have been organized, and many novel methods have been proposed to address these tasks. An event can be defined as a combination of a trigger word and an arbitrary number of arguments. Figure 1 shows two events with trigger words "Inhibition" and "Angiogenesis" of trigger types "Negative Regulation" and "Blood Vessel Development", respectively. Pipelined approaches for biomedical event extraction perform event trigger identification followed by event argument identification.
Analyses in multiple studies (Wang et al., 2016b; Zhou et al., 2014) reveal that more than 60% of event extraction errors are caused by incorrect trigger identification. Existing event trigger identification models can be broadly categorized into two groups: rule-based approaches and machine learning based approaches. Rule-based approaches use various strategies, including pattern matching and regular expressions, to define rules (Vlachos et al., 2009). However, defining these rules is difficult, time consuming, and requires domain knowledge. The overall performance of the task depends on the quality of the rules defined. These approaches often fail to generalize to new datasets when compared with machine learning based approaches. Machine learning based approaches treat trigger identification as a word-level classification problem, where many features are extracted from the data using various NLP toolkits (Pyysalo et al., 2012; Zhou et al., 2014) or learned automatically (Wang et al., 2016a,b). In this paper, we propose an approach using an RNN to learn higher-level features without the requirement of complex feature engineering. We thoroughly evaluate our proposed approach on the MLEE corpus. We have also performed a category-wise analysis and investigated the importance of different features in the trigger identification task.

Related Work

Many approaches have been proposed to address the problem of event trigger identification. Pyysalo et al. (2012) proposed a model where various hand-crafted features are extracted from the processed data and fed into a Support Vector Machine (SVM) to perform the final classification. Zhou et al. (2014) proposed a novel framework for trigger identification where embedding features of the word, combined with hand-crafted features, are fed to an SVM for final classification using multiple kernel learning. Wei et al.
(2015) proposed a pipeline method on the BioNLP'13 corpus based on a Conditional Random Field (CRF) and a Support Vector Machine (SVM), where the CRF is used to tag valid triggers and the SVM is then used to identify the trigger type. The above methods rely on various NLP toolkits to extract the hand-crafted features, which leads to error propagation that affects the classifier's performance. These methods often need to tailor different features for different tasks, so they do not generalize well. Most of the hand-crafted features are also traditionally sparse one-hot feature vectors, which fail to capture semantic information. Wang et al. (2016b) proposed a neural network model where dependency-based word embeddings (Levy and Goldberg, 2014) within a window around the word are fed into a feed-forward neural network (FFNN) (Collobert et al., 2011) to perform the final classification. Wang et al. (2016a) proposed another model based on a convolutional neural network (CNN), where word and entity mention features of words within a window around the word are fed to a CNN to perform the final classification. Although both methods achieve good performance, they fail to capture features outside the window.

Model Architecture

We present our model based on a bidirectional RNN, as shown in Figure 2, for the trigger identification task. The proposed model detects trigger words as well as their types. Our model uses embedding features of words in the input layer and learns higher-level representations in the subsequent layers.

Input Feature Layer

For every word in the sentence we extract two features: the exact word w ∈ W and the entity type e ∈ E. Here W refers to the dictionary of words and E refers to the dictionary of entities. Apart from all the entities, E also contains a None entity type, which indicates the absence of an entity. In some cases an entity might span multiple words; in that case we assign every word spanned by that entity the same entity type.
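The per-word entity-type feature extraction just described can be sketched as follows, assuming entity spans are given as (start, end, type) tuples with an exclusive end index (the span format is our assumption):

```python
NONE = "None"

def entity_type_features(words, entity_spans):
    """Assign each word its entity type; words covered by a multi-word
    entity all receive that entity's type, all others receive None."""
    types = [NONE] * len(words)
    for start, end, etype in entity_spans:  # end is exclusive
        for i in range(start, end):
            types[i] = etype
    return types
```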
Embedding or Lookup Layer

In this layer every input feature is mapped to a dense feature vector. Let E_w and E_e be the embedding matrices of W and E, respectively. The features obtained from these embedding matrices are concatenated and treated as the final word-level feature (l) of the model. The embedding matrix E_w ∈ R^{n_w×d_w} is initialized with pre-trained word embeddings, and the embedding matrix E_e ∈ R^{n_e×d_e} is initialized with random values. Here n_w and n_e refer to the lengths of the word dictionary and entity type dictionary, respectively, and d_w and d_e refer to the dimensions of the word and entity type embeddings, respectively.

Bidirectional RNN Layer

The RNN is a powerful model for learning features from sequential data. We use both the LSTM (Hochreiter and Schmidhuber, 1997) and GRU (Chung et al., 2014) variants of the RNN in our experiments, as they handle the vanishing and exploding gradient problems (Pascanu et al., 2012) in a better way. We use the bidirectional version of the RNN (Graves, 2013), where for every word the forward RNN captures features from the past and the backward RNN captures features from the future, so each word inherently has information about the whole sentence.

Feed Forward Neural Network

The hidden state of the bidirectional RNN layer acts as a sentence-level feature (g), and the word and entity type embeddings (l) act as word-level features; both are concatenated (1) and passed through a series of hidden layers (2), (3) with dropout (Srivastava et al., 2014) and an output layer. In the output layer, the number of neurons equals the number of trigger labels. Finally, we use the softmax function (4) to obtain a probability score for each class. Here k refers to the k-th word of the sentence, i refers to the i-th hidden layer in the network, and ⊕ refers to the concatenation operation. W_i, W_o and b_i, b_o are the parameters of the hidden and output layers of the network, respectively.
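A NumPy sketch of this classification head: the word-level feature l_k and the sentence-level feature g_k are concatenated, passed through the hidden layers, and mapped to class probabilities with a softmax. The tanh activation is our assumption; the paper does not specify the nonlinearity here:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_token(l_k, g_k, hidden_params, W_o, b_o):
    """Trigger-label probabilities for one token: concatenate the
    word-level (l_k) and sentence-level (g_k) features, apply the
    hidden layers, then the softmax output layer."""
    h = np.concatenate([l_k, g_k])     # (1): concatenation l_k (+) g_k
    for W_i, b_i in hidden_params:     # (2),(3): hidden layers
        h = np.tanh(W_i @ h + b_i)     # activation choice is an assumption
    return softmax(W_o @ h + b_o)      # (4): probability over trigger labels
```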
Training and Hyperparameters

We use the cross-entropy loss function, and the model is trained using stochastic gradient descent. The implementation of the model is done in Python using the Theano (Bergstra et al., 2010) library, and is available at https://github.com/rahulpatchigolla/EventTriggerDetection. We use pre-trained word embeddings obtained by Moen et al. (2013) using the word2vec tool (Mikolov et al., 2013). We use the training and development sets for hyperparameter selection. We use word embeddings of dimension 200, entity type embeddings of dimension 50, an RNN hidden state of dimension 250, and 2 hidden layers with dimensions 150 and 100. In both hidden layers we use dropout of 0.2.

Experiments and Discussion

Dataset Description

We use the MLEE (Pyysalo et al., 2012) corpus for our trigger identification experiments. Unlike other corpora on event extraction, it covers events across various levels, from the molecular to the organism level. The events in this corpus are broadly divided into 4 categories, namely "Anatomical", "Molecular", "General", and "Planned", which are further divided into 19 sub-categories, as shown in Table 1. Our task is to identify the correct sub-category of an event. The entity types associated with the dataset are summarized in Table 2.

Experimental Design

The data is provided in three parts: training, development, and test sets. Hyperparameters are tuned using the development set, and the final model is then trained on the combined training and development sets using the selected hyperparameters. The final results reported here are the best results over 5 runs. We have used the micro-averaged F1-score as the evaluation metric and evaluated the performance of the model by ignoring the trigger classes with counts ≤ 10 in the test set while training, considering them directly as false negatives while testing.

Performance Comparison with Baseline Models

We compare our results with the baseline models shown in Table 3. Pyysalo et al.
(2012) defined an SVM-based classifier with hand-crafted features. Zhou et al. (2014) also defined an SVM-based classifier with word embeddings and hand-crafted features. Wang et al. (2016a) defined a window-based CNN classifier. Apart from the proposed models, we also compare our results with two more baseline methods, FFNN and CNNψ, which are our own implementations. Here FFNN is a window-based feed-forward neural network where embedding features of words within the window are used to predict the trigger label (Collobert et al., 2011). We chose a window size of 3 (one word from the left and one from the right) after tuning it on the validation set. CNNψ is our implementation of the window-based CNN classifier proposed by Wang et al. (2016a), due to the unavailability of their code in the public domain. Our proposed model shows a slight improvement in F1-score when compared with the baseline models.

Table 3: Comparison of the performance of our model with the baseline models.
Method | Precision | Recall | F1-Score
SVM (Pyysalo et al., 2012) | 81.44 | 69.48 | 75.67
SVM+We (Zhou et al., 2014) | 80.60 | 74.23 | 77.82
CNN (Wang et al., 2016a) | 80 | |

The proposed model's ability to capture the context of the whole sentence is likely one of the reasons for the improved performance. We perform a one-sided t-test over 5 runs of F1-scores to verify the proposed model's performance compared with FFNN and CNNψ. The p-values of the proposed model (GRU) compared with FFNN and CNNψ are 8.57 × 10^-7 and 1.178 × 10^-10, respectively. This indicates statistically superior performance of the proposed model.

Category-Wise Performance Analysis

The category-wise performance of the proposed model is shown in Table 4. It can be observed that the model's performance in the anatomical and molecular categories is better than in the general and planned categories.
We can also infer from the confusion matrix shown in Figure 3 that the positive regulation, negative regulation, and regulation triggers of the general category, together with the planned category triggers, cause many false positives and false negatives, degrading the model's performance.

Further Analysis

In this section we investigate the importance of various features and model variants, as shown in Table 5. Here E_w and E_e refer to using the word and entity type embeddings as features in the model, and l and g refer to using the word-level and sentence-level features, respectively, for the final prediction. For example, "E_w + E_e and g" means using both the word and entity type embeddings as input features for the model and using only the global feature (the hidden state of the RNN) for the final prediction.

Table 5: Effect of the features and model variants on F1-score.
Index | Method | F1-Score
1 | E_w and g | 76.52
2 | E_w and l + g | 77.59
3 | E_w + E_e and g | 78.70
4 | E_w + E_e and l + g | 79.11

The examples in Table 6 illustrate the importance of the features used in the best-performing models. In phrase 1 the word "knockdown" is part of an entity, namely "round about knockdown endothelial cells" of type "Cell", while in phrase 2 it is a trigger word of type "Planned Process"; methods 1 and 2 failed to differentiate the two because they have no knowledge of the entity type. In phrase 3, "impaired" is a trigger word of type "Negative Regulation" that methods 1 and 3 failed to identify correctly, but when reinforced with the word-level feature the model succeeded. We can therefore say that the E_e feature and the l + g model variant help improve the model's performance.
Table 6: Example phrases.
Index | Phrase
1 | silencing of directional migration in round about knockdown endothelial cells
2 | we show that PSMA inhibition knockdown or deficiency decrease
3 | display altered maternal hormone concentrations indicative of an impaired trophoblast capacity

Conclusion and Future Work

In this paper we have proposed a novel approach for trigger identification by learning higher-level features using an RNN. Our experiments achieve state-of-the-art results on the MLEE corpus. In future work we would like to perform complete event extraction using deep learning techniques.
Plasma oligomer beta-amyloid is associated with disease severity and cerebral amyloid deposition in Alzheimer's disease spectrum

Background

Multimer detection system-oligomeric amyloid-β (MDS-OAβ) is a measure of plasma OAβ, which is associated with Alzheimer's disease (AD) pathology. However, the relationship between MDS-OAβ and the disease severity of AD is not clear. We aimed to investigate MDS-OAβ levels in different stages of AD and to analyze the association between MDS-OAβ and cerebral Aβ deposition, cognitive function, and cortical thickness in subjects within the AD continuum.

Methods

In this cross-sectional study, we analyzed a total of 126 participants who underwent plasma MDS-OAβ measurement, structural magnetic resonance imaging of the brain, neurocognitive assessment using the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease, and assessment of cerebral Aβ deposition on amyloid positron emission tomography (A-PET) using [18F] flutemetamol PET. Subjects were divided into 4 groups: N = 39 normal controls (NC), N = 31 A-PET-negative mild cognitive impairment (MCI) patients, N = 30 A-PET-positive MCI patients, and N = 22 AD dementia patients. The severity of cerebral Aβ deposition was expressed as the standard uptake value ratio (SUVR).

Results

Compared to the NC group (0.803 ± 0.27), the MDS-OAβ level was higher in the A-PET-negative MCI group (0.946 ± 0.137) and highest in the A-PET-positive MCI group (1.07 ± 0.17). The MDS-OAβ level in the AD dementia group was higher than in the NC group, but it fell to the level of the A-PET-negative MCI group (0.958 ± 0.103). There were negative associations between MDS-OAβ and cognitive function and between MDS-OAβ and both global and regional cerebral Aβ deposition (SUVR). Cortical thickness of the left fusiform gyrus showed a negative association with MDS-OAβ when we excluded the AD dementia group.
Conclusions

These findings suggest that MDS-OAβ is associated not only with neurocognitive staging but also with cerebral Aβ burden in patients along the AD continuum.

Background

Alzheimer's disease (AD) is the most common cause of dementia, accounting for 60 to 80 percent of all cases [1]. The classic pathophysiological hallmarks of AD, including β-amyloid (Aβ), tau protein, and neurodegeneration, can be measured using cerebrospinal fluid (CSF) studies and imaging techniques [2]. With the recent approval of disease-modifying drugs targeting Aβ in the brain, such as aducanumab in 2021 [3] and lecanemab in 2023 [4], biomarker-based studies are receiving increased attention. Moreover, serial measurements of biomarkers are needed for drug development trials and for applicability to real clinical settings. However, PET imaging of Aβ, tau, and fluorodeoxyglucose (FDG), and CSF studies of β-amyloid 42 (Aβ42) and 40 (Aβ40), phosphorylated tau (p-Tau), and total tau (t-Tau), are expensive, invasive, or both, which hinders repeated collection of these biomarkers. Thus, many studies have increasingly focused on blood-based biomarkers, with recent work showing promising correlations with cerebral Aβ deposition obtained by measuring blood levels of Aβ [5] and p-Tau (pT181, pT217, and pT231) [6-8].
Multimer detection system-oligomeric Aβ (MDS-OAβ) can measure the oligomerization dynamics or oligomerization tendency in plasma samples after spiking with synthetic Aβ [9]. MDS-OAβ selectively detects oligomeric forms of Aβ (OAβ) [10,11]. Research has repeatedly shown that patients with dementia due to AD, also called AD dementia, have a higher plasma concentration of MDS-OAβ than normal controls (NC) [12,13]. One study also found a positive correlation between MDS-OAβ and the severity of cerebral Aβ deposition measured using the standardized uptake value ratio (SUVR) of Pittsburgh compound B (PiB) (MDS-OAβ with PiB SUVR; r = 0.430) [12]. A more recent voxel-based morphometry (VBM) study further showed that the MDS-OAβ level correlates with brain volume reduction in cortical regions in AD [14]. However, the relationship between MDS-OAβ level and disease severity along the AD continuum is not clear. One study showed that MDS-OAβ had a negative correlation with cognitive function, but MDS-OAβ did not differ between NC and mild cognitive impairment (MCI) patients [15]. In contrast, others found that MCI and dementia patients with positive amyloid positron emission tomography (A-PET) had higher MDS-OAβ levels than NC, but the MDS-OAβ level showed a decreasing trend as the clinical dementia rating (CDR) score increased from 0.5 to 1 and 2 [16]. A possible explanation for these contradictory results is that the subjects' disease severities were not precisely stratified. One of the two studies did not utilize biomarkers, such as A-PET or CSF Aβ measures, and grouped the subjects merely on the basis of neurocognitive measures [15].
Inevitably, a significant proportion of subjects in the NC and MCI groups might have shown overt cerebral Aβ deposition had they been tested with A-PET or CSF Aβ studies. The second study included subjects who underwent A-PET, but all of the MCI subjects included were A-PET-positive [16]. Thus, that study could not elucidate whether the difference in MDS-OAβ was attributable to the cerebral Aβ burden, neurocognitive function, or a mixture of the two. Moreover, none of the previous studies compared MDS-OAβ between A-PET-positive MCI and A-PET-negative MCI. Likewise, the cerebral Aβ burden of the subjects in the VBM analysis was not confirmed using either A-PET or CSF studies. Moreover, only 3% (14/162) of subjects had a diagnosis of MCI [14]. Because of this skewed subject distribution, the VBM study was not able to correctly investigate the association between MDS-OAβ level and cortical atrophy along the AD continuum. The fact that VBM is known to be more affected by diverse cortical gray matter pathologies than cortical thickness measurement is another important shortcoming [17].

To fill this gap, we investigated whether MDS-OAβ differs according to the stage of AD. We hypothesized that the MDS-OAβ level would be higher in patients with significant cognitive impairment (MCI to dementia) and that MDS-OAβ would differ according to cerebral Aβ status in MCI. We also investigated the association between MDS-OAβ and cerebral Aβ deposition, neurocognitive measures, and cortical thickness in subjects along the AD continuum.

Subjects

A total of 122 subjects, consisting of 39 A-PET-negative cognitively normal older adults (normal control: NC), 31 A-PET-negative MCI patients, 30 A-PET-positive MCI patients, and 20 A-PET-positive dementia patients (AD dementia), were included in the study. Subjects were recruited from volunteers in the Catholic Aging Brain Imaging (CABI) database, which contains the brain scans of patients who visited the outpatient clinic at the Catholic Brain Health Center, Yeouido St.
Mary's Hospital, The Catholic University of Korea, between 2017 and 2022. The inclusion criteria for all subjects were as follows: (1) age ≥ 55 years and (2) no clinically significant psychiatric disorders (depressive disorder, schizophrenia, or bipolar disorder). The NCs visited our outpatient clinic to undergo a brain examination as part of a health checkup. Their normal cognitive function was confirmed with the Korean version of the Consortium to Establish a Registry for Alzheimer's Disease (CERAD-K), which includes a verbal fluency (VF) test, the 15-item Boston Naming Test (BNT), the Korean version of the Mini-Mental State Examination (MMSE), word list memory (WLM), word list recall (WLR), word list recognition (WLRc), constructional praxis (CP), and constructional recall (CR) [18]. The criteria for MCI were as follows: (1) presence of memory complaints corroborated by an informant; (2) objective cognitive impairment in more than one cognitive domain on the CERAD-K (at least 1.0 standard deviation (SD) below age- and education-adjusted norms); (3) intact activities of daily living (ADL); (4) CDR of 0.5; and (5) not demented according to the Diagnostic and Statistical Manual of Mental Disorders (DSM)-V criteria. The patients with AD dementia met the probable AD criteria proposed by the National Institute of Neurological and Communicative Disorders and Stroke and the AD and Related Disorders Association (NINCDS-ADRDA) [19], as well as those proposed by the DSM-V, with A-PET-positive results [20].
We excluded subjects with the following: (1) systemic diseases that can cause cognitive impairment, such as thyroid dysfunction, severe anemia, and syphilis infection; (2) severe hearing or visual impairment; (3) other neurological diseases that can cause cognitive impairment, such as brain tumor, encephalitis, and epilepsy; (4) clinically significant cerebral infarction or cerebral vascular disease; (5) prescription medications that may cause changes in cognitive function; and (6) contraindications for magnetic resonance imaging (MRI) examination. Diagnoses of cognitively normal status, MCI, and dementia were conducted separately by two psychiatric specialists, who also confirmed the inclusion and exclusion criteria. The study was conducted in accordance with the ethical and safety guidelines set forth by the Institutional Review Board of Yeouido St. Mary's Hospital, The Catholic University of Korea (IRB number: SC21TISI0017), and all subjects provided written informed consent.

Measurement of Aβ oligomerization in plasma

MDS-OAβ was used to measure the plasma level of OAβ. We used an ethylene-diamine-tetraacetic acid (EDTA) vacutainer tube to collect the subjects' blood plasma through venipuncture. For the sampling process, we followed a previously described procedure that used EDTA to measure the MDS-OAβ level [21]. The EDTA plasma was centrifuged at 3500 rotations per minute for 15 min at room temperature and then stored in 1.5-ml polypropylene tubes at a temperature between −70 and −80 °C. The samples were then sent to PeopleBio Inc. to assess the levels of MDS-OAβ. Before analysis, the plasma aliquots were defrosted for 15 min at 37 °C. The measurement of MDS-OAβ was performed utilizing the multimer detection system, which has received Conformité Européenne (CE) marking and has been authorized by the Korean Food and Drug Administration [9-15].
MRI acquisition and pre-processing for morphometric analysis

All study participants underwent MRI on a Siemens MAGNETOM Skyra scanner with Siemens head coils. The T1-weighted three-dimensional magnetization-prepared rapid gradient-echo (3D-MPRAGE) sequence used the following parameters: echo time (TE) of 2.6 ms, repetition time (TR) of 1940 ms, inversion time of 979 ms, field of view (FOV) of 230 mm, matrix of 256 × 256, and voxel size of 1.0 × 1.0 × 1.0 mm³. For pre-processing, we used the FreeSurfer software (version 6.0.0, available online at https://surfer.nmr.mgh.harvard.edu) to perform cortical reconstruction and volumetric segmentation of the whole brain [22]. The process involved several steps, which have been described previously [23]. Briefly, these included removal of non-brain tissue using a hybrid watershed algorithm, bias field correction, automated Talairach transformation, and segmentation of subcortical white matter (WM) and deep gray matter (GM) structures. We then normalized the intensity and inflated the cortical surface of each hemisphere to locate both the pial surface and the GM/WM boundary, which allowed us to compute cortical thickness as the shortest distance between the two surfaces at each point across the cortical mantle [24]. For the whole-cortex analyses, we smoothed each subject's cortical map using a Gaussian kernel with a full width at half-maximum (FWHM) of 10 mm. Finally, we parcellated the cerebral cortex based on gyral and sulcal information implemented in FreeSurfer.
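Smoothing kernels such as the one above are conventionally specified by their full width at half-maximum rather than their standard deviation; the two are related by FWHM = 2·√(2 ln 2)·σ ≈ 2.355·σ. A minimal sketch of the conversion (illustrative only, not part of the FreeSurfer pipeline itself):

```python
import math

def fwhm_to_sigma(fwhm_mm: float) -> float:
    """Convert a Gaussian kernel's full width at half-maximum to its
    standard deviation: FWHM = 2 * sqrt(2 * ln 2) * sigma."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# The 10 mm FWHM kernel used for surface smoothing corresponds to a
# standard deviation of about 4.25 mm.
sigma = fwhm_to_sigma(10.0)
print(round(sigma, 2))
```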
Amyloid positron emission tomography

All participants underwent PET scans using 18F-flutemetamol (18F-FMM). Details of 18F-FMM production, data collection, and analysis have been described previously [25]. Each participant's T1 MRI images were used for co-registration, definition of regions of interest (ROIs), and correction of partial volume effects caused by cerebral atrophy. Analysis of the 18F-FMM PET data was based on the standardized uptake value ratio (SUVR) 90 min post-injection. For regional SUVR values, we measured six cortical ROIs (frontal, superior parietal, lateral temporal, striatum, anterior cingulate cortex, and posterior cingulate cortex/precuneus) using the PMOD Neuro Tool. The global cerebral Aβ burden was then obtained by averaging the SUVR values of these six ROIs with the same tool. Lastly, two nuclear medicine radiologists confirmed the presence of Aβ deposition by visual reading.

Statistical analysis

We used the free and open-source data analysis tool Jamovi (version 2.3.18.0) for statistical analysis [26]. Analysis of variance (ANOVA) was used to assess potential differences between groups (NC, A-PET-negative MCI, A-PET-positive MCI, and AD dementia) for continuous variables, and the chi-square test was used for categorical variables. When a group difference was statistically significant, Bonferroni tests were used for post hoc analysis. A two-tailed α level of 0.05 indicated statistical significance for all statistical tests.
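The group comparison just described (one-way ANOVA followed by Bonferroni-corrected post hoc tests) can be sketched in plain Python. The group values below are invented for illustration only; in practice the computation was done in Jamovi:

```python
from itertools import combinations

# Hypothetical MDS-OAβ values (ng/ml) for the four diagnostic groups;
# these are invented for illustration, not the study data.
groups = {
    "NC":           [0.70, 0.75, 0.68, 0.72, 0.74, 0.69],
    "A-PET(-) MCI": [0.90, 0.95, 0.92, 0.99, 0.93, 0.96],
    "A-PET(+) MCI": [1.05, 1.10, 1.08, 1.12, 1.03, 1.09],
    "AD dementia":  [0.85, 0.88, 0.83, 0.90, 0.86, 0.87],
}

def one_way_anova_f(samples):
    """One-way ANOVA F statistic: between-group over within-group mean squares."""
    k = len(samples)                              # number of groups
    n = sum(len(s) for s in samples)              # total observations
    grand_mean = sum(sum(s) for s in samples) / n
    ss_between = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)
    ss_within = sum((x - sum(s) / len(s)) ** 2 for s in samples for x in s)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova_f(list(groups.values()))

# Bonferroni post hoc: with 6 pairwise comparisons, each pairwise test is
# evaluated at alpha = 0.05 / 6 to keep the family-wise error rate at 0.05.
n_pairs = len(list(combinations(groups, 2)))
alpha_per_test = 0.05 / n_pairs
print(f"F = {f_stat:.1f}, per-comparison alpha = {alpha_per_test:.4f}")
```

A significant F statistic licenses the pairwise tests; the Bonferroni step simply divides the nominal α across the number of comparisons.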
Baseline demographic and clinical data

Table 1 shows the baseline demographic data of the NC (n = 39), A-PET-negative MCI (n = 31), A-PET-positive MCI (n = 30), and AD dementia (n = 22) groups. All variables were normally distributed, and there were no significant differences in sex ratio or education level among the four groups. The NC group was significantly younger than the A-PET-positive MCI and AD dementia groups (P < 0.05 for ANOVA and for post hoc analysis with Bonferroni correction), but there were no significant age differences among the A-PET-negative MCI, A-PET-positive MCI, and AD dementia groups. Global cerebral Aβ deposition (global SUVR) was significantly higher in the A-PET-positive MCI and AD dementia groups than in the NC and A-PET-negative MCI groups (P < 0.001 for ANOVA and P < 0.05 for post hoc analysis with Bonferroni correction). Neuropsychological measures differed between groups in the order NC > A-PET-negative MCI > A-PET-positive MCI > AD dementia (P < 0.001 for ANOVA and P < 0.05 for post hoc analysis with Bonferroni correction).

Cortical thickness and MDS-OAβ level

Across all subjects, there was no statistically significant association between MDS-OAβ and cortical thickness. When patients with AD dementia were excluded, MDS-OAβ showed a negative correlation with the cortical thickness of the left fusiform gyrus (age as a covariate; p < 0.05, corrected for multiple comparisons by Monte Carlo simulation; Fig. 5).
Discussion

To the best of our knowledge, this is the first study to investigate MDS-OAβ in A-PET-confirmed patients at different stages of AD. In line with previous research showing that MDS-OAβ peaked in MCI and declined as the disease progressed [16], we found that MDS-OAβ was highest in the A-PET-positive MCI group. Previous studies did not determine patients' cerebral Aβ status using A-PET or CSF studies; thus, we are the first to show that MDS-OAβ was higher in subjects with A-PET-positive MCI (1.07 ± 0.17) than in subjects with A-PET-negative MCI (0.946 ± 0.137). Our results indicate that MDS-OAβ has potential as a pre-screening tool for brain amyloidosis even in the MCI population. PET scans are known to measure cumulative effects and capture topographic information on insoluble aggregates of cerebral Aβ [27,28]. In contrast, blood-based biomarkers may reflect net rates of production and clearance of Aβ in near real time [5-7]. Thus, blood-based and neuroimaging biomarkers do not necessarily yield identical results, and they may diverge in the timing at which abnormal findings appear. Taking this into account, the 2023 Revised Criteria for Diagnosis and Staging of Alzheimer's Disease incorporated blood-based biomarkers as one of two important pivots [29]. The criteria distinguish between imaging and fluid analyte biomarkers and suggest that blood-based and neuroimaging biomarkers are not interchangeable but rather should be used as complements of each other. Moreover, even among the blood-based biomarkers of tau (T), the timing of abnormality onset varies: p-tau 217, p-tau 181, and p-tau 231 become abnormal around the same time as A-PET [30-32], whereas MTBR-243 and non-phosphorylated tau fragments correlate more strongly with tau-PET [33,34]. From this perspective, MDS-OAβ might be used in tandem with other blood-based and neuroimaging biomarkers. Nevertheless, additional studies investigating the association between
MDS-OAβ and other blood-based and neuroimaging biomarkers are required to understand the clinical utility of MDS-OAβ in the trajectory of AD.

It is not clear why patients with AD dementia showed lower MDS-OAβ levels than patients with A-PET-positive MCI. The contemporary amyloid cascade hypothesis suggests that OAβ-dependent toxicity precedes amyloid plaque formation and that OAβ is present at earlier stages of the disease [35]. Others showed that the plasma Aβ1-42/Aβ1-40 ratio increased in the early stages of AD but decreased as Aβ1-42, the monomer most prone to misfolding and aggregation, was deposited into Aβ plaques during disease progression [36-38]. Two longitudinal studies further showed that patients in the AD continuum had higher baseline levels of Aβ1-42 and that a significant decrease of plasma Aβ1-42 from baseline was associated with progression from MCI to AD dementia [39,40]. Taken together, these findings indicate that MDS-OAβ decreases as more of the OAβ is aggregated into Aβ plaques with advancing AD severity. In line with our hypothesis, a study measuring CSF OAβ showed that OAβ increased at disease onset, rose as the disease progressed, and later fell as the disease became more severe [41]. However, longitudinal studies are needed to confirm this interpretation.
Our results confirmed those of previous studies that found a negative association between MDS-OAβ and cognitive function [15]. More importantly, our finding that MDS-OAβ was positively associated with global and regional cerebral Aβ is also consistent with previous research [12]. Since the earlier studies included only NCs and patients with AD dementia [12], or subjects with undefined cerebral Aβ status [15], they could not determine whether the association was driven by cognitive function or by cerebral Aβ deposition. We advanced this work by including diverse patients along the AD continuum, with cerebral Aβ status defined by A-PET and neurocognitive function measured with the CERAD-K. We were thus able to show that MDS-OAβ was associated not only with neurocognitive measures but also with cerebral Aβ per se. Previous studies have indicated that the level of OAβ correlates with the extent of synaptic loss, which would decrease hippocampal function [42]. Since we included patients ranging from NC to AD dementia, OAβ-associated synaptic loss and cognitive decline may have been more prominent. Nevertheless, multi-center studies with larger sample sizes are needed to verify our results.

In line with previous research, patients with A-PET-negative MCI showed higher MDS-OAβ levels than the NCs [14]. Subthreshold Aβ deposition is known to increase the risk of conversion to dementia in patients with A-PET-negative MCI [43]. A significant number of A-PET-negative MCI subjects in our study may have had subthreshold amyloid pathology, which could have contributed to a higher oligomerization tendency than in NCs. From another perspective, MDS-OAβ is known to be associated with neurodegeneration [14]; thus, a large proportion of subjects in the A-PET-negative MCI group may have had greater neurodegeneration associated with non-amyloid pathology. Additional studies investigating the correlation between MDS-OAβ and non-amyloid pathology are needed to confirm this speculation.
A study using VBM analysis previously showed that MDS-OAβ was associated with cortical atrophy in subjects with undefined cerebral Aβ status, but that study mainly included cognitively normal older adults and those with AD dementia [14]. We extended these findings with the novel result that MDS-OAβ had a negative association with cortical thickness in subjects spanning NC, A-PET-negative MCI, and A-PET-positive MCI. Recent studies suggest that soluble AβOs are associated with earlier stages of AD than the fibrillar Aβ of neuritic plaques [44]. Together with our group analysis showing that MDS-OAβ was lower in AD dementia than in A-PET-positive MCI patients, this suggests that MDS-OAβ could be closely linked with earlier neurodegenerative processes in the AD continuum. The detrimental downstream cascade of neurodegeneration after the disease has progressed to dementia may depend more on pathologies other than OAβ, such as tau proteins [45,46]. The anatomical region showing a negative association with MDS-OAβ, the fusiform gyrus, is also noteworthy. The fusiform gyrus is involved in facial and lexical recognition, and it is one of the first brain areas to be affected during the progression of AD [47,48]. In addition, a previous study demonstrated that atrophy of the fusiform gyrus occurs early in the AD trajectory as a consequence of Aβ within the hippocampus [49]. Others showed that the fusiform gyrus is one of the regions exhibiting early elevation in tau-PET uptake [50]. A significant number of our participants may already have had Aβ-associated high cerebral tau burden and consequent neurodegeneration in the fusiform gyrus. However, longitudinal studies combining multiple pathologies, including Aβ, tau, and neurodegeneration, are needed to elucidate the neurobiological mechanisms underlying the role of OAβ in the AD continuum.
Our study has several limitations. It was performed with samples collected from a single center, which may limit the generalizability of our results. It was cross-sectional, so the results can only establish correlations and have limited ability to support causal interpretation. We did not include patients with moderate to severe dementia (CDR score of 2 or higher); thus, we were unable to investigate whether MDS-OAβ decreases further as dementia severity progresses. We also did not obtain tau-PET scans or plasma tau levels, so we could not investigate the correlation of MDS-OAβ with phosphorylated or secreted tau or with AD tau proteinopathy.

Conclusions

We showed that MDS-OAβ increased when neurocognitive symptoms became clinically apparent, was heightened with higher cerebral Aβ burden, and decreased as the disease progressed further to dementia. MDS-OAβ was positively associated with cerebral Aβ burden throughout the different stages of AD. There was also a negative association between MDS-OAβ and cortical thickness among cognitively normal older adults and MCI patients. These findings suggest that MDS-OAβ reflects earlier AD pathology and is not only associated with neurocognitive staging but also correlated with the cerebral Aβ burden in patients along the AD continuum.

Fig. 1 MDS-OAβ level according to disease stage of AD. + Analysis of variance (ANOVA); * post hoc analysis with Bonferroni correction. AD: Alzheimer's disease; A-PET: amyloid PET scan; CN: A-PET-negative cognitively normal older adults; MCI: mild cognitive impairment.

Fig. 3 Association between MDS-OAβ level and global and regional cerebral beta-amyloid deposition in subjects with MCI only. PC: precuneus; PCC: posterior cingulate cortex; SUVR: standardized uptake value ratio.

Table 1 Demographic and clinical characteristics of the study participants. BNT: 15-item Boston Naming Test; CERAD-K: Korean version of the Consortium to Establish a Registry for Alzheimer's Disease; CDR: Clinical Dementia Rating; CP: constructional praxis; CR: constructional recall; MMSE: Mini-Mental State Examination; NS: not significant; SD: standard deviation; VF: verbal fluency; WLM: word list memory; WLR: word list recall; WLRc: word list recognition. a Bonferroni corrected for multiple comparisons.
Fine mapping and candidate gene analysis of qTAC8, a major quantitative trait locus controlling tiller angle in rice (Oryza sativa L.)

Rice tiller angle is an important agronomic trait that contributes to crop production and plays a vital role in high-yield breeding. In this study, a recombinant inbred line (RIL) population derived from the cross of a glabrous tropical japonica rice, D50, and an indica rice, HB277, was used to identify quantitative trait loci (QTLs) controlling rice tiller angle. Two major QTLs, qTAC8 and qTAC9, were detected. While qTAC9 mapped to a previously identified gene (TAC1), qTAC8 was mapped with a BC2F2 population to a 16.5 cM region between markers RM7049 and RM23175. The position of qTAC8 was then narrowed to a 92 kb DNA region using two genetic segregating populations. Finally, one open reading frame (ORF) was identified as a candidate gene based on genomic sequencing and qRT-PCR analysis. In addition, a set of four near-isogenic lines (NILs) was created to investigate the genetic relationship between the two QTLs; the line carrying both qTAC8 and qTAC9 showed an additive effect on tiller angle, suggesting that these QTLs act in different genetic pathways. Our results provide a foundation for the cloning of qTAC8 and the genetic improvement of rice plant architecture.

Introduction

Rice (Oryza sativa L.) is one of the most important food crops in China and the world, and high-yielding rice breeding is the most effective safeguard for food security and sustainable agricultural development. The ideotype breeding strategy is an important approach to increasing grain yield potential in rice. Tiller angle, the angle between the main culm and its side tillers [1], is a decisive factor in building an ideal plant architecture: neither spread-out nor compact rice is optimal for grain production [2].
With a spread-out architecture, plants can decrease humidity and escape some diseases, but they occupy too much space and increase shading and lodging, consequently decreasing photosynthetic efficiency and grain yield per unit area. On the other hand, overly compact plants are disadvantaged in capturing light and in preventing plant diseases and insect pests; thus, an appropriate tiller angle is beneficial for improving rice production [3,4]. Although rice tiller angle has long attracted the attention of breeders because of its significant contribution to plant architecture and yield potential, the genetic mechanisms determining this trait are not fully understood. Rice tiller angle is a complex quantitative trait, controlled not only by genetic factors but also greatly influenced by environmental conditions, such as light intensity, climate, soil, planting density, watering, and fertilizing [4]. In the past two decades, a number of QTLs for tiller angle have been identified on chromosomes 1, 2, 5, 7, 8, 9, 11, and 12 in various rice mapping populations. Three major genes controlling tiller angle were identified using an F2 population [5]. A major QTL (ta9) on chromosome 9, flanked by RZ228 and RG667, together with four other QTLs (QTa1, QTa2, QTa3, QTa8), was detected in an F2:4 genetic segregating population generated from a cross between Lemont and Teqing [6]. In addition, a doubled haploid population generated from a cross between Zhaiyeqing 8 (loose plant architecture) and Jingxi 7 (compact plant architecture) was used to detect three tiller-angle QTLs, qTA-9a, qTA-9b, and qTA-12, which accounted for 22.7%, 11.9%, and 20.9% of the variance, respectively [7].
Moreover, two major QTLs, qTA8-2 and qTA9-2, were detected in a recombinant inbred line population derived from Xieqingzao B/Miyang 46, and no G×E interaction effect was detected for the additive effects of these two QTLs [1]. Five major QTLs for tiller angle, qTA-9, qTA-2, qTA-7a, qTA-7b, and qTA-11, were found in a RIL population from a cross between Asominori and IR24; one of them, qTA-9, was then delimited to a 15 cM region between the RFLP markers C609 and C508 using a CSSL population [8]. To date, only two major QTLs for tiller angle have been cloned. The first, TAC1, was isolated using a large F2 population derived from the cross between the indica rice IR24 and an introgression line, IL55. TAC1 harbors three introns in its coding region and a fourth, 1.5 kb intron in the 3' untranslated region, and encodes a 259-amino-acid protein of unknown function. A mutation in the 3' splicing site of the fourth intron from 'AGGA' to 'GGGA' decreases its transcript levels, resulting in a compact plant architecture; tac1 has been extensively utilized in densely planted rice [4]. The second cloned QTL for tiller angle is PROG1 (PROSTRATE GROWTH 1), a semi-dominant gene on chromosome 7 encoding a newly identified Cys2-His2 zinc-finger transcription factor. PROG1 is predominantly expressed in the axillary meristems and at the site of tiller bud formation, and disrupting prog1 function or inactivating prog1 expression leads to erect growth, increased grain number, and higher grain yield in cultivated rice [9,10]. Dong et al. identified three major QTLs (TAC3, DWARF2 (D2), and TAC1) controlling tiller angle by genome-wide association studies; TAC3 encodes a conserved hypothetical protein of 152 amino acids that is preferentially expressed in the tiller base [11]. Previous reports indicate that tiller angle is affected not only by QTLs but can also be controlled by single genes following Mendelian genetics.
Thus far, several genes controlling tiller angle have been cloned. LAZY1 (LA1), a novel grass-specific gene, functions in shoot gravitropism and plays a negative role in polar auxin transport (PAT). Loss of function of LA1 enhances PAT, altering the distribution of endogenous IAA in shoots and leading to reduced gravitropism and a tiller-spreading phenotype [12,13]. LPA1 (Loose Plant Architecture 1), identified on chromosome 5, encodes a plant-specific INDETERMINATE DOMAIN protein that regulates tiller angle by controlling the adaxial growth of tiller nodes [3]. PIN-FORMED1 and PIN-FORMED2 are auxin efflux transporters; suppressing the expression of rice PIN-FORMED1 or enhancing the expression of rice PIN-FORMED2 alters PAT and increases tiller angle [14,15]. To further elucidate the genetic control of tiller angle, we used a RIL population derived from a cross between a japonica and an indica rice cultivar to pinpoint two major QTLs for this trait, qTAC8 and qTAC9. qTAC8 was narrowed down to a 92 kb region, in which one candidate ORF was identified as the likely gene underlying qTAC8.

Plant materials

A recombinant inbred line (RIL) population of 190 lines was generated from the cross between D50 and HB277, as described previously by Shao [16]. Briefly, one line (RIL-77) carrying the homozygous target segments RM339-RM210 on chromosome 8 and RM201-RM7306 on chromosome 9 from HB277 (S1A Fig) was chosen from the RILs to backcross with D50. The resultant BC2F1 was selfed to obtain a BC2F2 population of 178 plants for genotyping and phenotyping, and its progeny (BC2F2:3) were used for phenotyping (S1B Fig). The RILs and the BC2F2 population were grown at the experimental site in the Fuyang district of Hangzhou in 2012, and the BC2F2:3 plants were grown at the experimental sites in Lingshui (Hainan Province, China) and the Fuyang district in 2013, respectively.
Six plants per row were transplanted, with 18 cm between plants within a row and 18 cm between rows, and four rows were grown per line. Two large segregating populations, BC2F4 and BC2F5, carrying the heterozygous target segment RM339-RM210 on chromosome 8 and the D50 target segment RM201-RM7306 on chromosome 9, were used to screen for recombinant individuals in the target region. The resulting homozygous recombinants were used for genotyping and phenotyping. One BC2F1 plant carrying the heterozygous target segment RM339-RM210 and the HB277 target segment RM201-RM7306 was selected for consecutive backcrosses with D50 to produce BC4F1 and 120 BC4F2 individuals. Finally, four NILs, designated NIL-qTAC8qTAC9, NIL-qTAC8qtac9, NIL-qtac8qTAC9, and NIL-qtac8qtac9, were developed. This set of four NILs was planted at the Lingshui experimental site in 2014 in a randomized block design with three replicates. Each line was grown in a six-row plot with six plants per row and 18 × 18 cm spacing.

PCR and development of molecular markers

DNA was extracted following the protocols described by Murray and Rogers [17,18]. Each 10 μL PCR reaction contained 1 μL 10X PCR buffer, 1 μL dNTPs (2 mM), 1 μL primer (1 mM), 0.1 μL Taq DNA polymerase (5 U/μL), and 1 μL template DNA. Polymerase chain reaction (PCR) comprised an initial denaturation step (95 °C for 3 min), followed by 35 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 45 s, and a final extension step of 72 °C for 10 min. PCR products were separated by electrophoresis and visualized by silver staining. Simple sequence repeat (SSR) markers covering the target region were selected based on the published rice linkage map (http://www.gramene.org). InDel (insertion and deletion) markers used for fine mapping of qTAC8 were designed based on the reference Nipponbare and 93-11 genomic sequences.
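The relative transcript levels reported by the qRT-PCR assays described below are computed with the 2^−ΔΔCt method. A minimal sketch of that calculation, using hypothetical Ct values rather than measured data:

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-quantification method used for
# the qRT-PCR analysis. Ct values below are hypothetical, not study data.

def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of a target gene in a sample versus a calibrator sample.

    dCt  = Ct(target) - Ct(reference)      (normalisation, e.g. to Ubi)
    ddCt = dCt(sample) - dCt(calibrator)
    fold = 2 ** (-ddCt)
    """
    d_ct_sample = ct_target - ct_ref
    d_ct_cal = ct_target_cal - ct_ref_cal
    return 2.0 ** -(d_ct_sample - d_ct_cal)

# Hypothetical example: the target amplifies two cycles earlier (relative to
# the reference gene) in one sample than in the calibrator, i.e. 4-fold
# higher expression.
fold = relative_expression(ct_target=24.0, ct_ref=18.0,
                           ct_target_cal=26.0, ct_ref_cal=18.0)
print(fold)  # 4.0
```

Each one-cycle difference in ΔΔCt corresponds to a two-fold change in expression, which is why Ct values are normalised to a stable reference gene before comparison.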
Sequencing and identification of candidate genes

Genes in the candidate genomic region were predicted using the SIGnAL package (http://signal.salk.edu/). The full-length genomic DNA sequence of the candidate gene was determined by dividing it into several overlapping segments. Sequencing primers were designed according to the sequence of cv. Nipponbare in the target region, and the PCR products were sequenced directly. Primer sequences used in this study are listed in S1 Table.

Total RNA was extracted using RNAiso Plus (Takara), following the manufacturer's instructions. First-strand cDNA was synthesized from 3 μg RNA using the ReverTra Ace-α First Strand cDNA Synthesis Kit (Toyobo). qRT-PCR analysis was performed on a Roche LightCycler 480 instrument using gene-specific primers, with the rice Ubi gene (LOC_Os03g0234200) as the reference gene. Reactions containing SYBR Premix Ex Taq II (TaKaRa) were carried out in a final volume of 20 μL. The 2^−ΔΔCt method [19,20] was used to calculate relative transcript levels. The PCR program comprised an initial denaturation step (95 °C for 4 min), followed by 50 cycles of 95 °C for 15 s and 55 °C for 30 s. Three technical replicates were analyzed for each cDNA sample.

Data analysis

Rice linkage maps were constructed using MAPMAKER/Exp Ver. 3.0, and genetic distances were converted to cM using the Kosambi function. Composite interval mapping (CIM) analysis of QTLs in the RILs and the BC2F2 population was performed with QTL Cartographer Ver. 2.5 (statgen.ncsu.edu/qtlcart/WQTLCart.htm). QTLs were declared where the LOD value exceeded 2.5. Mean phenotypic values were compared using Student's t test. Multiple comparison tests and genotype-phenotype correlation analyses were carried out with the SAS statistical software package.

Primary mapping of tiller angle

Rice tiller angle is an important trait in rice.
In this work, a glabrous tropical japonica rice, D50, which exhibits a relatively compact plant type, and an indica rice, HB277, which displays a relatively loose architecture, were used (Fig 1A and 1B). A RIL population was then used to detect QTLs for this trait. Tiller angle showed a continuous and normal distribution, with a variation range of 0.267-1.010 rad (S2 Fig). QTL mapping identified two major QTLs for tiller angle in this RIL population, qTAC8 on chromosome 8 and qTAC9 on chromosome 9, which mapped within the regions RM339-RM210 and RM201-RM7306, respectively. qTAC8 accounted for almost 33.4% of the variance in tiller angle, while qTAC9 explained around 17.4% (Table 1).

The positive allele (increasing tiller angle) qTAC9 is the same gene as TAC1

Consistent with reports in the literature, we found that qTAC9 was closely linked to TAC1, flanked by the SSR loci RM201 and RM1026 [4], and a SNP in the 3' splicing site of the fourth intron of TAC1 was detected after sequencing the TAC1 alleles of D50 and HB277 (S3A Fig). qRT-PCR analysis showed that the expression of TAC1 in HB277 was significantly higher than in D50 (S3B Fig). These results coincide with the report that a mutation in the 3' splicing site of the fourth intron from 'AGGA' to 'GGGA' decreases the expression of TAC1 and leads to a compact plant. Hence, qTAC9 and TAC1 likely correspond to the same gene.

Characterization of qTAC8

To investigate qTAC8, we measured the tiller angle of NIL-qTAC8 and NIL-qtac8. The tiller angle of NIL-qTAC8 was significantly larger than that of NIL-qtac8 at the ripening stage, and the angle at the tiller base of NIL-qTAC8 was significantly larger than that of NIL-qtac8 (Fig 2A and 2B).
We also found that the two NILs exhibited almost the same tiller angle at the tillering stage, but NIL-qTAC8 showed a greater tiller angle than NIL-qtac8 from the heading stage onward (Fig 2C). There were no significant differences in tiller number, plant height, or spikelet fertility between NIL-qTAC8 and NIL-qtac8 (data not shown).

Fine mapping of qTAC8

To further investigate qTAC8, a BC2F2 population of 178 individuals was established (S1B Fig). The tiller angle of the BC2F2 population showed a normal distribution. According to the progeny test, the BC2F2 individuals could be classified into three subgroups at the target region: homozygous for D50 (DD), homozygous for HB277 (HH), and heterozygous (DH). Based on the tiller angle performance in the progeny test, paired t tests were used to compare the subgroups. Significant differences occurred between DD and the other two subgroups: the DD genotype presented a larger tiller angle (0.462 rad) than HH (0.294 rad), while the tiller angle of the heterozygotes was intermediate (S4 Fig). These results show that qTAC8 is a partially dominant QTL from D50 with a positive additive effect. Seven additional SSR markers were used to genotype the BC2F2 population and to construct a local linkage group covering 26.2 cM. The phenotypes of the BC2F2 and BC2F2:3 populations were investigated, and qTAC8 was mapped within a 16.5 cM interval flanked by RM23097 and RM23201. qTAC8 accounted for 26.3% of the phenotypic variance, with an additive effect of 0.07 rad, in the BC2F2 population, while in the BC2F2:3 it explained 47.1% and 51.8% of the variance in Hainan and Hangzhou, respectively (Table 2).
To verify this, four informative homozygous recombinants were identified with four markers within RM23097-RM23201 and grouped into four genotypes according to the positions of the recombination breakpoints and allelic composition. Multiple comparisons between tiller angle and recombinant genotypes, using the two non-recombinant lines (h9 and h5) as controls, narrowed qTAC8 down to a 1.4 cM interval flanked by RM7049 and RM23175 (Fig 3A and 3B). For further fine mapping of qTAC8, a segregating population of 2,000 individuals derived from BC2F4 lines carrying a heterozygous segment at the qTAC8 region was used to identify recombinants between RM7049 and RM23175 (Fig 4A). The identified recombinants were then analyzed with seven additional markers located between RM7049 and RM23175. Multiple comparisons placed qTAC8 in a 199 kb region between In12 and RM2767. To position qTAC8 more precisely, a large segregating population of 6,000 individuals was derived from the BC2F5; a total of 40 recombinants were identified with the two markers In12 and RM2767, and four polymorphic markers within this region were developed to genotype these individuals. Multiple comparisons were again conducted between the genotypes of the homozygous recombinants and the phenotypes of their progeny. Finally, qTAC8 was placed on BAC P0431A03, in a 92 kb region flanked by In2 and In36 (Fig 4B).

Analysis of candidate genes for qTAC8

Based on the genome annotation database (http://signal.salk.edu/), the critical 92 kb region contains seven predicted ORFs, encoding a TCP family transcription factor, an ATP-dependent Clp protease adaptor protein, a transposon protein, two retrotransposon proteins, a zinc knuckle family protein, and a basic helix-loop-helix protein (Fig 4C and Table 3).
First, the transposon and retrotransposon proteins were excluded as candidates for qTAC8, leaving ORF1, ORF2, ORF6, and ORF7. Genomic sequencing of these four candidate genes in D50 and HB277 revealed nucleotide diversity in all except ORF2. No genomic DNA products of ORF6 were amplified in either parent, which might be explained by a deletion of this gene during rice evolution; thus, ORF6 was also excluded as a candidate gene for qTAC8 (data not shown). Quantitative real-time PCR was used to characterize the transcripts of the three remaining candidate genes in the tiller base at the heading stage of NIL-qTAC8 and NIL-qtac8. No significant difference in expression was observed for ORF1 or ORF2, while the RNA expression level of ORF7 in NIL-qTAC8 was significantly higher than in NIL-qtac8 (Fig 5A), indicating that ORF7 is the likely candidate for qTAC8. We also found that the expression level of ORF7 was increased at the heading and ripening stages, but not at the tillering stage, which coincides with the phenotypes of the NIL pair NIL-qTAC8 and NIL-qtac8 at the three developmental stages (Fig 2C and Fig 5B). Together, these data suggest that ORF7 is the candidate gene for qTAC8. Its cDNA is 1038 bp long, comprises two exons, and encodes a 345-amino-acid protein containing a putative bHLH conserved domain. Although seven single nucleotide polymorphisms (SNPs) were found in ORF7 between the NIL pair, none caused variation at the amino acid level. Further genomic sequencing indicated that several nucleotide differences in the promoter and 3' UTR region of ORF7 might be responsible for the altered gene expression (S5 Fig).

Expression analysis of other tiller angle-related genes

Tiller angle is known to be controlled by TAC1, LPA1, LAZY1, and PROG1.
To investigate the expression pattern of these genes in the NILs used in this study, qRT-PCR analysis was conducted. We found that LPA1, LAZY1 and PROG1 were all affected by the positive allele of ORF7, whereas there was no difference in the expression of TAC1 between the NIL pair (Fig 6).

Genetic relationship between qTAC8 and qTAC9

To further study the genetic relationship between qTAC8 and qTAC9, a set of four NILs, NIL-qTAC8qTAC9, NIL-qTAC8qtac9, NIL-qtac8qTAC9 and NIL-qtac8qtac9, was produced to evaluate tiller angle (Fig 7A-7E). The results showed that the tiller angle of NIL-qTAC8qTAC9 was significantly larger than in the other three NILs, with NIL-qtac8qTAC9 and NIL-qtac8qtac9 presenting the smallest. The tiller angle of NIL-qTAC8qTAC9 displayed an additive effect, indicating that qTAC8 and qTAC9 may participate in different genetic pathways (Fig 7F).

Discussion

Plant ideotype has been recognized as an advanced breeding concept and is regarded as highly associated with high grain yield in rice breeding [21]. Rice traits for plant ideotype include plant height, stem strength, leaf morphology, panicle morphology and tiller angle, among other critical traits. Ideal Plant Architecture 1 (IPA1) is a major gene affecting rice productivity: introduction of the semi-dominant IPA1 gene into the japonica rice cultivar Xiushui 11 resulted in increased seed yield [22]. Tiller angle also plays a central role in rice yield formation, and an appropriate tiller angle is beneficial for improving rice production [3,4].
Thus, exploration of new genes controlling tiller angle would facilitate strategies to manipulate plant ideotype and increase rice yield. In this study, we identified two major QTLs controlling tiller angle on chromosomes 8 and 9, named qTAC8 and qTAC9, respectively. Using an F7 RIL population derived from a cross between D50 and HB277 (Table 1), qTAC9 was located between the SSR loci RM201 and RM7306, and the positive allele (increasing tiller angle) at qTAC9 originated from HB277, which has a loose plant architecture (Table 1). Previous studies revealed that the region near qTAC9 on the long arm of chromosome 9 is a hot spot for rice tiller angle [1,4,6,7]; one QTL within this interval, TAC1, was isolated and cloned using a large F2 population derived from a cross between the indica rice IR24 and the introgression line IL55 [4]. Sequencing analysis indicated that a SNP ('AGGA' to 'GGGA') occurred between D50 and HB277, and qRT-PCR analysis revealed that the RNA level of TAC1 was significantly reduced in D50 compared with HB277, suggesting that qTAC9 is an allele of TAC1, a gene that confers a spread-out plant architecture and is widely present in indica rice varieties (S3A and S3B Fig). The partially dominant qTAC8, whose positive allele originated from D50 with its compact plant architecture, lies near another QTL previously identified for tiller angle: qTA8-2, which controls rice tiller angle, was first mapped between R1394 and RZ66 [1], in the same region as qTAC8, suggesting that qTA8-2 may be an allele of qTAC8. Our primary QTL mapping, fine mapping, candidate gene prediction and qRT-PCR analysis indicated that ORF7, which encodes a basic helix-loop-helix protein, might be the gene underlying this QTL (Fig 4 and Table 3).
Previous studies indicate that the semi-dominant gene PROSTRATE GROWTH 1 (PROG1), which affects plant architecture, also encodes a basic helix-loop-helix transcription factor, suggesting that qTAC8 might function similarly to PROG1 [9,10]. Analysis of genetic interactions among tiller angle genes is required to better understand the pathways controlling rice tiller angle formation. qTAC8 (positive allele from D50) and qTAC9 (positive allele from HB277) are two semi-dominant genes for tiller angle, and the double combination NIL-qTAC8qTAC9 presented an additive effect for this trait, suggesting that qTAC8 and qTAC9 are involved in different pathways. This is consistent with the similar expression of TAC1 observed in NIL-qTAC8 and NIL-qtac8 (Fig 6 and Fig 7). Interestingly, we found that LPA1, LAZY1 and PROG1 were all down-regulated in NIL-qtac8 at the ripening stage, implying that these genes might function in the same pathway as qTAC8 (Fig 6). In addition, qTAC8 encodes a predicted transcription factor; thus, whether qTAC8 acts directly or indirectly on LPA1, LAZY1 and PROG1 remains to be investigated. Gene expression in plants is basically of two types, constitutive or with a specific expression pattern. Although some genes are expressed throughout the plant's life, they may function during specific growth stages and contribute to a certain phenotype. The genetic control of tiller angle is very complex, and phenotypes can change across developmental stages. The japonica rice ZH11 is a typical cultivar that displays a large tiller angle at the seedling and tillering stages but usually presents a more compact architecture after the heading stage. The rice mutants lazy1 and prog1 present a large tiller angle during all growing periods, while lpa1 presents a large tiller angle at the seedling stage [3].
In this study, NIL-qTAC8 and NIL-qtac8 exhibited no difference in tiller angle at the tillering stage, whereas qTAC8 expression increased and transcript levels differed significantly between the NILs during the heading and ripening stages. It would be interesting to uncover the molecular mechanisms by which tiller angle is regulated during the different stages of rice development.
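The qRT-PCR comparisons described above (e.g., ORF7 transcript levels in NIL-qTAC8 versus NIL-qtac8) are commonly quantified with the 2^-ΔΔCt method. The sketch below illustrates that calculation only; the Ct values are hypothetical and not taken from this study:

```python
# Illustrative 2^-(delta-delta Ct) calculation for relative qRT-PCR expression.
# All Ct values below are hypothetical placeholders, not the study's data.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Fold change of a target gene in a sample vs. a control sample,
    normalized to a reference (housekeeping) gene."""
    delta_ct_sample = ct_target - ct_reference            # normalize the sample
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl  # normalize the control
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2 ** (-delta_delta_ct)

# e.g., a target gene in one NIL vs. the other (hypothetical Ct values):
fold = relative_expression(24.0, 18.0, 26.0, 18.0)
print(fold)  # 4.0 -> four-fold higher expression in the first sample
```

A fold change near 1 would indicate no expression difference between the two lines, as reported here for ORF1 and ORF2.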
2018-04-03T02:29:19.135Z
2017-05-25T00:00:00.000
{ "year": 2017, "sha1": "d90db04c71bae1a85f1436b871573b41ba2e9aa7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0178177&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d90db04c71bae1a85f1436b871573b41ba2e9aa7", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
253504908
pes2o/s2orc
v3-fos-license
Worsening COVID-19 Disease Course After Surgical Trauma: A Case Series

Introduction

Current guidelines from the American Society of Anesthesiologists recommend postponing elective surgery on COVID-19-positive patients for a minimum of four to twelve weeks. However, literature focusing on the outcomes of COVID-19-positive patients undergoing surgery is scarce. In this case series, the outcome of asymptomatic COVID-19 patients undergoing acute or semi-urgent surgery was evaluated.

Case Presentation

A case series of four patients between 32 and 82 years old with a confirmed SARS-CoV-2 infection undergoing acute or semi-urgent surgery is presented here. All four patients were asymptomatic for COVID-19 and developed severe respiratory failure following endo CABG, caesarian section, thyroidectomy, or abdominal surgery. ICU admission, together with invasive ventilation, was necessary for all patients. Two patients required venovenous extracorporeal membrane oxygenation treatment. A mortality of 50% was observed.

Conclusions

In conclusion, the present case series suggests that elective surgery in asymptomatic SARS-CoV-2-infected patients might elicit an exacerbated COVID-19 disease course. This study endorses the current international guidelines recommending postponing elective surgery for SARS-CoV-2-positive patients for seven weeks, depending on the severity of the surgery and perioperative morbidities, to minimize postoperative mortality.

Introduction

Since the start of the coronavirus disease 2019 (COVID-19) pandemic, elicited by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), most healthcare systems have encountered a dramatic decrease in the capacity to treat surgical patients owing to the reallocation of resources. Due to the continuing pandemic, the waiting lists for elective surgical procedures are growing (1,2).
Although research on morbidity and mortality in the non-surgical COVID-19 population is being conducted and published extensively, few studies have focused on the outcome of the surgical COVID-19 population. Despite the limited literature, current guidelines from the American Society of Anesthesiologists recommend postponing elective surgery for COVID-19-positive patients for a minimum of four to twelve weeks, depending on the symptomatology and comorbidities (3). Some evidence suggests that surgery performed more than seven weeks after the diagnosis of SARS-CoV-2 infection is associated with mortality similar to baseline (4,5). Therefore, the recommended waiting period for elective surgery in asymptomatic COVID-19 patients is seven weeks, unless the risk associated with postponement of surgery outweighs the risk of postoperative complications and mortality (4,5). In case of persistent symptoms, perioperative morbidity and mortality risk may remain high even after seven weeks, and multidisciplinary perioperative management might be required (4). This article presents a case series of four asymptomatic patients with confirmed SARS-CoV-2 infection undergoing acute or semi-urgent surgery. Postoperatively, all four cases developed a significantly worsening disease course and severe respiratory complications related to the COVID-19 infection. The main objective of this study is to support the current guidelines to postpone (semi-)elective surgery in asymptomatic COVID-19 patients.

Case 1

In March 2021, a 61-year-old male patient was hospitalized, awaiting coronary artery bypass grafting and aortic valve replacement for triple-vessel coronary disease with moderate aortic stenosis. His medical history included marked obesity, insulin-dependent type 2 diabetes mellitus, chronic renal insufficiency, peripheral vascular disease, and an ischemic stroke one year prior, from which he suffered no permanent disability.
The decision was made to perform a hybrid approach, with coronary artery bypass grafting of the LAD and RCA followed by PCI of the circumflex artery. The day before surgery, standard screening with a nasopharyngeal PCR swab unexpectedly returned positive for SARS-CoV-2, revealing a high viral load. At the time, the patient was on the waiting list for his SARS-CoV-2 vaccination. He had no respiratory or infectious symptoms. Given the severity of his coronary lesions, the decision was made to proceed with surgery the next day. Perioperatively, poor quality of the internal mammary arteries was noted, due to which revascularization of the right coronary artery could not be performed. Postoperatively, the patient was admitted to the intensive care unit (ICU). On the first postoperative day, the patient was successfully extubated, though he retained a high oxygen dependency requiring support with a high-flow oxygen nasal cannula. A few hours after extubation, he developed active chest pain, dynamic ECG changes, and a rising high-sensitivity troponin T (peaking at 1330 ng/L, normal ≤ 14.0 ng/L), consistent with the diagnosis of perioperative myocardial infarction. Urgent coronary angiography and percutaneous coronary intervention were performed, including stenting of a critical ostial stenosis of the patient's right coronary artery. Postoperative chest radiography demonstrated bilateral hilar infiltrates and postoperative atelectasis. Dexamethasone 10 mg once daily was started in light of his positive COVID-19 screening and supplemental oxygen dependency. Although signs of myocardial ischemia had resolved after stenting, the patient's respiratory status slowly declined in the following days, culminating in severe respiratory failure on postoperative day 5. A trial of non-invasive ventilation (NIV) was initiated. However, despite 24 hours of NIV, his respiratory status worsened, and the team proceeded to endotracheal intubation and prone ventilation.
Chest radiography showed marked bilateral pulmonary infiltrates, consistent with severe COVID-19 pneumonia. Following intubation, dexamethasone was switched to methylprednisolone 40 mg IV twice daily, and empiric antibiotic therapy (piperacillin-tazobactam) was initiated because of rising inflammatory markers. Respiratory cultures confirmed superinfection with Serratia marcescens and Streptococcus pneumoniae, for which moxifloxacin was added based on antimicrobial susceptibility testing. Despite maximal ventilatory support and prone ventilation, oxygenation remained inadequate, and the decision was made to place the patient on venovenous extracorporeal membrane oxygenation (VV-ECMO) on postoperative day 6. After initiating VV-ECMO, adequate oxygenation and ventilation were achieved, and ventilator settings were adjusted to maximize lung-protective ventilation. In the following days, the patient's oxygen requirement gradually decreased. VV-ECMO could be stopped on postoperative day 16 without further respiratory complications. The patient was progressively weaned off ventilatory support and eventually extubated on postoperative day 24. Further rehabilitation was uneventful, and on postoperative day 30, he was ultimately discharged to the cardiology ward.

Case 2

An 82-year-old male patient presented to our hospital's emergency department in December 2020 because of diffuse abdominal pain and nausea. His medical history included arterial hypertension, hyperlipidemia, stable coronary artery disease, appendectomy, and a mitral valve repair following chordal rupture. Workup with abdominal computed tomography (CT) revealed small bowel obstruction, most likely caused by adhesions from his previous appendectomy. There were no signs of bowel ischemia. Blood work showed acute kidney injury with normal inflammatory markers. A nasopharyngeal PCR swab taken three days before admission to the hospital had been positive for SARS-CoV-2, revealing a high viral load.
Upon presentation to the hospital, the patient had no supplementary oxygen requirement or respiratory complaints. He was not vaccinated against COVID-19, since vaccines were not yet available at the time. The patient was evaluated by the general surgery service and admitted for conservative management. However, due to the failure of conservative treatment, with recurrent nausea and abdominal distension, the decision for surgical treatment was taken two days later. Preoperative chest radiography showed clear lung fields. During the procedure, adhesions were cut, and since every segment of the bowel appeared viable, no resections were performed. Postoperatively, the patient was admitted to the ICU in an extubated and hemodynamically stable state. There were no signs of respiratory distress while receiving oxygen at 4 L/minute via nasal cannula. His C-reactive protein (CRP) had risen to 160 mg/L (normal ≤ 5 mg/L). Empiric antibiotics (amoxicillin-clavulanic acid) were started postoperatively. The day after ICU admission, the patient demonstrated increased respiratory distress, and a high-flow nasal cannula was started. On postoperative day 3, the patient's general condition declined, with further rising inflammatory markers (CRP of 400 mg/L), abdominal pain, and altered mental status. Chest and abdominal CT revealed persistent paralytic ileus without post-surgical complications and bilateral basal pulmonary atelectasis with minor pleural effusion. Respiratory cultures were obtained, antibiotic therapy was converted to piperacillin-tazobactam, and dexamethasone 6 mg daily was started. Cardiac ultrasound demonstrated normal left ventricular function with no signs of fluid overload and stable mild-to-moderate mitral and aortic insufficiency. The patient's oxygen requirement gradually increased, and non-invasive ventilation was started on postoperative day 5. Repeat chest radiography revealed bilateral hilar accentuation with infrahilar opacification.
By this time, respiratory cultures were reported positive for Hafnia alvei, E. coli, and P. aeruginosa, all susceptible to the patient's current antimicrobial therapy. Given the patient's continuously high oxygen requirement and progressive respiratory distress, endotracheal intubation was performed and mechanical ventilation was started on the fifth day of his ICU stay. Two days later, the patient's respiratory status declined further, and prone ventilation was started. Although this initially improved his overall respiratory status, refractory hypoxemia with hypercapnia set in. Follow-up chest radiography on day 8 revealed multiple bilateral opacities compatible with severe viral bronchopneumonia. Given the patient's age and general condition, he was not deemed a candidate for extracorporeal membrane oxygenation. Despite otherwise maximal supportive therapy, the patient's respiratory status progressively deteriorated, and the patient died after eight days of ICU care.

Case 3

A 32-year-old woman presented to our hospital in October 2021 with complaints of lower abdominal pain, headache, and sore throat. She was 36 weeks and 5 days into a spontaneous pregnancy. Her medical history included a prior caesarian section, a molar pregnancy, and a missed abortion. Vital signs upon admission were all normal. Bloodwork was remarkable for thrombocytopenia (platelets 88 × 10⁹/L, range 150–400 × 10⁹/L), slight anemia (hemoglobin 11.3 g/dL, range 11.7–15.5 g/dL), and a slightly elevated CRP (28 mg/L, normal ≤ 5 mg/L). White blood cell count was in the normal range, as were serum creatinine and LDH. Liver function tests were slightly elevated, with an AST of 39 U/L (normal ≤ 35 U/L) and a GGT of 83 U/L (normal ≤ 40 U/L). Nasopharyngeal PCR screening for SARS-CoV-2 infection upon admission was positive, with a viral load of more than 10 million copies/mL. The patient had no respiratory symptoms or fever and had refused a COVID-19 vaccine because of her pregnancy.
Following her presentation, she was hospitalized for further observation. After two days, the patient developed progressive thrombocytopenia (platelets decreasing to 61 × 10⁹/L) and rising liver function tests (AST of 140 U/L, ALT of 90 U/L, GGT of 150 U/L), raising concern for HELLP syndrome. An emergent caesarian section was performed the same day. Preoperatively, she experienced no respiratory symptoms, with normal oxygen saturation while breathing room air. The procedure was complicated by an abdominal wall hematoma, caused by an arterial abdominal wall hemorrhage visualized on CT, necessitating red blood cell transfusion and eventually a debridement three days later. Abdominal CT also revealed bilateral basal pulmonary atelectasis. After the debridement procedure, the patient experienced mild dyspnea and was treated with supplemental oxygen at 2 L/minute via nasal cannula. On the 6th day of her hospital course, the patient was admitted to the ICU for hemodynamic observation. Upon admission, the patient still showed no signs of respiratory distress, but her oxygen requirement had increased to 4 L/minute. Mild bilateral basal infiltrates were confirmed on chest radiography. Amoxicillin-clavulanic acid was administered prophylactically for five days after the debridement procedure. Given her bilateral infiltrates, mild oxygen requirement, and the earlier positive COVID-19 PCR test, dexamethasone 6 mg daily was initiated. Her respiratory status gradually declined on the third day of her ICU stay, and therapy with a high-flow nasal cannula was started. Follow-up chest radiography showed progression of the pulmonary infiltrates. A chest CT on day 9 of her ICU admission revealed severe bilateral consolidations, ground-glass opacities, and a large pneumomediastinum. Consequently, she was intubated, and prone ventilation was initiated because of respiratory failure. Airway pressures were minimized given her pneumomediastinum.
The next day, refractory hypoxemia was evident, and venovenous extracorporeal membrane oxygenation (VV-ECMO) was initiated. Given the patient's clinical course and increased inflammatory markers, dexamethasone was replaced by IV methylprednisolone 40 mg twice daily, and empirical piperacillin-tazobactam was started. This was subsequently de-escalated to amoxicillin-clavulanic acid for hospital-acquired pneumonia, with bacterial cultures showing Klebsiella pneumoniae. In the following weeks, the patient's respiratory status improved, and she was successfully decannulated on ICU day 38, after 28 days of VV-ECMO. The next day, a percutaneous tracheostomy was placed, after which she was successfully weaned off ventilator support. The tracheostomy was removed on day 47. The patient was neurologically alert, interactive, and cooperating in her rehabilitation, resulting in discharge from the ICU in good health on day 52. Anesth Pain Med. 2022; 12(5):e127356.

Case 4

In November 2021, an 82-year-old male was admitted to the hospital for a semi-urgent complete thyroidectomy because of severe refractory amiodarone-induced hyperthyroidism. The patient's medical history included arterial hypertension, mild chronic kidney disease, paroxysmal atrial fibrillation, cholecystectomy, and a laparoscopic low anterior resection for stage I adenocarcinoma. The hyperthyroidism had been treated with high-dose methylprednisolone, starting five weeks before the thyroidectomy. Preoperative screening with a nasopharyngeal PCR two days before surgery was positive for SARS-CoV-2 and showed a high viral load of more than 10 million copies/mL. The patient had been vaccinated twice with the Pfizer vaccine, seven and six months prior to this test, respectively. Given the patient's severe symptoms of hyperthyroidism, the thyroidectomy was performed despite the confirmed COVID-19 infection.
In the immediate postoperative phase, the patient progressed well, with a minor cough but without any signs of respiratory distress. Methylprednisolone 8 mg daily was started as part of a tapering regimen for his preoperative high-dose methylprednisolone intake. On the second postoperative day, the patient was noted to be hypoxic, with a peripheral oxygen saturation of 90% on room air without respiratory distress, for which supplemental oxygen was started. His inflammatory markers were mildly elevated, with a CRP of 58 mg/L (normal ≤ 5 mg/L). Chest radiography revealed mild bilateral basal infiltrates. Methylprednisolone was replaced by dexamethasone 6 mg once daily, according to local practice in COVID-19 pneumonia. In the following days, the patient's inflammatory markers and oxygen requirement gradually increased, eventually requiring transfer to the ICU on postoperative day 5. Upon ICU admission, the patient's inflammatory markers were further elevated (white blood cell count of 13.62 × 10⁹/L, range 4.5–11.0 × 10⁹/L, and CRP of 130 mg/L), and therapy with a high-flow nasal cannula was started. The next day, his respiratory status further declined, and non-invasive ventilation (NIV) was initiated. However, this was poorly tolerated by the patient, who showed progressive respiratory distress and hypoxemia, for which he was intubated and placed in the prone position. Low-dose vasopressors had been started several hours before initiation of NIV because of fluid-refractory hypotension, with progression to severe hemodynamic instability after intubation. Empiric antibiotics (piperacillin-tazobactam) were started for presumed severe septic shock. Bacterial cultures up to that point had not yielded any positive results. The patient quickly developed severe hypoxic and hypercapnic respiratory failure despite maximal respiratory support, in combination with progressive lactic acidosis due to his refractory shock state. The patient subsequently died two days after his admission to the ICU.
Discussion

This case series describes four asymptomatic COVID-19 patients who developed severe respiratory failure following surgery. All patients eventually required ICU admission and invasive ventilation. Two patients were put on VV-ECMO, and only two patients (50%) ultimately survived. Since the emergence of the COVID-19 pandemic, there has been mounting literature pointing toward worse outcomes in COVID-19 patients undergoing surgery compared with non-COVID-19 patients. Lei et al. published a cohort study including 34 patients who all developed complicated COVID-19 pneumonia after elective surgery (6). Admission to the ICU was required in 15 (44%) cases, and seven (21%) patients died during their ICU stay. Shortly after, a more extensive international cohort study was published by the COVIDSurg Collaborative (7). This study, including 1,128 patients with perioperative SARS-CoV-2 infection, concluded that postoperative pulmonary complications occurred in 51.2% of patients and were associated with high mortality. The mechanisms explaining the observed high mortality rate and exacerbation of pulmonary complications due to SARS-CoV-2 infection in surgical patients are as yet unclear. A first hypothesis might be that these observations reflect the natural, spontaneous course of the disease itself. The activation of multiple inflammatory pathways leading to the hyperinflammatory state and cytokine storm in COVID-19 has been shown to result in acute respiratory distress syndrome (ARDS) and multi-organ failure in the non-surgical population (8,9). This hypothesis is supported by the fact that all patients described in this study developed respiratory failure meeting ARDS criteria. One of the key molecules described in the pathogenesis of this cytokine storm in COVID-19 is interleukin (IL)-6 (10).
Transcription of IL-6 is stimulated when SARS-CoV-2 enters the host cell by binding to the angiotensin-converting enzyme-2 (ACE2), after which IL-6 is released from the cell and binds to a membrane-bound IL-6 receptor present on immune cells. This leads to the activation of a signaling pathway that is controlled by a negative feedback mechanism. However, occupation of ACE2 by SARS-CoV-2 leads to reduced degradation of angiotensin-2 (Ang2). The increased concentration of Ang2 indirectly leads to the release of a serum IL-6 receptor, which can bind IL-6 and has the ability to trigger a widespread immune response through interaction with other signaling proteins that are expressed by immune cells, as well as by endothelial cells and fibroblasts (10). Recent data, however, suggest that surgery itself may accelerate and exacerbate the disease progression of COVID-19 (6). Prospective studies during the early COVID-19 pandemic reported the median time from onset of symptoms to ARDS and mechanical ventilation to be 7-12 days and 10-15 days, respectively (11,12). In contrast, disease progression was remarkably faster in three of the four described cases in this study. The time between surgery and the start of mechanical ventilation because of respiratory collapse (given that they were asymptomatic before surgery) was five days for case 2 and six days for cases 1 and 4. Only in case 3 (pregnancy) was mechanical ventilation initiated later (11 days postoperative). This suggests that concomitant surgery may accelerate a severe COVID-19 disease course, although a difference in SARS-CoV-2 variants could also influence this finding. Second, surgical trauma itself is known to trigger an immune response (13). The local immune response in trauma plays a role in tissue repair and is mediated by the activation of different immune cells and the release of inflammatory cytokines (14). 
It is already recognized that excessive tissue damage elicited by severe trauma can trigger an exaggerated local and systemic immune response, which, in turn, can lead to multiple organ dysfunction (14). The effect of this immune response and its possible interactions with the immune response elicited by a COVID-19 infection have not yet been investigated in a surgical population infected with SARS-CoV-2. In this population, trauma might induce a postoperative immunosuppressive state with increased susceptibility to septic complications, as described in the non-COVID surgical population (13). Another hypothesis is that surgery enhances the activation of multiple inflammatory pathways, leading to a more overwhelming hyperinflammatory response and cytokine storm than that already observed in the non-surgical COVID-19 population. This mechanism might be similar to the second-hit phenomenon in trauma, where secondary insults (e.g., mechanical ventilation and concomitant infection) can lead to increased immune activation and organ damage in an already susceptible trauma patient. In the same way, it could be hypothesized that surgical trauma (and everything surrounding the surgical procedure, like mechanical ventilation, transfusion, and infection) could serve as a second hit in the COVID-19-infected patient. This hypothesis seems likely given the fact that common pathways of inflammatory response after surgery, trauma, and SARS-CoV-2 infection have been described in the literature (13,14). These pathways are all characterized by the expression of inflammatory cytokines like TNF-alpha and interleukin (IL)-6 (13,14). A final hypothesis to explain our observations might be a dysregulation of cortisol homeostasis. Surgery is associated with a rise in cortisol levels postoperatively, which can remain high for several days after major surgery (15).
Postoperative morbidity is also associated with persistently high cortisol levels and disruption of the cortisol circadian rhythm and its regulatory mechanisms (15). Similarly, a recent meta-analysis revealed an association between severe COVID-19 disease and high cortisol levels (16). It has been postulated that COVID-19 might influence adrenal function and the hypothalamic-pituitary-adrenal axis (17). Multiple mechanisms to explain this interaction have been suggested, including indirect effects like microvascular thrombosis and direct effects like viral invasion of the adrenal glands and binding of SARS-CoV-2 to ACE2 receptors in the hypothalamus and pituitary gland (17). It has also been suggested that molecular mimicry might play a role, as was demonstrated after SARS-CoV-1 infection. Indeed, SARS-CoV-2 antibodies might also target ACTH due to similarities in amino acid structure between SARS-CoV-2 and ACTH (18). Therefore, an interaction between the effects of surgery and COVID-19 on adrenal function seems plausible and warrants fundamental research.

Limitations

This article has several limitations influencing its applicability to general clinical practice. Although we described several cases with a worsening COVID-19 disease course after surgery, quantitative data on this phenomenon in the COVID-19 population, and comparisons to the non-COVID-19 population, are still limited. Additional large prospective cohort and epidemiological studies are required to estimate the magnitude of this phenomenon. Such studies could allow for a better understanding of the complex mechanisms behind the interactions between surgical trauma and COVID-19 disease and clarify relevant risk factors. Next, we must be aware that COVID-19 is a disease in evolution. The emergence of new variants of the SARS-CoV-2 virus, with differences in clinical presentation and severity, limits the generalizability of our findings.
Current guidelines regarding the timing of elective surgery in this patient population are based on data obtained during the first wave of COVID-19, and findings might differ for current variants.

Conclusions

In conclusion, the present case series suggests that elective surgery in asymptomatic SARS-CoV-2-infected patients might elicit an exacerbated COVID-19 disease course. This study endorses the current international guidelines recommending postponing elective surgery for SARS-CoV-2-positive patients for seven weeks, depending on the severity of the surgery and perioperative morbidities, to minimize postoperative mortality.
[18F]FDG PET/MRI in rectal cancer

We conducted a systematic literature review on the use of [18F]FDG PET/MRI for staging/restaging rectal cancer patients with PubMed, Scopus, and Web of Science, based on the PRISMA criteria. Three authors screened all titles and abstracts and examined the full texts of all the identified relevant articles. Studies containing aggregated or duplicated data, review articles, case reports, editorials, and letters were excluded. Ten reports met the inclusion criteria. Four studies examined T staging and one focused on local recurrences after surgery; the reported sensitivity (94–100%), specificity (73–94%), and accuracy (92–100%) varied only slightly from one study to another. The sensitivity, specificity, and accuracy of [18F]FDG PET/MRI for N staging were 90–93%, 92–94%, and 42–92%, respectively. [18F]FDG PET/MRI detected malignant nodes better than MRI, resulting in treatment changes. For M staging, [18F]FDG PET/MRI outperformed [18F]FDG PET/CT and CT in detecting liver metastases, whereas it performed worse for lung metastases. The results of this review suggest that [18F]FDG PET/MRI should be used for rectal cancer restaging after chemoradiotherapy and to select patients for rectum-sparing approaches, thanks to its accuracy in T and N staging. For M staging, it should be combined at least with a chest CT scan to rule out lung metastases.

Introduction

Colorectal cancer is the most common gastro-intestinal cancer and the third leading cause of cancer-related death worldwide. Rectal cancer accounts for about one in three cases of colorectal tumor [1] and is clinically suspected mainly from evidence of bloody stools, symptoms of obstruction, anemia, or polypoid lesions on colonoscopy [2]. Initial staging should provide information regarding: rectal wall invasion and involvement of adjacent organs or important anatomical structures (T stage), loco-regional nodal involvement (N stage), and distant metastases (M stage) [3].
Patients with locally advanced rectal cancer (TNM stages II or III) are at high risk of local recurrence and should undergo preoperative chemo-radiotherapy (pCRT) [4]. Disease staging is therefore mandatory to tailor the most appropriate treatment strategy to each patient. The standard of care for locally advanced rectal cancer currently includes pCRT followed by radical surgery (total mesorectal excision) with curative intent, and adjuvant chemotherapy [5,6]. Accurate staging also plays an irreplaceable part in relation to the recent introduction of rectum-sparing approaches (transanal local excisions or watch-and-wait protocols) as alternatives to conventional surgery for patients showing a major or complete clinical response to pCRT on restaging [7,8]. Current guidelines recommend MRI for local staging, and contrast-enhanced thoraco-abdominal CT for detecting distant metastases [9]. [18F]FDG PET/CT is reportedly a good predictor of complete response after pCRT [10], performing better than conventional CT in identifying distant metastases [11]. No comparable evidence of its value in nodal staging has been reported so far [12]. Combined [18F]FDG PET/MRI has recently been proposed as an effective imaging modality for rectal cancer patients, capable of generating high-resolution anatomical and functional data. This combined imaging modality can also spare patients the radiation exposure [13] associated with the CT component of PET/CT. In fact, a mean dose reduction in the range of 7.40–9.16 mSv has been reported for PET/MRI compared with PET/CT [14]. PET/MRI also achieves a high soft-tissue contrast that is useful for examining solid organs, such as the liver [15], and it can be implemented with specific MRI sequences, such as diffusion-weighted imaging (DWI), if necessary [16]. The combination of the above-mentioned complementary advantages of PET and MRI might theoretically improve diagnostic accuracy and treatment decision-making for rectal cancer [15].
A wide consensus on the clinical usefulness of [18F]FDG PET/MRI in the staging and restaging of rectal cancer patients has yet to be reached, however. Hence, our present study aimed to conduct a systematic literature review and to analyze publications on the use of [18F]FDG PET/MRI for staging or restaging rectal cancer patients.

Search strategy

PubMed, Web of Science, and Scopus were used for our literature search, from inception through to December 2019, based on the PRISMA criteria [17]. The following terms and their variants were used in the title, abstract, and keyword fields, and MeSH fields where available, adapting the search syntax where necessary: "PET/MRI AND rectal AND cancer; PET-MRI AND rectal AND cancer; PET AND MRI AND rectal AND cancer; PET/MRI AND rectal AND tumor; PET-MRI AND rectal AND tumor; PET AND MRI AND rectal AND tumor; PET/MRI AND rectal AND tumour; PET-MRI AND rectal AND tumour; PET AND MRI AND rectal AND tumour". The reference lists of the articles included in this review, and narrative reviews published in the last 10 years, were also searched manually for articles not identified by the initial literature search. Our selection criteria excluded studies not written in English, studies containing aggregated data or data duplicated from previously published work, and review articles, case reports, editorials, and letters. No restrictions were placed on study design or population.

Systematic review protocol and data extraction

Three authors (S.V., F.C., and L.B.) independently screened all titles and abstracts generated by the first search, then examined the full texts of all relevant articles identified according to the inclusion criteria. Any disagreement regarding article suitability was settled by discussion between the assessors or, failing this, by referral to a fourth senior author (D.C.).
Among 476 studies obtained from the literature search, 10 reports published between June 2015 and December 2019 met the inclusion criteria (Fig. 1). Data from these studies were extracted using a standardized pro forma in Microsoft Excel (Redmond, WA, USA). The information extracted from each study included: author, year of publication, study design, number of cases, and other characteristics shown in Table 1.

Results

Bailey et al. [18] investigated whether extended PET acquisition times in the pelvis using [18F]FDG PET/MRI could increase the detection rate of potentially metastatic lymph nodes in rectal cancer patients. Twenty-two patients (drawn from among 29 studies) with biopsy-proven rectal adenocarcinoma underwent a whole-body simultaneous [18F]FDG PET/MRI study, with a mean FDG dose of 300.8 ± 59.2 MBq and image acquisition 72.8 ± 22.3 min after the injection. The protocol included a 3-min and a 15-min PET/MRI acquisition of the pelvis. The 15-min PET acquisition of the pelvis enabled the identification of roughly 40% more FDG-avid lymph nodes than the classic 3-min acquisition. It was recommended as a useful addition to the dedicated pelvic MRI protocol for rectal cancer staging. There was also less inter-reader variability on PET/MRI than on MRI alone. Brendle et al. [19] compared [18F]FDG PET/CT with a consecutively acquired [18F]FDG PET/MRI, including DWI, in 15 colorectal cancer patients, conducting a separate analysis for mucinous tumors. PET/MRI/DWI was comparable with PET/CT in terms of overall accuracy, more accurate for detecting liver lesions (74% vs. 56%), and equally accurate for peritoneal and lymph node metastases. Crimì et al. [20] examined the restaging of 36 patients with locally advanced rectal cancer after pCRT, highlighting a slightly higher accuracy in T (92% vs. 89%) and N staging (92% vs. 86%) for whole-body [18F]FDG PET/MRI than for MRI alone.
PET/MRI findings also prompted changes to the treatment plan in 11% of cases when hypermetabolic tumor residuals were detected among the areas of fibrosis. Jeong et al. [16] measured the mean apparent diffusion coefficient (ADC) and the max/peak/mean standardized uptake values (SUV) of rectal tumors in a sample of nine patients with rectal adenocarcinomas undergoing both [18F] FDG PET/CT and PET/MRI sequentially. Even though mean SUVmax, SUVpeak and SUVmean values of the primary lesions obtained by hybrid PET/MRI were significantly lower than SUVs determined by PET/CT, the quantitative evaluation of PET images revealed high correlation between maximum, peak and mean SUVs obtained using the two modalities (SUVmax, ρ = 0.82, p = 0.007; SUVpeak, ρ = 0.93, p < 0.001; SUVmean, ρ = 0.77, p = 0.016). A significant inverse correlation between the ADC and the max/ peak/mean SUV emerged in hybrid PET/MRI, suggesting an association between the tumor's metabolic activity and water diffusivity, and a complementary role of these two parameters in rectal cancer staging. Kang et al. [21] compared the diagnostic value of [18F] FDG PET/MRI with contrast-enhanced CT in 51 patients with colorectal cancer. The PET/MRI protocol included additional MRI images of any organs suspected of harboring secondary lesions. PET/MRI provided more information than CT in 27.5% of patients, proving particularly useful for characterizing lesions indeterminate on CT in the liver (n = 7), at the surgical site (n = 4), and in the lungs (n = 1), and for identifying additional malignant lesions in the liver not visible on CT (in 3.9% of patients). PET/MRI consequently led to the treatment strategy being adjusted for 21.6% of patients. Compared with CT, PET/MRI was less able to reveal small metastatic lesions in the lung; it detected 52.9% of the pulmonary metastatic nodules visible on CT. Lee et al. 
[22] tested the diagnostic accuracy of a protocol consisting of a whole-body FDG PET/Dixon-VIBE MRI, followed by dedicated MRI (enhanced and unenhanced) of the liver in cases of suspected secondary lesions. Fifty-nine patients with colorectal cancer were considered. For primary colorectal cancers (n = 14), the protocol proved highly sensitive (100%) in detecting primary lesions, highly accurate in T staging (93%), and very sensitive in N (93%) and M (100%) staging. In 38 patients with suspected metastatic liver lesions, its sensitivity varied from 94.4% (before treatment) to 75% (after neoadjuvant treatment). In short, the study demonstrated that PET/MRI performed well in staging/restaging colorectal cancer and hepatic metastases of chemo-naïve patients. Paspulati et al. [23] compared the diagnostic accuracy of [18F]FDG PET/MRI and [18F]FDG PET/CT in a sample of 12 patients. PET/MRI showed a better diagnostic accuracy in T staging and an at least comparable diagnostic value in N and M staging. On a per-patient basis, the true positive rate was 71% for PET/CT and 86% for PET/MRI, while the true negative rate was 100% for both imaging modalities. According to Plodeck et al. [24], [18F]FDG PET/MRI seems to be good at revealing pelvic recurrences of rectal cancer, and has an important role in the diagnosis and management of this disease. It demonstrated a high sensitivity (94%), specificity (94%), and accuracy (94%) in a pool of 44 patients (and 47 examinations). Two false negative cases came to light, both in patients whose PET/MRI was performed after neoadjuvant chemotherapy; and in one of these patients the histological subtype was mucinous adenocarcinoma. The PET/MRI results prompted adjustments to the treatment strategy in eight patients, and previous imaging studies on five of these patients had revealed no or uncertain malignant lesions. Rutegard et al. 
[25] investigated the role of adding hybrid imaging ([18F]FDG PET/MRI and [18F]FDG PET/CT) in the staging and restaging of rectal cancer in 24 patients. In one case, PET alone detected a liver metastasis, thus upstaging the patient to M1 and prompting a change of therapeutic approach that led to a hepatic metastasectomy. Yoon et al. [26] studied the diagnostic yield of [18F]FDG PET/MRI for the purpose of M staging in a pool of 71 patients with newly diagnosed advanced mid-to-low rectal cancer. The [18F]FDG PET/MRI examination included additional dedicated MRI sequences of the liver (without and with gadoxetic acid) and the rectum. It was compared with a "standard of care" (SOC) protocol that included contrast-enhanced chest and abdominopelvic CT and rectal MRI. Overall specificity was much higher for PET/MRI (98%) than for SOC (73%), and it increased to 100% (vs. 88%) in patients with inconclusive findings regarding M stage using SOC. PET/MRI proved very helpful in excluding metastatic disease thanks to its high specificity, without raising the risk of missing secondary lesions (overall sensitivity was 94% for both PET/MRI and SOC). PET/MRI also enabled a better characterization of most of the incidental findings emerging from SOC, including additional neoplasms. In a restaging setting, two authors [20,22] found PET/MRI slightly superior to MRI for T staging purposes (with a sensitivity of 92.9% and 92.3%, and an accuracy of 92% and 89%, respectively). In the study by Paspulati et al. [23], PET/MRI proved more efficient than PET/CT. For example, it enabled detailed T staging and revealed mesorectal fascia involvement in a patient with a T3 lesion not seen on [18F]FDG PET/CT. It also improved the identification of hypermetabolic tumor remnants among the areas of fibrosis after chemoradiotherapy, eventually prompting changes in therapeutic strategy [20] (Fig. 2).
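The sensitivity, specificity, and accuracy figures reported across these studies all derive from standard confusion-matrix counts. A minimal sketch (plain Python; the function name and the example counts are illustrative, not taken from any of the cited studies):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard per-patient (or per-lesion) diagnostic metrics from
    confusion-matrix counts: true/false positives and negatives."""
    sensitivity = tp / (tp + fn)            # fraction of true disease detected
    specificity = tn / (tn + fp)            # fraction of negatives correctly ruled out
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy
```

For example, a hypothetical cohort with 15 true positives, 1 false positive, 1 false negative, and 15 true negatives yields sensitivity, specificity, and accuracy of 0.9375 each, i.e., roughly the 94% figures reported by Plodeck et al. [24].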
When two of the studies compared PET/MRI and MRI for N staging purposes, one of them [22] found a comparable sensitivity for the two techniques (92.9%), while the other [20] reported a superiority of PET/MRI over MRI (with an accuracy of 92% vs. 86%). Paspulati et al. [23] found the accuracy of PET/CT and PET/MRI similar in the detection of regional lymph node metastases. Combined [18F]FDG PET/MRI has been shown to detect hypermetabolic lymph nodes better than MRI alone, leading to treatment changes as a result [20] (Fig. 3). With a sensitivity of 96% in detecting metastases, [18F]FDG PET/MRI proved more useful before CRT than afterwards (post-CRT its sensitivity was 75%), according to Lee et al. [22]. Another study, by Brendle et al. [19], also found a lower overall sensitivity in detecting secondary lesions in the liver (71%) or lymph nodes (47%) when mucinous tumors were considered separately. On the other hand, its sensitivity in detecting metastases (94%) was similar to that of SOC (CT + MRI), mainly due to its poor rate of detection for small lung nodules [26]. Kang et al. [21] demonstrated that PET/MRI provided additional information vis-à-vis the findings of other imaging modalities (such as CT) in a fair number of patients (27.5%). Compared with PET/CT, PET/MRI was also able to identify three more metastatic lesions involving the rectal and perirectal region in one study on 12 patients [23], and one additional metastatic lesion in a sample of 24 patients in another report [25].

[Fig. 2, partial caption: c axial b1000 diffusion-weighted imaging (DWI) confirming the signal restriction of the mass (arrow); d axial PET/MR fused image with hypermetabolism of the liver lesion (arrow).]

Discussion

Pooling the information from the above-mentioned studies, we conclude that [18F]FDG PET/MRI is better than [18F]FDG PET/CT or MRI, especially in a restaging setting [14]. The initial studies reviewed here show that [18F]FDG PET/MRI performs better in TNM staging than [18F]FDG PET/CT or MRI/SOC alone.
In fact, the anatomic information provided by the low-dose, non-contrast CT component of the PET/CT examination is often insufficient to determine the extent of local tumor invasion or to characterize incidental lesions. By contrast, the superior soft-tissue contrast resolution, the multiplanar imaging acquisition capability, and the additional value of functional sequences such as diffusion-weighted imaging allow better visualization of soft-tissue and musculoskeletal structures, which in turn enhances the diagnostic efficiency of PET/MRI over PET/CT or CT alone. Its greater diagnostic accuracy suggests a pivotal role for [18F]FDG PET/MRI in orienting the approach to therapy for rectal cancer. In fact, PET/MRI findings reportedly prompted changes to the patient treatment strategy in three of the examined studies [20,21,26]. [18F]FDG PET/CT was able to predict a histopathologically complete response after pCRT for rectal cancer with a sensitivity and specificity of 71% and 76%, respectively [10]. For restaging purposes, [18F]FDG PET/MRI combined the already-reported good accuracy of PET images with the anatomical detail of MRI, enabling better T staging [20][21][22][23]. It has been shown that the tumor response to pCRT often consists of partial or complete replacement of viable tumor tissue with fibrosis; consequently, tumor under-staging in this setting might be due to the inability of PET/CT to detect small clusters of residual disease, or to the failure of morphological sequences (such as conventional T2-weighted MRI) and functional ones (such as diffusion-weighted imaging) to identify small remnant tumor deposits in the initial tumor site. By combining morphological, functional, and metabolic data, PET/MRI appears to be able to accurately re-stage patients after pCRT [20].
The reported limitations of [18F]FDG PET/MRI concern the difficulty of detecting mucinous lesions (probably due to their low FDG avidity [19]) and residual lesions after pCRT (due to the confounding low-grade uptake of post-irradiation fibrous tissue). MRI alone has some limitations when it comes to N staging in rectal cancer and in discriminating between benign and malignant loco-regional nodes [27]. As for N staging, a meta-analysis by Lu et al. [12] found insufficient evidence to support the routine clinical use of [18F]FDG PET/CT for N staging in colorectal cancer, since it achieved a sensitivity of 42.9% and a specificity of 87.9%. In the group of articles analyzed here, the accuracy of [18F]FDG PET/MRI for N staging purposes ranged between 42% and 92% [20][21][22][23]; in other words, its accuracy was basically the same as or only slightly superior to that of MRI, and comparable with [18F]FDG PET/CT. The analyzed studies confirm that the soft-tissue contrast and resolution of PET/MRI can be very useful when applied to abdominal solid organs, such as the liver [13]. PET/MRI showed an advantage over PET/CT or contrast-enhanced CT in characterizing liver lesions that would otherwise have been too small to characterize or remained indeterminate. This is thanks to the opportunity to obtain diffusion-weighted imaging sequences and to use hepatocyte-specific contrast media, which can detect lesions in the hepatobiliary phase, providing an excellent contrast between liver and lesion [16,19,21,22,25,26]. PET/MRI can have an important role in identifying the distant metastases of rectal cancer, despite the known intrinsic weakness of MRI in identifying metastatic lung lesions (even after the introduction of new dedicated MRI sequences) [28,29]. The limited ability of MRI (compared with CT) to identify small lung nodules is reflected in the co-acquired PET findings.
There are three main reasons for this: (1) motion artifacts due to cardiac or respiratory movements; (2) the relatively low proton density of lung parenchyma, with a consequently low signal-to-noise ratio; and (3) susceptibility artifacts related mainly to the multiple interfaces between air and lung tissue. New, specific sequences have recently been developed to address this issue. The use of ultra-short echo time (UTE) sequences could facilitate the visualization of lung parenchyma, achieving a greater sensitivity [28] in the detection of lesions [29]. An acceptable diagnostic accuracy has yet to be reached, however. Moreover, it is worth noting that FDG uptake, as displayed by SUV measurements, is only partially comparable between PET/MRI and PET/CT [30], and this needs to be borne in mind when comparing PET/CT and PET/MRI studies. Finally, there were differences among the examined studies in the time interval between FDG injection and PET acquisition (60–120 min), in the duration of each PET bed for the whole-body scan (2–8 min), and in the duration of the dedicated pelvic PET bed (15–42 min). These factors might have influenced the FDG uptake of malignant lesions across the different studies and, therefore, the diagnostic accuracy of the techniques. The main limitations of PET/MRI are the long acquisition times (33–85 min) and the high cost of the examination. The conclusions that can be drawn from our review unfortunately have several limitations. First of all, four of the studies included in the review grouped rectal and colon cancers together, though the approach to their staging and treatment may differ. Second, some studies considered a mixed population of patients examined before and after pCRT, or with different histopathological subtypes (mucinous and non-mucinous tumors). Finally, the size of the samples analyzed in the studies was generally small.
Conclusion

Although the role of [18F]FDG PET/MRI in rectal cancer has yet to be established, the evidence pooled in this review suggests the following indications for [18F]FDG PET/MRI in the management of rectal cancer patients. 1. It should be used mainly for rectal cancer restaging after pCRT, or for identifying recurrences. 2. It could be a precise tool for choosing which patients to direct toward rectum-sparing approaches instead of classic surgery, because of its accuracy in T and N staging compared with PET/CT or MRI. 3. When used for M staging, it should be combined with at least a chest CT scan to rule out any lung metastases. The MRI examination protocol that could be suggested to maximize the diagnostic accuracy of PET/MRI while keeping an acceptable acquisition time should include: (1) whole-body T2-weighted and T1-weighted sequences plus a sequence for lung parenchyma evaluation; (2) a pelvic protocol with T2-weighted sequences in three planes, DWI, and contrast-enhanced sequences; (3) a dedicated protocol for the upper abdomen, preferably after injection of a hepato-specific contrast agent. The heterogeneity of the included studies did not allow us to perform a meta-analysis of the performance of [18F]FDG PET/MRI in rectal cancer. However, the present review offers a complete overview of the literature on the topic.

Funding Open Access funding provided by Università degli Studi di Padova.

Data availability The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
Comparison of Immune Response Assessment in Colon Cancer by Immunoscore (Automated Digital Pathology) and Pathologist Visual Scoring

Simple Summary

The immune response to colon cancer (CC) is highly variable among patients and is clinically relevant. In this study, we compared the immune response assessment for early-stage CC, as measured by Immunoscore (IS), to pathologist visual scoring of the CD3+ and CD8+ T-cell densities at the tumor site (T-score). The objectives were to determine the inter-observer agreement between pathologists and the concordance between the two methods. Agreement between pathologists was minimal to weak. Moreover, a weak concordance between the two methods was observed, leading to misclassification of 48% of cases by pathologist scoring. Due to the high level of immune-infiltrate heterogeneity, resulting in disagreement of interpretation among pathologists, IS is unlikely to be reproduced via non-standardized methods.

Abstract

Adjunction of the immune response into the TNM classification system improves the prediction of colon cancer (CC) prognosis. However, immune response measurements had not been used as robust biomarkers in clinical practice until the introduction of Immunoscore (IS), a standardized assay based on automated, artificial-intelligence-assisted digital pathology. The strong prognostic impact of the immune response, as assessed by IS, has been widely validated, and IS can help to refine treatment decision-making in early CC. In this study, we compared pathologist visual scoring to IS. Four pathologists evaluated tumor specimens from 50 early-stage CC patients and classified the CD3+ and CD8+ T-cell densities at the tumor site (T-score) into two (High/Low) categories. Individual and overall pathologist scoring of the immune response (before and after training for immune response assessment) were compared to the reference IS (High/Low).
Pathologists' disagreement with the reference IS was observed in almost half of the cases (48%), and training only slightly improved the accuracy of the pathologists' classification. Agreement among pathologists was minimal, with a Kappa of 0.34 and 0.57 before and after training, respectively. The standardized IS assay outperformed expert pathologist assessment in the clinical setting.

Introduction

The important role of the immune response to the tumor has been demonstrated in numerous solid cancers [1][2][3][4][5][6][7], including Colon Cancer (CC) [8][9][10][11][12][13][14][15][16], with a high level of tumor-infiltrating lymphocytes (TILs) being consistently associated with a favorable prognosis. Various methods with different cutoff values have been used to assess immune cell infiltration. Hematoxylin and eosin (H&E) staining of tumor tissue is the most frequently used histochemical stain in clinical and research laboratories. However, with this method, it is difficult to count the number of TILs in cancer cell nests [2]. The reproducibility of immune response evaluation by visual examination of H&E slides was previously reported and showed a low level of concordance between the 11 expert observers (4% of 268 cases evaluated) [2]. Due to the heterogeneity of TILs and the subjectivity of their evaluation on H&E slides by pathologists, such a method was not reliable enough for a therapeutic decision-making process. Therefore, markers with more accuracy and added clinical value are needed. Moreover, consensus recommendations for scoring TILs for diagnostic purposes, translational research, and clinical trials are required. The integration of the IS assay into pathology clinical practice can help to ensure a higher level of accuracy and efficiency in the characterization of the immune response [17][18][19][20].
We previously showed that, of all the immune cells involved in the in situ immune reaction, CD3+ and CD8+ T-lymphocytes (specific populations of tumor-infiltrating lymphocytes; TILs) provided the optimal combination for prognostic purposes. The accuracy of prediction of survival times for the different patient groups was greater with a combined analysis of the center of tumor (CT) and the invasive margin (IM) regions than with a single-region analysis [21]. CD3 and CD8 were also chosen as markers because of the quality of the staining and the stability of these antigens. We then developed and validated the immune-based international consensus IS assay [2]. Immunoscore® values are reported based on predefined cutoffs and given one of five category scores (IS 0 to IS 4) that are combined into two clinically relevant risk categories: IS Low (IS 0–1) and IS High (IS 2–4). These distinguish tumors with low versus high immune infiltration, which are associated with high versus low risk of recurrence, respectively. IS is now recommended for use in conjunction with the TNM classification system to estimate prognosis for early-stage CC patients in the ESMO Clinical Practice Guidelines [22,23]. In a large international study of more than 3500 stage I–III CC patients, IS identified, among high-risk stage II patients, a large fraction (70%) whose risk of recurrence was similar to that of low-risk stage II patients when not treated with chemotherapy [3,23,24]. This strongly suggests a clinical utility for the IS assay in identifying patients with a low biological recurrence risk despite the presence of pathologic high-risk features that might otherwise trigger adjuvant chemotherapy. These patients may avoid unnecessary treatment and its attendant toxicities.
In addition, IS was shown to be a powerful prognostic marker for stage III CC patients in two randomized phase III clinical trials [3,4], and it also predicted the response to adjuvant chemotherapy in two independent cohorts [4,6]. The analytical validation of IS has been demonstrated previously [20,25]. Immunoscore® was deemed to be a robust, reproducible, quantitative, and standardized immune assay, with a high prognostic performance, independent of all of the prognostic markers currently used in clinical practice. The immune response was introduced for the first time into the latest (5th) edition of the WHO classification of digestive system tumors as "an essential and desirable diagnostic criterion for CC". Furthermore, the 2020 ESMO Clinical Practice Guidelines for CC included IS to refine the prognosis, stratify patients according to risk, and thus adjust the chemotherapy decision-making process, although its role in predicting an adjuvant chemotherapy effect is uncertain. Therefore, it is important to compare the performance of the standardized consensus digital pathology IS to an evaluation of the immune response by visual examination of H&E slides, or by visual examination of CD3+- and CD8+-stained slides, by expert pathologists. Here, we compared the performance of automated digital pathology (using IS) and pathologist visual scoring of CD3+ and CD8+ T-cell densities at the tumor site (T-score) for the assessment of the immune response in patients with CC. The performance of each of the two methods in assessing immune response status, and the impact of misclassification of the risk of recurrence on patient management and treatment decisions, were evaluated.
Material and Methods

This study compared the immune response assessment in early-stage CC by two methods: (1) expert pathologist evaluation of CD3+- and CD8+-stained slides at the tumor site (T-score), in two steps: (i) without training and (ii) with training; and (2) artificial-intelligence-assisted digital pathology (IS).

Case Selection

Representative high-resolution scanned images of CD3+ and CD8+ single-stained tumor specimens from 50 patients were selected from the IDEA-France study [4]. The mean densities of CD3+ and CD8+ T-cells quantified in the CT and IM were converted into IS with predefined cutoffs [26,27]. Immunoscore® uses standardized percentile values (0–100%), and the algorithm categorizes the continuous Immunoscore® into five groups (0, 1, 2, 3, and 4). A predefined two-level classification (two groups of recurrence risk) uses predefined cutoffs corresponding to IS-Low, with a mean percentile of 0–25% (IS 0–1), and IS-High, with a mean percentile of >25–100% (IS 2–4), consistent with the validated assay cutoffs determined in the Society for Immunotherapy of Cancer (SITC) study [6]; IS-Low indicates a poor prognosis (high risk of relapse) and IS-High a good prognosis. The subset of 20 cases for which IS was around the 25% mean percentile (mP) clinical cutoff point was analyzed separately. This subgroup consisted of 10 cases with IS-Low (≤25%) and 10 cases with IS-High.

Pathologist Visual Assessment

Four expert pathologists with broad experience in gastrointestinal cancer pathology independently assessed the immune infiltration (CD3+ and CD8+ T-cells) for the 50 selected cases through qualitative visual and manual scoring via an online secured-access web gallery. Pathologists were asked to classify each marker density into three categories (Low, Intermediate, and High), and a final two-class T-score (Low or High) was generated in accordance with clinical reporting. The images were analyzed blindly, without training instructions.
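The two-level risk classification described above maps the mean percentile (mP) of CD3+/CD8+ densities onto the predefined 25% cutoff. A minimal sketch (plain Python; the function name is illustrative, not part of the Immunoscore® assay):

```python
def is_risk_category(mean_percentile: float) -> str:
    """Two-level Immunoscore classification from the mean percentile (mP)
    of CD3+/CD8+ T-cell densities, using the predefined 25% cutoff:
    IS-Low (IS 0-1) for mP in [0, 25], IS-High (IS 2-4) for mP in (25, 100]."""
    if not 0.0 <= mean_percentile <= 100.0:
        raise ValueError("mean percentile must lie in [0, 100]")
    return "IS-Low" if mean_percentile <= 25.0 else "IS-High"
```

Cases with mP at or just around this 25% boundary are exactly the borderline subgroup of 20 cases analyzed separately in this study.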
To avoid a learning bias, the cases were analyzed by each pathologist in a pre-specified, individualized, and randomized order.

Pathologist Training

In a second step, the pathologists were trained to assess the densities of each marker at the 25% mP (separating Low and Intermediate staining) across a selection of four illustrative images (Table 1). To help them recognize heterogeneity in T-cell infiltrates from different regions of multiple tumors yielding equivalent T-scores, pathologists were further provided a set of 12 images representing a spectrum of CD3+ and CD8+ densities across the CT and IM regions (Table 1). The four pathologists repeated the immune infiltration evaluation on the same 50 selected cases after this training and reported their classification category for CD3+ and CD8+ T-cells in both the CT and IM and the overall category (Low/Intermediate/High) for each case. A final 2-class T-score (Low or High) was generated in accordance with the clinical reporting of the Immunoscore®. The immune infiltration assessment data were captured in an Excel data collection spreadsheet and analyzed. [Table 1, not reproduced here, lists the training images by infiltration pattern for both CD3+ and CD8+: IM more invaded than CT, CT more invaded than IM, and IM and CT with similar densities.]

Repeatability Evaluation of IS

The 50 reference cases were internally analyzed three times to evaluate the repeatability of the IS method. The IS module (Immunoscore® Analyzer, Veracyte, Marseille, France) was used for automatic detection of the CT and IM, quantification of CD3+- and CD8+-stained T-cells, and classification of the reference cases into the clinical IS categories.
Each IS repetition (identified as DP1, DP2, and DP3) and validation of the results were carried out by two histotechnicians who evaluated the technical parameters, including immunoperoxidase staining quality (the histotechnicians are experienced in performing quality control analysis of IS cases). The IS results and the name of the histotechnician were captured in an Excel data collection spreadsheet.

Statistical Analysis

The statistical analysis explored the following types of concordance: between individual pathologist assessment and IS for all cases (n = 50) and for the subset of cases around the clinical 25% IS cutoff (n = 20), before and after training; inter-pathologist agreement on the visual assessment of the T-score; and among the three repeated IS assessments.

Agreement Evaluation

Cohen's Kappa coefficient was used to evaluate agreement of Immunoscore® results between the two rating methods, IS and pathologists' scoring. Fleiss' Kappa, an extension of Cohen's Kappa, was used to compute the agreement between multiple observers' assessments. In accordance with McHugh et al. [28], the level of agreement was categorized according to the Kappa value as none (0-20%), minimal (21-39%), weak (40-59%), moderate (60-79%), strong (80-90%), or almost perfect (>90%). A negative Kappa indicated less agreement than would be expected by chance, given the marginal distributions of ratings.

Comparison of Individual Pathologist Visual Assessment to IS before Training

Without previous training, the agreement between pathologists' T-score classification and the reference IS for the immune infiltration assessment of the 50 CC cases was weak (Figure 1, plain dark blue bars). The mean agreement (Cohen's Kappa) for pathologists' T-score classification compared to the reference IS was 0.47 (minimum and maximum agreements of 0.29 and 0.59).
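The two-rater agreement statistic and the categorization used above can be sketched as follows (an illustrative implementation, not the study's statistical code; the threshold table follows the McHugh scheme quoted in the text):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between two raters scoring the same cases."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the marginal label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

def mchugh_level(kappa):
    """Qualitative agreement label for a kappa value, per the text."""
    pct = kappa * 100
    for upper, label in [(20, "none"), (39, "minimal"), (59, "weak"),
                         (79, "moderate"), (90, "strong")]:
        if pct <= upper:
            return label
    return "almost perfect"
```

For example, a kappa of 0.47 maps to "weak" and 0.67 to "moderate", matching the levels reported for the pathologists' scores.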
The maximum agreement rate with the reference IS was 82% (Cohen's Kappa of 0.59) for pathologist #2 and 80% for the three other pathologists (#1, #3, and #4), with Cohen's Kappa values from 0.29 to 0.53 (Figure 1, plain dark blue bars). The lowest percentage of negative agreement between T-score and IS, 25%, was observed for pathologist #4, while the lowest positive percent agreement was observed for pathologists #2 and #3 (79%). The disagreement rates for T-score classification versus the reference IS were even higher for the 20 cases with IS percentiles around the clinical cutoff. Only a minimal level of agreement was reached by the pathologists' visual evaluation compared to the reference IS: the mean agreement (Cohen's Kappa) for pathologists' T-score classification compared to the reference IS was 0.30 (minimum and maximum agreements of 0.10 and 0.50; Figure 1, plain light blue bars).

Comparison of Individual Pathologist Visual Assessment to IS after Training

After training, a moderate level of agreement between the pathologist T-score visual assessment and the reference IS on the 50 cases was reached for one pathologist (#3; Cohen's Kappa of 0.67), while it remained weak for the other pathologists (Cohen's Kappa ranging from 0.46 to 0.56). The mean agreement (Cohen's Kappa) for pathologists' T-score classification compared to the reference IS was 0.54 (minimum and maximum agreements of 0.46 and 0.67; Figure 1, dotted dark blue bars). The best agreement rate for classification of the 20 cases around the clinical cutoff after training was observed for pathologist #3 (70%), with a corresponding weak Cohen's Kappa of 0.40 (versus 0.30 before training; Figure 1, dotted light blue bars).
The impact of training was further assessed by evaluating the four different types of "agreement" (i.e., combining concordance or discordance before and after training, Figure 2). On average, training had a positive impact in 20% of the analyzed cases (Type 3). However, training had no impact for 18% of the cases (Type 2) and even worsened the concordance between the visual assessment and IS in 15% of the cases (Type 4). Figure 2.
Distribution of agreement types between T-score visual assessment and the reference IS (Low, Intermediate, High) in a set of 50 colon cancer cases, before and after training (the average share of cases falling in each type is reported in parentheses).

Inter-Pathologist Agreement with Visual Assessment of T-Score

The inter-observer agreement for the T-score classification of the 50 selected CC cases was weak before training (Fleiss' Kappa of 0.34) and remained weak after training (Fleiss' Kappa of 0.57; Figure 3). The agreement was minimal or nonexistent for the 20 cases around the clinical IS cutoff point (Fleiss' Kappa of 0.13 and 0.37 before and after training, respectively; Figure 3).
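Fleiss' Kappa, used for the multi-rater agreement above, generalizes Cohen's statistic to any number of raters. A minimal sketch (illustrative only, not the study's statistical code) from per-case category counts:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for m raters.

    `ratings` is a list with one dict per case, mapping a category label
    to the number of raters who chose it; every case must have the same
    total number of raters m.
    """
    n = len(ratings)
    m = sum(ratings[0].values())
    categories = set().union(*ratings)
    # Mean per-case agreement among rater pairs
    p_bar = sum(
        (sum(c * c for c in case.values()) - m) / (m * (m - 1))
        for case in ratings
    ) / n
    # Chance agreement from overall category proportions
    p_e = sum(
        (sum(case.get(cat, 0) for case in ratings) / (n * m)) ** 2
        for cat in categories
    )
    return (p_bar - p_e) / (1 - p_e)
```

With four raters, unanimous scoring on every case yields a kappa of 1, while maximal within-case disagreement drives the statistic negative.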
Disagreement of Pathologist Visual Assessment with the Reference IS

Pathologist disagreement with the reference IS, defined as the percentage of cases for which at least one pathologist assessment was not concordant with the reference IS, was observed in nearly half of the cases without training (48%; 24 out of 50; Figure 4A) and in 30% of the cases (15 out of 50) after training (Figure 4B). The analysis of the 20 CC cases around the IS clinical cutoff showed even lower concordance, with an overall disagreement rate as high as 80% and 65% before and after training, respectively. The pathologists agreed on only three High T-scores and one Low T-score out of the 20 cases before training (Figure 4A) and on 5 and 2 cases, respectively, after training (Figure 4B).
Figure 4. Graphical plot representing the agreement between each of the four pathologists' visual T-score and IS before (A) and after (B) training for the 50 colon cancer cases. Reference IS scores (Low and High) from the 50 colon cancer cases (x axis) are plotted against the pathologist visual T-score and IS methods (y axis). Dark green circles indicate agreement between all pathologists and the reference digital pathology IS method. Bright orange circles indicate disagreement between at least one of the pathologists and the reference IS. The mean percentiles (mP) of the CD3+ and CD8+ T-cell densities are represented as circles whose size is proportional to the mP value observed for each case. The 50 cases were ranked from the lowest to the highest mP, and IS was translated into the 2-category classification (dashed line): Low IS (mP ≤ 25%) and High IS (mP > 25%); the T-score classification for each pathologist is represented by blue circles with a black outline (Low T-scores) and yellow circles with a black outline (High T-scores). Abbreviations: Patho, pathologist; mP, mean percentile; DP, digital pathology.
Reproducibility of IS Assessment

The agreement between the three repeated IS scores and the initial reference IS score for each of the 50 CC cases is illustrated in Figure 5. Almost perfect agreement was observed (Cohen's Kappa of 0.93). In the first repeat (DP1, Figure 5), only 1 out of 50 cases was classified differently from the reference IS result, and only two more cases were misclassified in the two remaining repeats (DP2 and DP3, Figure 5), leading to an agreement of 94%. The three discordant cases were very close to the 25% cutoff point, with IS mPs ranging from 21.2% to 26.2%. Thus, IS yielded a sensitivity of 95% and a positive predictive value of 97% (overall agreement of 94%).

Figure 5. Graphical plot representing the agreement between three repeated IS analyses and the reference IS assessment for the 2-category classification. The IS scores for the 50 colon cancer cases (x axis) are plotted against the scoring method (y axis; horizontal lines, from bottom to top, represent the three repeated IS analyses [DP1_IS, DP2_IS, and DP3_IS] and the reference IS).
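The repeatability figures quoted above follow from a standard 2x2 table of repeat-run versus reference classifications. A generic sketch (the counts in the usage note are one hypothetical set consistent with the reported 95%/97%/94%, since the full table is not given in the text):

```python
def repeatability_metrics(tp, fp, fn, tn):
    """Sensitivity, positive predictive value and overall agreement from
    a 2x2 table of repeat-run vs. reference binary classifications."""
    return {
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "overall_agreement": (tp + tn) / (tp + fp + fn + tn),
    }
```

For instance, with 50 cases split as tp = 38, fp = 1, fn = 2, tn = 9 (hypothetical counts), the function returns a sensitivity of 0.95, a PPV of about 0.97, and an overall agreement of 0.94.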
The mean percentiles (mP) of the CD3+ and CD8+ T-cell densities are represented as circles whose size is proportional to the value observed for each case. The cases range from the lowest to the highest mP and are translated into IS with the 2-category classification: Low IS (mP ≤ 25%) and High IS (mP > 25%). Abbreviations: mP, mean percentile; IS, Immunoscore®; DP_IS, digital pathology Immunoscore®.

Discussion

The reproducibility of the IS digital pathology method was previously assessed [2]. Representative images from five centers (Belgium, Canada, China, France, USA) of tissue stained for CD3+ and CD8+ (n = 36), with IS ranging from lowest to highest (2.5th to 90th percentiles, respectively), were re-analyzed by eight pathologists from different centers. These eight IS digital pathology quantifications revealed strong reproducibility (mean cell densities in each tumor region: r = 0.97 for tumor; r = 0.97 for invasive margin; p < 0.0001). Only 2.1% variation in the mean percentile of CD3+ and CD8+ T-cell densities was found between IS quantifications. These observations were confirmed in an independent study [20], demonstrating the strong reproducibility of IS using digital pathology. Since visual evaluation of tumor-infiltrating lymphocytes in H&E-stained slides by pathologists was not sufficiently accurate for clinical decisions, and as it was important to assess the added value of automated digital pathology over visual assessment on the same CD3+ and CD8+ stains, we evaluated the reproducibility of a visual examination of these slides (T-score) by expert pathologists. The inter-pathologist reproducibility and the differences between the T-score and the automated digital IS were evaluated. The IS method was confirmed to be very robust, producing reliable and consistent data with a very high degree of agreement (94%) between repeated measures.
Moreover, the rare cases of discordance (3 out of 50) were all very close to the cutoff value of 25%, and re-testing such samples to correctly assign their score would be simple.
In contrast, significant disagreement was observed for the visual semi-quantitative pathologist T-score (High or Low). This inter-observer disagreement was not improved by providing pathologists with training in the visual scoring process to recognize the IS cutoff points of prognostic importance. Furthermore, the study revealed that the effect of training was heterogeneous between pathologists and that, overall, training only marginally improved, and for two pathologists even worsened, the concordance between the visual assessment and IS. Importantly, a high rate of disagreement was observed when comparing the pathologists' visual assessment with the reference IS, leading to misclassification of almost half the cases (48%); this disagreement was particularly high for the cases around the 25% clinical cutoff (80%). The lack of improvement in agreement between pathologist evaluation and quantitative digital pathology, before and after training, is likely multifactorial.
In fact, colon tumors are quite large, and whole-slide analysis revealed a heterogeneous pattern of CD3+ and CD8+ cells within different areas of the tumor. Furthermore, the mean density of these cells is higher at the invasive margin than in the core of the tumor, rendering the overall visual evaluation difficult. In addition, these immune cells can be present within the tumor glands or within the stroma at different densities and can be clustered or dispersed, even within the same tumor. CD3+ cells (encompassing both CD8+ and CD4+ T-helper cells) and CD8+ cells also have different densities in different areas of the tumor, and the evaluation has to be done twice, once for each marker, on consecutive slides. Examining the whole slide is tedious, and the semi-quantitative evaluation of so many heterogeneities is very complex and, in fact, very subjective. For such an evaluation, the novel tool of quantitative digital pathology is clearly much more appropriate, as demonstrated by the poor performance of pathologist scoring even after training. To illustrate how an incorrect determination of the immune response of stage II and III CC patients could influence subsequent treatment and patient outcome, we present a clinical decision tree for these patients (Figure 6) [29]. For patients with stage II CC, misclassification of IS-Low tumors as highly infiltrated (IS-High) results in patients being identified as stage II CC at low clinical risk when they are in fact at high biological risk. This is important because such a situation would produce false expectations of a low risk of recurrence for patients who will then not be monitored as closely as those at high risk of recurrence to detect signs of relapse early. Based on the worst-case negative agreement between the visual T-score and IS observed in this study (25%), 75% of IS-Low CC cases would be classified as having a good outcome.
Thus, 17% of low-risk stage II or 9% of all stage II patients would not be appropriately considered as high-risk patients; they may be undertreated and under-screened. Conversely, misclassification of truly IS-High stage II CC patients as having tumors with low immune cell infiltration could result in patients being recommended for adjuvant chemotherapy when their recurrence risk is low, unnecessarily exposing them to the long-term toxicity and side effects of chemotherapy (Figure 6). In the worst-case scenario observed in this study (positive agreement of 79%), this represents 7% of all stage II CC patients who might be overtreated. In the case of stage III, if IS-Low CC patients were misclassified as IS-High, they would not be identified as poor responders and might be unnecessarily subjected to additional therapy (six months versus three months) and its associated long-term toxicity and intense side effects (Figure 6). Considering the worst incorrect classification observed in this study, 75% of the IS-Low patients would be identified as good responders to extended adjuvant treatment, which represents 37% of the stage III high clinical risk group (T4/N2) or 15% of all stage III CC patients. Finally, in the worst-case scenario observed, up to 21% of IS-High cases might be incorrectly identified as poor responders to six months of chemotherapy (IS-Low) and thus be subjected to an increased risk of relapse. Altogether, given an estimated 101,420 new stage II and 23,000 new stage III CC patients per year, pathologist visual evaluation of the T-score would lead to 8700/5800 (before/after training) CC cases being misclassified annually and possibly receiving inappropriate patient care. A limitation of the study relates to the sample size and to the relatively low number of expert pathologists who evaluated the CD3+- and CD8+-stained images. These results should be validated with a larger cohort of patients and a larger number of expert pathologists.
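The annual-impact arithmetic above can be reproduced approximately as follows. This is a back-of-the-envelope sketch: the incidence figures and the worst-case percentages come from the text, but the way they are combined here is our illustrative reading, not the authors' exact calculation.

```python
# Annual incidence figures quoted in the text
stage_ii_new_cases = 101_420
stage_iii_new_cases = 23_000

# Worst-case misclassification fractions quoted in the text:
# 9% of all stage II patients not recognized as high-risk (IS-Low read as High),
# 7% of all stage II patients potentially overtreated (IS-High read as Low).
stage_ii_under_recognized = round(stage_ii_new_cases * 0.09)
stage_ii_overtreated = round(stage_ii_new_cases * 0.07)
```

These per-scenario counts are of the same order as the overall 8700/5800 annually misclassified cases stated in the text.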
However, given the very important difference between pathologist T-score classification and the reproducible IS quantification, these results confirm the importance of new tools for pathologists, namely quantitative digital pathology.

Conclusions

The potential negative impact that misclassification of the immune response assessment, and thus an erroneous prognosis and risk evaluation, might have on the clinical management of patients with CC was shown to be significant. Our results showed that the IS assay provided the best stratification of patients into prognostic recurrence groups (low versus high). We conclude that the standardized and robust IS assay outperforms the assessment of expert pathologists in the clinical setting for immune response evaluation and can thus provide the most appropriate individualized therapeutic decisions for patients with CC.
A novel quantification platform for point-of-care testing of circulating microRNAs based on an allosteric spherical nanoprobe

MiRNA-150, a gene regulator that has been revealed to be abnormally expressed in non-small cell lung cancer (NSCLC), can be regarded as a serum indicator for the diagnosis and monitoring of NSCLC. Herein, a new sort of nanoprobe, termed an allosteric spherical nanoprobe, was first developed to sense miRNA-150. Compared with a conventional hairpin, this new nanoprobe possesses greater enrichment capacity and a larger reaction cross section. Structurally, it consists of magnetic nanoparticles and a dual hairpin. In the absence of miRNA-150, the spherical nanoprobes form a hairpin structure through DNA self-assembly, which promotes Förster resonance energy transfer (FRET) between the nearby fluorophore (FAM) and quencher (BHQ1). In the presence of the target, however, target-probe hybridization opens the hairpin and forms the active "Y" structure, which separates the fluorophore and quencher to yield "signal-on" fluorescence. In the manner of multipoint fluorescence detection, the target-bound allosteric spherical nanoprobe provides high detection sensitivity, with a linear range of 100 fM to 10 nM and a detection limit of 38 fM. More importantly, the proposed method can distinguish the expression of serum miRNA-150 between NSCLC patients and healthy people. Finally, we hope that the potential bioanalytical application of this nanoprobe strategy will pave the way for point-of-care testing (POCT).

Introduction

MicroRNAs (miRNAs) are small single-stranded noncoding RNAs approximately 21-25 nucleotides in length [1][2][3]. Since the dysregulation of miRNAs is closely related to the occurrence and prognosis of tumors, the detection of circulating miRNAs in serum can provide credible biomarkers for tumor liquid biopsy [4,5].
For instance, monitoring circulating miRNA-150 in serum could provide a reliable clinical assay for the early diagnosis of non-small cell lung cancer (NSCLC) [6]. Thus, the sensitive and specific detection of miRNAs is considered a meaningful strategy for early cancer diagnosis. Currently, the detection of miRNAs is normally performed by traditional techniques such as northern blotting, microarrays, and reverse transcription-polymerase chain reaction (RT-PCR) [7][8][9]. Although these methods are reliable, they are rarely applied in the clinic because of complicated operations, high cost, unstable results, and the need for well-trained staff. Various other techniques have also been developed, including electrochemical sensors, surface plasmon resonance (SPR), surface-enhanced Raman scattering (SERS), colorimetry, and fluorescence [10][11][12][13][14]. Among these techniques, fluorescence detection has emerged as an excellent alternative for the detection, quantification, and characterization of targets [15][16][17]. Organic fluorescent dye-labeled DNA probes and fluorescence techniques have been widely employed for the ultrasensitive quantification of DNA, miRNA, proteins, and other targets because of their readily commercialized synthesis [18][19][20]. To improve the assay sensitivity, most of these determinations were carried out with the assistance of signal amplification strategies, including rolling circle amplification (RCA), exponential amplification reaction (EXPAR), catalytic hairpin assembly (CHA), entropy-driven catalytic reactions, enzyme-assisted signal amplification, and functional materials [21][22][23][24][25][26]. However, these amplified methods also suffer from various drawbacks and limitations in practice, such as the long time and high budget required and the rigorous reaction conditions, and they do not meet the demand for universally simple and rapid medical analysis, especially for point-of-care testing (POCT).
Therefore, it is desirable to establish a simpler method for direct target detection without the help of any amplification strategy. One possible solution is an allosteric spherical nanoprobe with a high reaction cross section and high binding capacity and specificity toward the target miRNA. Up to now, the hairpin probe has attracted particular interest as the simplest, prototypical system. Tyagi and Kramer were the pioneers of hairpin-shaped molecular probes [27]. A molecular beacon is a single-stranded DNA molecule that contains a stem-and-loop structure and a pair of fluorophore and quencher groups. The loop is designed to hybridize with its completely complementary target, and the stem results from the annealing of two complementary arm sequences. Owing to the proximity of the fluorophore and quencher groups, the fluorescence is quenched. When the loop recognizes its perfect target, the hairpin structure changes into the more stable formation of a DNA duplex and forces the stem apart, resulting in the release of fluorescence. To date, various molecular probe-based detection methods have been proposed for a large range of applications, for example, the detection of DNA damage, the monitoring of DNA movement, the detection of small biological molecules, and miRNA detection in living cells [28][29][30][31][32][33]. Nevertheless, molecular probe-based nucleic acid detection is also generally based on the above-mentioned amplification techniques. Here, we design a unique allosteric spherical nanoprobe based on the traditional molecular beacon concept. Fluorescent and quenching groups are respectively labeled at the end-to-end joint of the two hairpin structures, and numerous dual-hairpin structures are modified on the surface of magnetic nanoparticles. The nanoprobe is thus endowed with enrichment capacity and a large reaction cross section, making it easier to react with the target.
When the target exists, the closed dual-hairpin spherical nanoprobe opens into an active "Y" structure, leading to significant fluorescence release. The fluorescence results are then recorded on a multipoint fluorescence scanning microarray. Unlike a traditional fluorescence scanner, the microarray device is more portable and compact, which is especially suitable for POCT applications. In this way, miRNA-150 detection is determined by the fluorescence changes of the allosteric spherical nanoprobe upon binding with the target. Owing to the inherent fluorescence signal transduction mechanism, the dual-hairpin spherical structure functions as a sensitive probe with a high signal-to-background ratio. Meanwhile, this allosteric spherical nanoprobe has high hybridization specificity because of its loop-stem structure, which can easily discriminate the fully complementary target from single-mutation target miRNA.

Reagents and materials
Polyacrylamide gel electrophoresis (PAGE)-purified DNA oligonucleotides and miRNAs were manufactured by Takara Biotechnology Co., Ltd. (Dalian, China, https://www.takara.com.cn/). These sequences are summarized in Additional file 1: Table S1. To avoid hydrolysis of target sequences by RNase, 0.1% DEPC-treated sterilized water was used throughout the experiments. RNase-free tubes and tips were supplied by Axygen Scientific Inc. (Silicon Valley, USA, https://axygen.bioon.com.cn/). All chemicals employed were of analytical reagent grade. High-purity deionized water prepared by a Milli-Q water purification system (Millipore Corp., Bedford, USA) was used for all experiments.

Design of microarray structure
Glass chips with a 6 × 6 microarray were customized by Zhenjiang Huarui Chip Technology Co., Ltd. The size of the chip was 75 mm × 25 mm × 1.1 mm, the diameter of each sample pool was 2 mm, and the depth was 0.8 mm (Additional file 1: Fig S1).
All dimensions of the microarray chip were compatible with the detection module of the LuxScan 10K Microarray Scanner.

Synthesis of dual-hairpin spherical probe
First, electrolytic ions and aggregated particulate impurities in the magnetic nanobeads were removed with a 0.22 μm nitrocellulose membrane filter. After ultrasonic treatment, 1 mL of carboxyl magnetic nanoparticles was blended and washed twice with 1 mL of MES (15 mM, pH 6.0, 4 °C) to avoid agglomeration. Afterwards, the magnetic nanoparticles were collected with a magnetic separator and the supernatant was removed. Next, 100 μL of MES and 100 μL of EDC (10 mg/mL) were added to the magnetic nanoparticles, and the solution was blended thoroughly at room temperature for 30 min to suspend the magnetic nanoparticles and activate the carboxyl groups. The supernatant was then removed again, and the activated carboxyl magnetic nanoparticles were ready for coupling. Meanwhile, the fixed chain (Seq B) was diluted to the desired concentration with MES. Afterwards, the amino-modified fixed chains of the dual-hairpin probes (200 μL of the diluted Seq B) were covalently coupled to the carboxyl magnetic nanoparticles. This coupling was carried out on a TMM-5M magnetic programmable mixer overnight at room temperature. The carboxyl magnetic nanoparticles coupled with Seq B were then ready for the following experiments and bioassays.

Fluorescence detection
Fluorescence detection was performed on a LuxScan 10K Microarray Scanner and a Varioskan LUX Multimode Microplate Reader. Fluorescence emission spectra were collected in the range of 500 to 650 nm with an excitation wavelength of 490 nm and a slit width of 5.0 nm.

Gel electrophoresis
Non-denaturing 12% polyacrylamide gel electrophoresis (PAGE) analysis was carried out for 120 min in 1 × TBE buffer at pH 8.3 under a constant voltage of 90 V.
15 μL of sample and 3 μL of loading buffer were mixed thoroughly; the concentration of each sample was 2000 nM. Two ladders were used to indicate molecular weight: a 20 bp DNA ladder and an miRNA marker. The gel was then stained with SYBR Green I for 30 min and imaged using a Bio-Rad Gel Doc XR+ Imaging System (Bio-Rad, USA).

Preparation of clinical plasma samples
All samples were obtained, with ethical approval, from patients at the Southwest Hospital of Third Military Medical University (Army Medical University). The NSCLC group comprised 7 patients (4 male, 3 female) with an average age of 50.23 ± 4.25 years; the control group comprised 3 male patients with an average age of 49.16 ± 4.13 years. No patient with hepatic or renal insufficiency, malignancy, peripheral vascular disease, or blood disorders was included in either group. Each blood sample (300 μL) was lysed with lysis solution and then extracted with ethanol and isopropanol. After the total miRNA was absorbed on a spin column, 30 μL of RNase-free TE buffer was added to elute the miRNA. The miRNA products were reverse transcribed, and the cDNAs were stored at − 80 °C.

Extraction of circulating miRNAs and expression profile analysis of miRNA-150
According to the manufacturer's procedure, circulating miRNAs were extracted from anticoagulated peripheral blood. RT-PCR was performed on a Thermal Cycler 2400 and Q-PCR on a CFX-96. Real-time PCR profiling was performed with the following cycling conditions: 95 °C for 15 min; 40 cycles of 95 °C for 10 s and 60 °C for 60 s. The 20 μL Q-PCR reaction mixture contained 2.5 μL of cDNA, 4 μL of 5 × Golden HS TaqMan qPCR Mix, 1 μL of 20 × miRNA TaqMan Assay and 12.5 μL of RNase-free water.

Analysis of data
In the profiling experiment, miRNA expression data were normalized against the average expression of miRNAs detectable in all samples.
By strictly controlling the amplification conditions and primers, the efficiency of the PCR was close to 1. Fold changes of relative expression were calculated using the ΔΔCT method, where ΔΔCT = (CT, gene of interest − CT, internal control)sample A − (CT, gene of interest − CT, internal control)sample B and relative expression = 2^−ΔΔCT. Thus, higher scores represent higher expression levels [34]. All data were analyzed with SPSS 18.0. Significance and high significance were accepted at P < 0.05 and P < 0.01, respectively.

Design principle of allosteric spherical nanoprobe
As shown in Scheme 1, the allosteric spherical nanoprobe structurally consists of three sequences (Seq A: the free chain; Seq B: the fixed chain; Seq C: the auxiliary chain) and forms double rings that specifically hybridize with the target nucleic acid. In this magnetic-nanoparticle allosteric spherical probe system, FAM and BHQ1 are respectively marked at the end-to-end joints of the two hairpin structures, and numerous dual-hairpin structures are conjugated to the surface of magnetic nanoparticles. When no miRNA-150 exists, the spherical nanoprobe forms a firm hairpin structure by intramolecular hybridization, which promotes FRET between the nearby FAM and BHQ1. In the presence of target, however, target-probe hybridization specifically opens the hairpin and forms the active "Y" structure, which separates the fluorophore and quencher to yield "active" fluorescence. Subsequently, this complex can be precipitated to the bottom of the microarray chip with the aid of a magnet. Finally, the fluorescence results are recorded on the multipoint fluorescence scanning microarray.

Scheme 1: Schematic illustration of the allosteric spherical nanoprobe for direct multipoint fluorescence detection of miRNA-150.
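The 2^−ΔΔCT fold-change calculation used in the data analysis above can be sketched as follows; this is a minimal illustration, and the Ct values in the example are hypothetical, not data from the study.

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the delta-delta-Ct method: 2^(-ddCt)."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample to internal control
    d_ct_control = ct_target_control - ct_ref_control   # normalize control likewise
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: the target amplifies 2 cycles earlier (relative to
# the internal control) in the sample than in the control -> 4-fold higher.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```

As the sign convention in the text implies, a lower ΔΔCT (earlier amplification) yields a larger 2^−ΔΔCT score, i.e., higher expression.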
In this manner, binding of the target to the allosteric spherical nanoprobe leads to significant fluorescence signal enhancement, and the fluorescence intensity is related to the concentration of target miRNA-150. The double-ring design makes target detection easier than with a traditional single-hairpin DNA probe, because the magnetic nanoparticles enrich numerous dual-hairpin probes and accordingly enlarge the reaction cross section. Besides, the reaction cross section of the loop sequence is larger than that of the stem sequence, which ensures that the target sequence combines with the loop to form the more stable DNA double-strand structure and then pulls the stem apart. Meanwhile, the three sequences are completely mismatched with one another, which avoids non-specific hybridization and ensures sufficient FRET between fluorophore and quencher. In principle, integrating numerous dual-hairpin probes on a single magnetic nanoparticle makes it easier to expand the functionality of this nanoprobe.

Verifying the structure of the allosteric spherical nanoprobe
A stable allosteric spherical nanoprobe structure is one of the prerequisites for the success of this approach, so the following experiments were conducted to verify it. Firstly, considering that high-resolution melting reflects the dissociation temperature, the melting curve from 98 °C to 0 °C, recorded after denaturation at 98 °C, was analyzed in Fig. 1a. Each group has a distinct melt peak, indicating that a single hybrid structure was formed and no dimer was produced. Secondly, non-denaturing PAGE (12%) analysis was performed to verify the structure of the allosteric spherical nanoprobe. In Fig.
1b, lane 5 shows that the intramolecularly hybridized allosteric spherical nanoprobe consisting of Seq A, Seq B and Seq C had slower mobility, owing to its higher molecular mass, than the three single sequences in lanes 1 to 3 and the miRNA-150 in lane 4. Moreover, the reaction product of the allosteric spherical nanoprobe and miRNA-150 in lane 6 migrated more slowly than the band in lane 5. These results confirm that the allosteric spherical nanoprobe structure was successfully constructed and can react with the target. Thirdly, the fluorescence emission spectra in Fig. 1c clearly show the low fluorescence intensity of this allosteric spherical nanoprobe and of the other four single sequences, as expected from hairpin intramolecular hybridization. When the target was introduced, higher fluorescence intensity appeared, indicating that the target could bind the allosteric spherical nanoprobe and change the hairpin structure, thereby generating a significant fluorescence signal.

Feasibility of the allosteric spherical nanoprobe
The allosteric spherical nanoprobe was designed to combine specifically with miRNA-150 and turn the "inactive" hairpin probe into an "active" fluorescent state. Fluorescence measurements were therefore first carried out to verify the feasibility of the developed method. As depicted in Fig. 2a, the fluorescence intensity was very weak in the control group without the target; this low background signal can be attributed to the close pairing of FAM and BHQ1 at the end-to-end joints of the two hairpin structures. In contrast, strong fluorescence appeared when miRNA-150 was present in the allosteric spherical nanoprobe system. Furthermore, the fluorescence microscopy results before and after the reaction between the allosteric spherical nanoprobe and the target are consistent with the fluorescence measurements. As illustrated in the fluorescence microscope field of view (Fig.
2b), almost no fluorescent dots were seen when no target existed, which is attributable to the intramolecular hybridization of the allosteric spherical nanoprobe. Interestingly, fluorescent spots, the reaction products of the hairpins and targets, filled the field of view of the fluorescence microscope (Fig. 2c). These data suggest that the target reacted with the allosteric spherical nanoprobe successfully, resulting in obvious fluorescence release. It is therefore reasonable to conclude that the allosteric spherical nanoprobe approach is feasible.

Optimum conditions for the hybridization experiment
The components of the allosteric spherical nanoprobe (Seq A, Seq B and Seq C) were indispensable and played essential roles in target detection. Accordingly, the ratio of the different components (Seq A, Seq B and Seq C), the probe concentration, the reaction temperature and the Mg2+ concentration were studied to obtain the best analytical performance for this detection system. Before constructing the probe, not only should a proper hairpin structure be ensured, but fluorescence leakage among Seq A, Seq B and Seq C should also be avoided. As can be seen from the data in Fig. 3a, a sufficient amount of Seq C ensures FRET during hybridization, while excess Seq C does not affect the hairpin probe because free Seq C in solution is removed by magnetic separation; the relative fluorescence intensity increased with the increasing component ratio and plateaued from 1:1:4. Thus, 1:1:4 was chosen as the optimal concentration ratio of Seq A, Seq B and Seq C. Similarly, the carboxylation sites on the magnetic nanoparticles and the corresponding number of hairpins bound to them are considered factors affecting probe performance, so the probe concentration was examined. As shown in Fig.
3b, with increasing probe concentration, the relative fluorescence intensity gradually increased until reaching a relatively stable plateau at 160 nM, indicating that the optimal probe concentration is 160 nM. This result reveals that excess probes could not participate in the reaction for two reasons: the steric effect of the hairpin structure prevents probes from fully binding the carboxylation sites on the magnetic nanoparticle surface, and once those sites are completely occupied, redundant probes do not affect the magnetic-bead-based probe system. Finally, 160 nM was taken as the proper concentration of the allosteric spherical nanoprobe. To determine the ideal reaction temperature for this probe system, different reaction temperatures were tested to obtain the best fluorescence signal. As depicted in Fig. 3c, the relative fluorescence intensity peaked at 35 °C and decreased rapidly as the reaction temperature increased further. This trend indicates that the structural stability of the allosteric spherical nanoprobe, and hence its relative fluorescence signal, decreases with increasing reaction temperature. The optimal temperature was therefore set at 35 °C for the sake of probe structural stability and system performance. Since ionic strength affects the melting temperature of the probe and a certain ionic strength can stabilize the stem of the hairpin structure [35], and Zhang's group verified that the stem structure of a molecular beacon opens when no Mg2+ is present in the buffer [36], the Mg2+ concentration, which is associated with the stability of the allosteric spherical nanoprobe, was investigated next. As exhibited in Fig.
3d, the relative fluorescence intensity reached its maximum at a Mg2+ concentration of 5 mM, indicating that the hairpin structure maintains a good, stable state, whereas an excessive dose of Mg2+ may hinder recognition and hybridization between the dual hairpin and the target. Therefore, 5 mM was selected as the Mg2+ concentration for the following experiments.

Performance of the allosteric spherical nanoprobe
Under the selected experimental conditions, the performance of the proposed method for quantitative analysis of miRNA-150 was further investigated by detecting targets at different concentrations. Figure 4a shows the relative fluorescence intensity of miRNA-150 at different concentrations in this allosteric spherical nanoprobe system: the higher the target concentration, the stronger the fluorescence intensity. Figure 4b reveals a good linear relationship between the relative fluorescence intensity and the logarithm of the miRNA-150 concentration over the range from 100 fM to 10 nM. The linear regression equation was Relative FI = 1.2606 lgC + 2.9348, with a correlation coefficient of R2 = 0.9903. With the detection threshold set at three standard deviations above the blank, the detection limit was estimated as 38 fM. The corresponding multipoint fluorescence scanning results on the microarray are shown in Fig. 4d. The wide linear range and low detection limit reflect the good performance of this strategy. Additionally, the proposed method is comparable to some previously reported fluorescence sensors for miRNA detection (Additional file 1: Table S2), with a simple design, good sensitivity and relatively short detection time [19,[37][38][39][40][41]. This method is characterized by the ingenious design of a magnetic-bead-based allosteric spherical nanoprobe, which has stronger enrichment capability and a larger reaction cross section for the target.
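As a minimal sketch of how the reported calibration (Relative FI = 1.2606 lgC + 2.9348) and the three-sigma-above-blank rule can be applied, the snippet below inverts the regression to read back a concentration and computes a detection threshold; the blank readings are hypothetical, and the units of C follow the paper's calibration.

```python
import statistics

SLOPE, INTERCEPT = 1.2606, 2.9348   # calibration reported in the text

def concentration_from_fi(relative_fi):
    """Invert Relative FI = SLOPE*lg(C) + INTERCEPT to recover C."""
    return 10 ** ((relative_fi - INTERCEPT) / SLOPE)

def detection_threshold(blank_readings):
    """Signal threshold set three standard deviations above the blank mean."""
    return statistics.mean(blank_readings) + 3 * statistics.stdev(blank_readings)

blanks = [0.50, 0.52, 0.48, 0.51]            # hypothetical blank replicates
print(detection_threshold(blanks))
print(concentration_from_fi(INTERCEPT))       # lg(C) = 0, i.e. C = 1.0
```

Feeding the threshold back through `concentration_from_fi` is the step that, with the study's real blank statistics, yields the 38 fM detection limit quoted above.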
Detection of miRNA-150 in serum samples
To evaluate the performance of the new method for detecting miRNA-150 in blood samples, serum sample detection experiments were carried out. Five different concentrations of synthetic miRNA-150 were spiked into tenfold-diluted healthy human serum and tested with the designed strategy. Recovery rates, listed in Table 1, ranged from 92.7% to 106.9%, implying that miRNA-150 in spiked serum samples can be detected by this method. Moreover, the efficiency of our sensor was also proven with serum samples from NSCLC patients and healthy people, and qRT-PCR confirmed the result. The Ethics and Clinical Research Committee of the Southwest Hospital of Third Military Medical University (Army Medical University) approved our research, and all contributors signed written informed consent. Serum samples were obtained from healthy contributors and NSCLC patients, and the levels of miRNA-150 in the samples were directly quantified with the proposed assay and with qRT-PCR. Figure 4c shows the results for real human samples from both NSCLC patients and healthy contributors, with a tendency similar to the qRT-PCR results. The results of the probe method were obtained within 1 h, whereas the qRT-PCR assays required procedures lasting more than 3 h. The proposed method was thus successfully used for the quantification of miRNA-150 in laboratory and real samples from healthy contributors and NSCLC patients, suggesting that the established miRNA probe holds potential for miRNA detection in real samples.

Specificity of the allosteric spherical nanoprobe
The increased burden of diagnosing NSCLC is largely due to the high sequence homology of miRNAs, so it should be emphasized that the specificity of this method is adequate. Figure 5 shows the comparison of fluorescence spectra in response to different miRNA targets under the optimal conditions.
Notably, the fluorescence intensity in response to miRNA-150 is approximately 3.7-fold higher than that in response to single-base-mismatched miRNA-150 (miRNA-M1) and approximately 6.64-fold higher than that in response to three-base-mismatched miRNA-150 (miRNA-M3) and a completely unrelated sequence (miRNA-P). The inset shows the multipoint fluorescence scanning results on the microarray. These results demonstrate that the proposed method, thanks to its loop-stem structure, has high selectivity in discriminating miRNA-150 from analogous miRNAs.

Conclusions
In brief, a rapid, specific and sensitive allosteric spherical nanoprobe for the assay of miRNA-150 was developed without the involvement of any complicated amplification technology. The sensing design depends on the allosteric spherical nanoprobe, which converts the specific miRNA-150 recognition reaction into a measurable fluorescence signal. It exhibits high sensitivity toward NSCLC-related miRNA-150, with a wide detection range spanning 6 orders of magnitude and a detection limit as low as 38 fM. The double loop-stem structure of the nanoprobe can identify single-base-mismatched targets. More importantly, the proposed method can directly distinguish the serum miRNA-150 expression of NSCLC patients from that of healthy people. Although much remains to be studied about the performance of this sensor in complex biological matrices, the probe paves the way for further applications in the clinical diagnosis of lung cancer. We hope that this allosteric spherical nanoprobe can be used for bioanalysis, especially in the field of POCT. Additional file 1: Fig S1. Schematic diagram of microarray structure. Table S1. Alignment of sequences used in experiment. Table S2. Comparison of the reported chemosensors for miRNA.
Green synthesis and evaluation of silver nanoparticles as adjuvant in rabies veterinary vaccine

Background
Green synthesis of nanoparticles by plant extracts plays a significant role in different applications. Recently, several studies have been conducted on the use of nanoparticles as adjuvants. The main aim of this study was to evaluate green synthesized silver nanoparticles (AgNPs) as an adjuvant in rabies veterinary vaccine and to compare the results with the existing commercially available alum adjuvant.

Materials and methods
In the current study, AgNPs were prepared by the reduction of aqueous silver nitrate with leaf extract of Eucalyptus procera. The formation of AgNPs was confirmed by ultraviolet (UV)-visible spectrophotometry, scanning electron microscopy, dynamic light scattering, and X-ray diffraction analysis. Then, different amounts of AgNPs (200 µg, 400 µg, 600 µg, and 800 µg) were added to 1 mL of inactivated rabies virus. The loaded vaccines (0.5 mL) were injected intraperitoneally into six Naval Medical Research Institute mice in each group on days 1 and 7. On the 15th day, the mice were intracerebrally challenged with 0.03 mL of challenge rabies virus (challenge virus strain-11 [CVS-11], 20 LD50), and after the latency period of rabies disease in mice (5 days), the mice were monitored for 21 days. Neutralizing antibodies against rabies virus were also investigated using the rapid fluorescent focus inhibition test method. The National Institutes of Health test was performed to determine the potency of the optimum concentration of AgNPs as adjuvant. In vitro toxicity of AgNPs was assessed in the L929 cell line using the MTT assay. In addition, in vivo toxicity of AgNPs and the AgNPs-loaded vaccine was investigated according to the European Pharmacopeia 8.0.

Results
AgNPs were successfully synthesized, and their identity was confirmed by UV-visible spectrophotometry and X-ray diffraction analysis.
The prepared AgNPs were spherical in shape, with an average size of 60 nm and a negative zeta potential of −14 mV as determined by the dynamic light scattering technique. The highest percentage of viability was observed at AgNPs-loaded vaccine concentrations of 15 mg/kg and 20 mg/kg after injection into the mice. The calculated potencies for the alum-containing vaccine and the AgNPs-loaded vaccine (dose 15 mg/kg) were 1.897 and 1.303, respectively. The MTT assay demonstrated that alum at a concentration of 10 mg/mL was toxic, but the AgNPs were not. The in vivo tests also elucidated the safety of AgNPs and the AgNPs-loaded vaccine in mice and dogs, respectively.

Conclusion
In the current study, for the first time, the adjuvanticity effect of green synthesized AgNPs on veterinary rabies vaccine potency, with no in vivo toxicity, was elucidated according to the European Pharmacopeia 8.0.

Introduction
Tens of thousands of people worldwide die of rabies each year, with 99% of the deaths occurring in Asia and Africa. 2,3 Rabies is a preventable disease, and as a major control strategy, vaccination is recommended by the World Health Organization (WHO). The vaccine is used in two distinct situations: pre- and post-exposure treatment. The protective function is the production of neutralizing antibodies, either IgM or IgG, which are able to prevent the entry of the virus into cells. 4 The vaccine is prepared by inactivation of rabies virus to prevent disease either before or after exposure to the virus. Since animals play a significant role in the spread of the disease, vaccination of livestock and wildlife has proven to be a common way of controlling the disease in developed countries. In this regard, with regular vaccination of animals, human rabies has been controlled in Europe. 5 To improve the effectiveness of rabies veterinary vaccines, aluminum hydroxide (alum) is used. 6 Alum is one of the most common adjuvants, and aluminum compounds are widely used in the manufacture of many vaccines.
Although alum enhances the immune response, some disadvantages, such as destructive effects on local tissue (necrosis), prolonged inflammation causing severe irritation at the injection site, provocation of only a T helper 2 response, a weak cellular immune response, and unwanted IgE reactions, restrict its application in vaccine formulations. [7][8][9] Moreover, it has been shown that aluminum compounds increase the level of unwanted homocytotropic antibodies in animal species, 10 and alum-based vaccines are not effective at producing antiviral responses. Therefore, new adjuvants are required that enhance the immunogenicity of weak antigens with lower side effects, long-term immune stimulation, and simultaneous stimulation of humoral, cellular, and mucosal responses. Recent progress in the field of nanotechnology, especially in producing metal nanoparticles of particular size and shape, is leading to the development of a variety of applications. One feature of nanoparticles is their capability of trapping or capturing molecules such as proteins and nucleic acids; hence, they provide promising methods for antigen delivery and for improving immune function through targeting of antigen-presenting cells. Since delivery of antigens to antigen-presenting cells, in particular dendritic cells, and their stimulation are important issues in the development and improvement of vaccine potency, vaccination systems based on nanoparticles create opportunities for controlled delivery of antigens to the desired immune cells. 11 Recently, several studies have been conducted on the use of nanoparticles as adjuvants. It was shown that nanoparticles such as silver, gold, and calcium phosphate enhance the immunogenicity of antigens. 10,12,13 Beyond their other applications, nanoparticles are also used as immune potentiators.
For example, nanoparticles such as polylactide-co-glycolide have been used to enhance immune responses against several antigens. 14,15 Moreover, a recent study elucidated the stimulatory effect of silver nanoparticles (AgNPs) on the immune response to albumin, demonstrating their ability to enhance immune responses to weak antigens. 13 Conventionally, nanoparticles are prepared by physicochemical techniques, including chemical vapor deposition, physical vapor deposition, grinding systems, and solvothermal synthesis. 16 These approaches usually require high-cost instruments, and they are performed under dangerous conditions using hazardous reagents. Some of these procedures also suffer from complications such as instability and aggregation of the synthesized particles. Currently, green synthesis of nanoparticles is used increasingly because of its lower production cost as well as the simplicity of synthesis. Given the potential of nanoparticles as adjuvants and their easy synthesis by green chemistry, AgNPs produced with Eucalyptus procera were evaluated for their ability to enhance the immune response against inactivated rabies virus, and the results were compared with the existing commercially available alum adjuvant.

Materials and methods
Green synthesis of AgNPs
The preparation method of the AgNPs was described earlier. 17 Fresh E. procera leaves were shade-dried at room temperature (RT) and powdered using a mortar and pestle. Then, 50 g of the dried powder was weighed, boiled in 500 mL of deionized water for 2 hours, and filtered through Whatman grade 1 filter paper. For the synthesis of AgNPs, 6 mL of the extract was added to 100 mL of 0.01 mM AgNO3 (EMD Millipore, Billerica, MA, USA) aqueous solution and stirred gently at RT. This mixture was incubated until the colorless solution turned dirty brown, which revealed the formation of AgNPs.
Then, the solution was centrifuged at 13,000× g for 20 minutes, and the pellet was washed three times with distilled water. The AgNPs were resuspended in ethanol (EMD Millipore), dried at 75°C for 120 minutes, and stored at 4°C for a few days, and subsequent procedures were performed immediately. No instability was observed during the incubation.

Characterization and identification of AgNPs
Optical absorption spectra of the AgNPs were analyzed using an Epoch UV-visible (UV-Vis) spectrophotometer (BioTek, Bad Friedrichshall, Germany) within a range of 300-700 nm. For scanning electron microscopy (SEM), the powder samples were coated with a gold film to load the dried particles into the SEM instrument. The gold coating was performed with a Sputter Coater model SCD005 made by BAL-TEC (Pfäffikon ZH, Switzerland), and the images were captured at the desired magnification. The size distribution profile and charge of the synthesized AgNPs were evaluated by a dynamic light scattering particle size analyzer and a zeta potential analyzer (Malvern Zetasizer Nano-ZS), respectively. X-ray diffraction (XRD) measurement of the produced AgNPs was carried out using an X-ray diffractometer (Rigaku D/max 2500V) over the 2θ range of 10°-110° with a 2:1 sym scan axis.

The adjuvanticity of AgNPs
Different amounts of AgNPs (200 µg, 400 µg, 600 µg, and 800 µg) were added to 1 mL of inactivated rabies virus (Lot No 92-1; Pasteur Institute of Iran, Tehran, Iran) under a biological safety class II laminar hood in sterile conditions. The resulting mixtures were gently stirred at 4°C on a magnetic stirrer overnight. The loaded vaccines (0.5 mL) were injected intraperitoneally into six Naval Medical Research Institute (NMRI) mice per group on days 1 and 7 for immunization assessment. Inactivated rabies virus and the commercial vaccine containing alum adjuvant (Lot No 92-1; Pasteur Institute of Iran) were injected as negative and positive controls, respectively.
On the 14th day, blood samples were collected from the ocular vein, and the sera (at least 100 µL) were used for the determination of raised neutralizing antibodies. On the 15th day, the mice were intracerebrally challenged with 0.03 mL of challenge virus strain-11 (CVS-11, 20 LD50), and after the latency period of rabies disease in mice (5 days), the mice were monitored for 21 days. Any death was assessed by the fluorescent antibody test (FAT) on the dead mouse brains using fluorescein isothiocyanate (FITC)-conjugated anti-nucleocapsid polyclonal antibodies (Bio-Rad Laboratories Inc., Hercules, CA, USA) and a fluorescence microscope (E-200; Nikon Corporation, Tokyo, Japan). The presence of Negri bodies in the neuron cells confirmed rabies disease.

Neutralizing antibody titration by rapid fluorescent focus inhibition test
Neutralizing antibodies were measured by the rapid fluorescent focus inhibition test (RFFIT). The isolated sera were inactivated by incubation at 56°C for 30 minutes, and threefold serial dilutions of reference (WHO reference) and sample sera were prepared in minimum essential medium in triplicate. Subsequently, 50 µL of CVS-11 (50 FFD50 [50% focus-forming dose]; Pasteur Institute of Iran), sufficient to infect 80% of the cells in each well, was added to each well and incubated at 37°C for 1 hour. Minimum essential medium instead of CVS-11 and phosphate-buffered saline (PBS) instead of serum were used as negative and positive controls, respectively. Approximately 50 µL of BSR cell suspension (a clone of baby hamster kidney cells; Pasteur Institute of Iran) in minimum essential medium supplemented with 10% fetal bovine serum (5×10⁴ cells/well) was added to each well and incubated overnight at 37°C in 5% CO2. The plates were rinsed three times with PBS and fixed with 80% cold acetone for 30 minutes at 4°C.
Finally, the plates were stained with 50 µL of FITC-conjugated anti-nucleocapsid polyclonal antibody, and the percentage of infection was determined by fluorescence microscopy. The neutralizing antibody titers were calculated using the Reed and Muench method, a simple method for estimating 50% end points. 18

Determination of vaccine potency using the National Institutes of Health test

The National Institutes of Health (NIH) test is the gold standard method according to the WHO, British, European, and US Pharmacopeias for measuring the potency of adjuvants for rabies virus vaccine. [19][20][21][22] According to the results of the adjuvanticity test, the concentration of 15 mg/kg (300 µg AgNPs per injection dose) was used for the NIH test. Serial dilutions (1:5, 1:25, 1:125, and 1:625) of the test and reference (Rabipur, India, Lot No 2078) vaccines were prepared in NIH buffer (0.83 g NaCl, 0.11 g Na₂HPO₄, and 0.027 g KH₂PO₄ in 100 mL of distilled water adjusted to pH 4.7), and 0.5 mL of each dilution was intraperitoneally injected into ten NMRI mice (20 g; Pasteur Institute of Iran) on days 1 and 7. On the 14th day, the mice were intracerebrally challenged with 0.03 mL of the challenge rabies virus strain (CVS-11, 20 LD50). For the determination of the median LD50 (the amount of challenge virus required to kill 50% of the mouse population), 0.03 mL of each CVS-11 dilution (10⁻⁴, 10⁻⁵, 10⁻⁶, and 10⁻⁷) was injected intracerebrally into ten mice, and the LD50 was determined using the Reed and Muench method. In addition, one group of mice was injected intracerebrally with 0.03 mL of the test vaccine to verify viral inactivation. After the latency period of rabies disease in mice (5 days), the mice were monitored and the number of sick, paralyzed, or dead animals was recorded for 21 days. To confirm that mouse deaths were caused by rabies virus, the FAT method was performed on mouse brains to detect Negri bodies (containing viral nucleoprotein).
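The Reed and Muench 50% endpoint estimate used above for both antibody titration and LD50 determination can be sketched in a few lines of Python. The function name and the example mortality counts are hypothetical; only the dilution range (10⁻⁴ to 10⁻⁷, ten mice per dilution) mirrors the protocol described above.

```python
def reed_muench_log50(log_doses, dead, alive):
    """Estimate the log10 50% endpoint (e.g., LD50) by the Reed and Muench method.

    log_doses: log10 of each dose/dilution, ordered from highest dose to lowest.
    dead, alive: counts of dead and surviving animals at each dose.
    """
    n = len(dead)
    # An animal killed by a low dose would also die at higher doses, so deaths
    # accumulate toward higher doses; survivors accumulate toward lower doses.
    cum_dead = [sum(dead[i:]) for i in range(n)]
    cum_alive = [sum(alive[:i + 1]) for i in range(n)]
    pct = [100.0 * d / (d + a) for d, a in zip(cum_dead, cum_alive)]
    # Interpolate between the two doses straddling 50% cumulative mortality.
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            prop_dist = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return log_doses[i] + prop_dist * (log_doses[i + 1] - log_doses[i])
    raise ValueError("50% endpoint is not bracketed by the tested dilutions")

# Hypothetical mortality counts for CVS-11 dilutions 10^-4 to 10^-7, ten mice each:
log_ld50 = reed_muench_log50([-4, -5, -6, -7], [10, 8, 3, 0], [0, 2, 7, 10])
```

With these illustrative counts the estimate falls near log10 = −5.58, i.e., a dilution of roughly 10⁻⁵·⁶ would be expected to kill half of the mice.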
Finally, the mortality of the mice vaccinated with the reference and test vaccines was determined, and the potency was calculated by Probit analysis.

In vitro toxicity assay of the synthesized AgNPs

The L929 cell line (American Type Culture Collection TIB-67) was cultured in RPMI 1640 supplemented with 10% fetal bovine serum and 1X Pen-Strep (Thermo Fisher Scientific, Waltham, MA, USA) at 37°C in 5% CO₂. For the assay, 5,000 cells/well were cultured in a 96-well microplate (Thermo Fisher Scientific) and incubated overnight at 37°C in 5% CO₂. Approximately 100 µL of different concentrations of AgNPs (10⁻³ mg/mL, 10⁻² mg/mL, 10⁻¹ mg/mL, 10⁰ mg/mL, and 10¹ mg/mL) was added to each well in sextuplicate and incubated for 24 hours at 37°C. Then, 100 µL (0.05 mg/well) of MTT (Sigma-Aldrich Co., St Louis, MO, USA) was added to each well and incubated for 4 hours at 37°C. The supernatants were removed, 100 µL of dimethyl sulfoxide (Sigma-Aldrich Co.) was added to each well, and the absorbance was read at 570 nm and 630 nm using a spectrophotometer. All the data were analyzed with a one-way ANOVA statistical test using SPSS software Version 16.0 (SPSS Inc., Chicago, IL, USA), and a P-value <0.05 was considered a significant difference between groups.

Animal storage conditions

All the animals were housed at room temperature (20°C-23°C) on a 12-hour light/dark cycle, with unlimited access to food and water, and were acclimated for 1 week before the tests. Animal experiments were approved by the ethical committee of the Pasteur Institute of Iran and performed according to the ethical standards formulated in the Declaration of Helsinki.

Abnormal toxicity test for the synthesized AgNPs

The abnormal toxicity test was performed for the green-synthesized AgNPs according to the European Pharmacopeia 8.0. Outbred female CD1 mice (4-5 weeks old with a body weight range of 17-24 g) supplied by the Pasteur Institute of Iran were used in this study.
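The viability calculation behind the dual-wavelength MTT readout described above is not given explicitly in the text; a common convention, assumed here, is to background-correct each well with the 630 nm reference reading and normalize to the mean of the untreated control wells. The function name and example absorbances are illustrative only.

```python
def mtt_viability_percent(a570, a630, ctrl_a570, ctrl_a630):
    """Percent viability from dual-wavelength MTT absorbance readings.

    Each treated well is background-corrected (A570 - A630), then normalized
    to the mean corrected absorbance of the untreated control wells.
    """
    corrected = [s - b for s, b in zip(a570, a630)]
    ctrl_mean = sum(c - b for c, b in zip(ctrl_a570, ctrl_a630)) / len(ctrl_a570)
    return [100.0 * value / ctrl_mean for value in corrected]

# Illustrative replicate readings for one AgNPs concentration vs. PBS controls:
viability = mtt_viability_percent([0.80, 0.40], [0.10, 0.10], [1.00, 1.00], [0.20, 0.20])
```

For these made-up readings the corrected control mean is 0.8, giving viabilities of 87.5% and 37.5% for the two treated wells.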
23 Briefly, 300 µg of AgNPs was dissolved in 0.5 mL of a 9 g/L sterile and pyrogen-free sodium chloride solution R and then injected intravenously into five healthy female CD1 mice. As a control, 0.5 mL of a 9 g/L sterile and pyrogen-free sodium chloride solution R was injected intravenously into another five mice. All the mice were monitored for abnormal reactions, signs of illness, and weight loss for 24 hours after the time of injection. 24 A two-tailed unpaired t-test was used for calculating the P-value difference between the test and control data.

Safety test on AgNPs-loaded rabies vaccine for veterinary use

The safety test on the AgNPs-loaded rabies vaccine for veterinary use was carried out according to the European Pharmacopeia 8.0. Two healthy female dogs (Skye Terrier breed, 12 weeks old) that did not have antibodies against rabies virus were used. Briefly, the animals were examined by a veterinarian and confirmed to be healthy. The body temperatures were recorded for 3 days prior to administration of the vaccine, and the mean temperature was used as the baseline. Then, a double dose of the AgNPs-loaded rabies vaccine was intramuscularly injected, and the animals were monitored at least daily for signs of abnormal local and systemic reactions for 14 days after administration. 25 In addition, the body temperatures were monitored at 0 hours, 4 hours, and 4 days after administration. 26

Characterization of the synthesized AgNPs

The identity, size, and shape of the green-synthesized AgNPs were analyzed by UV-Vis spectrophotometry, XRD, SEM, and dynamic light scattering techniques. UV-Vis spectrophotometry is a very useful technique for the analysis of metal nanoparticles. Reduction of silver ions to AgNPs was confirmed by the surface plasmon resonance peak at 438 nm observed by UV-Vis spectroscopy (Figure 1). No additional peaks were obtained in the spectrum, which confirms that the synthesized products are Ag only. The morphology of the nanoparticles was evaluated by SEM.
Figure 2 shows a microscopic image of the synthesized AgNPs. A point-to-point study of the AgNPs by SEM reveals that the sizes of the nanoparticles vary within a range of 50-70 nm. The size and zeta potential of the AgNPs were also determined using the Malvern Zetasizer Nano-ZS (Figure 3). The size distribution profile of the biosynthesized AgNPs showed particles with a mean size of ∼63 nm. The zeta potential of the biosynthesized AgNPs was measured as −14.9 mV. The AgNPs were characterized by XRD analysis to investigate their composition and purity. Figure 4 shows the XRD patterns of the AgNPs synthesized by plant leaf extract. The characteristic diffraction peaks in this XRD pattern were observed at 2θ = 38.4°, 44.5°, and 64.8°, which are assigned to the (111), (200), and (220) crystallographic planes of the face-centered cubic structure of silver, respectively (Joint Committee on Powder Diffraction Standards file No 04-0783). No diffraction peaks other than the characteristic peaks of the silver structure were observed, showing the purity of the synthesized AgNPs.

The adjuvanticity analysis of AgNPs

To analyze the adjuvanticity of the synthesized AgNPs, different amounts of AgNPs were mixed with inactivated rabies virus and injected intraperitoneally into six NMRI mice. The mortality of the mice at different concentrations of AgNP-loaded vaccines is shown in Figure 5 and Table 1. The highest percentage of viability was seen at 15 mg/kg and 20 mg/kg, which was significant as compared with the PBS group when analyzed by chi-square test (P<0.05). Deaths in mice were confirmed by FAT using FITC-conjugated anti-nucleocapsid polyclonal antibodies. Moreover, RFFIT on the collected sera revealed that all the living mice had ≥3 IU/mL of neutralizing antibody titers, while the dead ones had ≤0.375 IU/mL of neutralizing antibodies (Table 1).
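As a consistency check on the fcc assignment above, Bragg's law converts each 2θ peak into an interplanar d-spacing, and the Miller indices then give the cubic lattice constant a = d·sqrt(h² + k² + l²). The Cu K-alpha wavelength (1.5406 Å) is an assumption here; the source does not state which radiation was used.

```python
import math

WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha; assumed, not stated in the source


def d_spacing(two_theta_deg):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    return WAVELENGTH / (2.0 * math.sin(math.radians(two_theta_deg / 2.0)))


def cubic_lattice_constant(two_theta_deg, hkl):
    """Lattice constant of a cubic cell from one indexed reflection."""
    h, k, l = hkl
    return d_spacing(two_theta_deg) * math.sqrt(h * h + k * k + l * l)


# Peaks reported above: 2-theta of 38.4, 44.5, and 64.8 degrees
for two_theta, hkl in [(38.4, (1, 1, 1)), (44.5, (2, 0, 0)), (64.8, (2, 2, 0))]:
    print(hkl, round(cubic_lattice_constant(two_theta, hkl), 3))
```

Under this assumption all three peaks give a ≈ 4.06-4.07 Å, close to the reference value for fcc silver (4.086 Å), supporting the phase assignment.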
Potency determination of AgNPs-loaded vaccine based on the NIH test

The potency of the AgNPs-containing vaccine was determined using the NIH test, and the result was compared with that of the alum-containing vaccine. Table 2 shows the mortality of the mice in the NIH test. The percentage of surviving mice among those that received the AgNPs-containing vaccine was comparable with that of the mice that received the alum vaccine, and no significant difference was observed using the chi-square statistical test (Figure 6). Statistical analysis showed that the calculated potency for the alum-adjuvanted vaccine is 1.897 and for the AgNPs-containing vaccine it is 1.303. In addition, a linear relationship was found between viability and vaccine dilutions in the NIH test (Figure 7).

In vitro and in vivo toxicity of the synthesized AgNPs

The toxicity of the green-synthesized AgNPs was investigated both in vitro and in vivo using the MTT assay and the abnormal toxicity test in mice, respectively. In vitro toxicity of the AgNPs was first evaluated by MTT assay on the L929 cell line. The data demonstrated that alum at the concentration of 10 mg/mL was significantly toxic (P<0.05), but AgNPs were not toxic at this concentration when compared with the control group that received PBS (Figure 8). The abnormal toxicity test of the green-synthesized AgNPs was performed according to the European Pharmacopeia 8.0. None of the mice showed abnormal reactions or signs of illness 24 hours after the time of injection (Table 3). All the animals gained weight, and the weight-gain measurements did not show any significant difference between the test and control groups (Figure 9).

Safety test for AgNPs-loaded vaccine

The safety test for veterinary use was carried out on two healthy female dogs. After administration of the AgNPs-loaded rabies vaccine, the animals were monitored daily for 14 days.
It should be noted that the normal mean body temperature for a Skye Terrier dog is in the range of 38.3°C-39.2°C, and true fever presents as body temperatures ranging from 39.5°C to 41.1°C. 26 The animals did not show signs of either disease or local and systemic reactions (Table 4), and no fever or death attributable to the vaccine was observed during monitoring.

Discussion

One of the current issues in vaccinology is the need for novel adjuvants and targeted delivery systems. Adjuvants have routinely been used in both research and commercial vaccines. However, toxicity and physicochemical properties that affect manufacturability have limited their clinical applications. Recently, nanoparticles have shown great promise for vaccine formulations. 14 The biological functions of nanoparticles are highly dependent on their dimensions and concentrations. Nanometric dimensions can be precisely controlled and easily tuned within the size range of viruses, and such particles are also able to capture or trap macromolecules such as proteins and nucleic acids. 11 AgNPs are attractive candidates for the delivery of macromolecules such as DNA, RNA, and proteins. Many studies have focused on the use of AgNPs as drug carriers, and several in vitro studies revealed little toxicity of AgNPs on various cell lines. 27,28 In this study, the adjuvanticity effect of AgNPs on the rabies veterinary vaccine was assessed, and the results were compared with a commercially available veterinary vaccine containing alum adjuvant. AgNPs were prepared with E. procera extract as the reducing agent for silver ions. The results showed that the size of the AgNPs in water was 50-70 nm with a negative zeta potential of −14.9 mV. In addition, the XRD results confirmed the identity of the biosynthesized AgNPs. The adjuvanticity effect of the nanoparticles was examined by injecting the challenge virus into mice after they had received different concentrations of AgNPs and monitoring mortality over a 21-day period.
The results revealed that the adjuvanticity of AgNPs was enhanced by increasing their concentration and finally reached a plateau, which might explain the similarity of the results obtained for the 15 mg/kg and 20 mg/kg concentrations. In addition, the mice were evaluated for neutralizing antibodies using the RFFIT method. The antibody titer results were in line with the mortality percentages of the mice at the different concentrations of AgNPs and were higher in the groups that received 15 mg/kg and 20 mg/kg. Surprisingly, at these concentrations, the adjuvanticity effect is comparable with alum adjuvant. This result is similar to the report by Xu et al 13 regarding their two protein models, namely ovalbumin and bovine serum albumin. However, this phenomenon was observed at lower concentrations for both protein models in their study (2 mg/kg and 10 mg/kg for bovine serum albumin and ovalbumin, respectively) with AgNPs that have a size of 141 nm and a charge of −30.6 mV. The difference can be explained by the fact that the interaction of nanoparticles with proteins and cells depends on the surface properties. 29,30 Furthermore, it has been reported that the BALB/c mice they used 31 have more potent humoral immune responses. The concentration of 15 mg/kg AgNPs was further analyzed by the NIH test for potency calculation and comparison with alum adjuvant. It must be noted that the NIH test is the gold standard method for testing the potency of rabies vaccine. The NIH results elucidated that the potency of the nanoparticle-containing vaccine was comparable with the commercial vaccine containing alum (the same batch), and both are acceptable by the WHO as potent animal vaccines. In addition, the mortality of mice correlates with the dilution factor used in the NIH test. It must be emphasized that when different dilutions of vaccine were prepared, the adjuvanticity effect of AgNPs in the diluted vaccines was not affected.
This indicates that the AgNPs were homogeneously mixed with the antigen, and the interaction between the inactivated rabies virus and the nanoparticles was not affected by dilution. When the survival rate at the 1/5 dilution of the NIH test (which received 3 mg/kg of AgNPs) was compared with the survival rate at the 5 mg/kg nanoparticle concentration (in the initial test of the study), surprising results were obtained. When the vaccine was diluted to one part in five, the survival rate was 73.3%, whereas when the nanoparticle at the 5 mg/kg concentration was injected, the survival rate was 33.3%. This phenomenon can be explained as follows: when the vaccine was diluted with the NIH buffer, the antigen was also diluted and the nanoparticle/antigen ratio was not changed. In other words, one of the reasons why the mortality of mice at 5 mg/kg per dose is higher than that at 3 mg/kg per dose is that, at this concentration, the ratio of

Table 3 Abnormal reactions and signs of illness in mice monitored for 24 hours after the injection of 300 µg of AgNPs
Considerations for the Utilization of Questionnaires in Collegiate Team Environments

Questionnaires are commonplace in both team and individual sports as a subjective tool to assess an athlete's psychological perception of, and behavioral practices toward, their performance and physical preparation. A consistent and systematic approach is required when administering questionnaires to an athlete or group of athletes. Proper questionnaire design and administration methods allow a strength and conditioning coach to effectively analyze the data and make actionable interventions when necessary. There are challenges in sports, especially team environments, which strength and conditioning professionals must maneuver around to better help athletes. These challenges include sudden changes in practice or travel, coaching changes, administrative technicalities, athlete cooperation, and many other factors. When challenges arise, questionnaires are a useful tool to gauge how an athlete responds to such changes. The purpose of this report is to outline strategies and considerations for strength and conditioning professionals to effectively implement questionnaires in the collegiate environment.

Introduction

The widespread use of questionnaires within team and individual sports provides an analytical tool to assess subjective perceptions from individual athletes. Athlete self-reported measures utilizing wellness questionnaires and various survey tools provide a low-budget means of evaluating training load and gauging athletes' psychological perceptions and behavioral practices throughout the training year. Questionnaires can help identify an individual athlete's awareness of the physical and psychological difficulty of their training on a day-to-day basis 6 .
Additionally, evidence supports that the utilization of subjective ratings of perceived exertion (RPE) in questionnaires may provide more accurate measures of an athlete's internal training load, a variable that is typically measured objectively through heart rate monitoring 4 . An additional and unique aspect of questionnaires implemented in a collegiate environment is that strength and conditioning professionals often develop questions that are specific to the team or athlete at any given moment. Due to sudden changes in assessment needs based on coaches, players, and the competitive environment, it is not always feasible to utilize well-researched questionnaires that have been externally and internally validated. Strength and conditioning professionals frequently face the challenge of designing training programs that maximize performance, meet the demands of the sport, and maintain the health and wellbeing of their athletes 7 . Research has shown that performance can be enhanced when workload data are assessed properly and disseminated effectively 7 . More specifically, internal workload is defined as the physiological and/or psychological stress an athlete experiences in response to a given amount of work (external workload) completed within a training session or competition. Assessing internal workloads allows strength and conditioning professionals to analyze the relationships between objective external workloads and internal workloads 3 . While monitoring heart rate is an objective method to assess internal stress, questionnaires can be an optimal way to assess athletes' psychological internal stress by gauging their opinions of and perceptions toward training and other environmental stressors 3 . Utilizing questionnaires is a cost-effective monitoring strategy that can be implemented by any strength and conditioning professional.
Therefore, the purpose of this report is to provide insight and considerations for creating and implementing questionnaires and surveys in a collegiate environment. Additionally, this report aims to highlight challenges, obstacles, and strategies for successfully collecting survey data from athletes, along with reporting methods to optimize a small component of a larger monitoring program.

Challenges for Practitioners

Successful athlete monitoring via questionnaires requires a concerted daily effort from well-connected strength and conditioning professionals. Regardless of questionnaire design or response flexibility, there are a number of common challenges and barriers that affect the implementation and continuation of a successful subjective athlete monitoring program. The questions below characterize some of the most common challenges strength and conditioning professionals encounter when establishing an athlete monitoring program:

- How do I convince my coaches and athletes to begin monitoring in the first place?
- How do I generate buy-in and encourage honest and consistent feedback?
- What questions should I ask? How many and how often?
- How do I know if my questions are valid, reliable, and unbiased?
- What's the best way for me to gather and store data in the long term?
- Once I have gathered the data, how do I analyze it?
- How will I know if a meaningful change has occurred or if an intervention is required?
- What's the best way for me to communicate these findings with my staff?
- What if my staff and athletes are not open to changing plans, habits, or routines?

Small strength and conditioning staffs at universities with a large array of varsity sports may face significant barriers when implementing a questionnaire on a daily basis. The demands of training a large number of athletes with a minimal staff take an exorbitant amount of time and can limit coaches' ability to implement monitoring programs, especially subjective questionnaires.
Even among well-staffed departments that may be in a good position to implement a monitoring program, the notion of interpreting or managing a continuous influx of data (internal and external) is often overwhelming. To manage robust data sets from different sources, athlete management systems can be utilized to alleviate the workload of understaffed strength and conditioning departments. However, these solutions can be cost prohibitive, especially at colleges where departmental budgets cover only necessary operating expenses. Many of the above challenges stem from a real or perceived lack of time, people, resources, and expertise in designing, implementing, and leveraging the insights generated from an effective monitoring program.

Subjective Data Collection with Questionnaires

Questionnaires should consist of brief questions designed to gauge the knowledge, opinions, and behavior of a population (i.e., athletes) 19 . Questionnaires can provide strength and conditioning professionals with subjective data that can be used for further research and analysis, administration, or policymaking. For example, an athlete could report low fuel or low hydration for several days in a row, which may prompt strength and conditioning professionals to take that information and look for ways to enhance fueling or improve the athlete's access to fuel and hydration. Questionnaires are widely used because they are easy to interpret and economical to implement. When questionnaires are efficiently implemented, they can be administered to a large population in a brief period of time. Therefore, questionnaires can be very useful when working with numerous teams composed of diverse athletes with highly variable daily schedules. Additionally, they are non-interventional and non-invasive, and thus have minimal ethical concerns, especially when respondents are informed of risks and voluntarily consent to their participation 19 .
It is important that strength and conditioning professionals exercise caution when analyzing data from questionnaires, as low response rates and selection bias can frequently occur. The quality of a questionnaire is greatly dependent on the strategic choice of words, the volume of questions, and the design of the questions utilized. These factors make it critical that the questionnaire is meticulously designed and, if possible, internally validated before being implemented on a large scale. A primary consideration for coaches is to ensure each item asks a specific question that helps improve field performance and reduce the likelihood of injury. Anecdotally, a mix of closed-ended and open-ended questions is likely to keep the respondent interested and attract a high response rate. Closed-ended questions are easily answered with a yes or no, or perhaps a numerical rating, while open-ended questions commonly involve more in-depth thought and may warrant a few sentences as an appropriate response. When both types of questions are incorporated into a survey, it is recommended to begin with closed-ended questions 19 . The quantity of questions is best determined by the strength and conditioning professional's ability to manage the information. Although fewer questions may be the best place to start to gain an initial sense of buy-in, questionnaires that can be completed quickly have the highest response rate 14 . Depending on the complexity of the questions, this may include up to 5-10 questions. If a lengthier questionnaire is necessary, research suggests limiting it to 25-30 questions, aiming for it to be completed in less than thirty minutes, and administering it very infrequently 14 .

How to Administer Questionnaires

Questionnaires can enhance communication and may boost athlete buy-in.
When implementing questionnaires, one should consider the number of questions asked and the order of specific questions. Proper question volume and order increase effectiveness and prevent poor response rates 13 . A beneficial approach is to administer athlete questionnaires at a designated time, whether at the first opportunity in the morning, post-lift, or at the end of the day 13 . However, the most successful approach may be allowing the athlete to complete the questionnaire at their convenience. When administering a questionnaire daily, a sound recommendation is to limit it to five concise questions, which ensures the athlete receives timely feedback and fosters positive relationships between the athlete and strength and conditioning professionals 13 . Consequently, this will also strengthen communication with the sport's coaching staff and additional support staff members such as dietitians, athletic trainers, and sports psychologists.

Athlete Responses

When considering athlete responses on questionnaires, there are a multitude of factors to weigh. First, it is important to make sure the questionnaires are administered in a way that encourages consistent and truthful responses. The strength and conditioning professional should discuss the importance of answering all questions honestly in order to have the most accurate data to best help the athlete 11 . Individually administering questionnaires promotes truthful responses and avoids group-based responses. Individualized questionnaires specific to each athlete can also strengthen the bond between the athlete and the strength and conditioning professional, for this promotes the idea that the strength and conditioning professional has a specific interest in the health and well-being of each athlete and is invested in them on a personal level.
Another factor strength and conditioning professionals must consider when analyzing athlete responses is that an athlete's perception of their workload may vary strongly based on their individual training experience 13 . When comparing a freshman athlete with a senior athlete, the senior athlete will most likely have a higher training age. More often than not, older athletes have been exposed to more vigorous collegiate strength and conditioning programming than younger athletes and have a more accurate perception of exercise intensity. Further, training age may be related to a more accurate perception of training intensity 11 . When reviewing athlete response data, the strength and conditioning professional must actively look for outliers. An outlier is a data value that is extremely high or low and far outside an athlete's normal state, typically greater than 1-2 standard deviations away from the mean. For example, an extremely high RPE during a light practice or a drastically low hydration status may be important to consider when it comes to the recovery and rehabilitation of an individual athlete 12 . Outliers may warrant additional questions and probing by the staff to identify whether a real problem exists.

Types of Questionnaires

Wellness questionnaires and ratings of perceived exertion (RPE) scales can be used to answer a multitude of performance questions, such as how many hours of sleep an athlete has had, how hydrated or fueled the athlete might be, the amount of stress they may be under, or how much general fatigue and soreness they may be feeling. As a coach designs a questionnaire, it is important to first determine the goal and what insight is critical for the coaching staff. A practitioner must assess whether the survey questions should help identify specific types of psychological and emotional stress an athlete may encounter daily 15 . Athletes can encounter eustress (positive) and distress (negative) from their daily environments.
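The 1-2 standard deviation rule of thumb for outliers described above can be automated against each athlete's own baseline responses. A minimal sketch, with a hypothetical function name, default threshold, and example RPE history:

```python
from statistics import mean, stdev


def is_outlier(baseline, today, z_threshold=2.0):
    """Flag today's response if it sits more than z_threshold standard
    deviations from the athlete's historical mean for that question."""
    mu = mean(baseline)
    sigma = stdev(baseline)  # sample standard deviation; needs >= 2 data points
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold


# Hypothetical RPE history for one athlete across eight light practices:
history = [6, 7, 6, 7, 6, 7, 6, 7]
flag_high = is_outlier(history, 9)    # extremely high RPE on a light day
flag_normal = is_outlier(history, 7)  # within the athlete's usual range
```

A flagged value would then prompt the follow-up questions and staff probing the text recommends, rather than an automatic intervention.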
Strength and conditioning professionals should be aware of the sources of stress athletes encounter on a daily basis, including academic stress, relational stress, and stress from coaches, professors, family members, etc. Using a questionnaire can help determine the athlete's perception of their physical response to various sources of stress in addition to their training and sports performance. Questionnaires can be organized in a variety of ways when seeking the information above. Strength and conditioning professionals may utilize RPE scales, such as the Borg Rating of Perceived Exertion scale, which consists of a quantitative value from 6-20 17 . Hydration data can be collected by asking an athlete how many bottles of water they drank the prior day, or by using a color scale in which the athlete selects the answer that most closely mimics the color of their urine. Fueling questions can be written in pursuit of specific numbers of meals, such as "How many meals (or snacks) did you eat yesterday?" with a selector button or fill-in-the-blank, or they can be written as simple yes/no questions like "Did you eat at least three complete meals yesterday, and breakfast prior to training today?" 9 .

Methods for In-season Data Collection

When it comes to in-season data collection, the timing of data collection is crucial. Depending on the athlete, it may or may not be beneficial to collect data through questionnaire administration on the day of a performance. Some athletes may have a tendency to become overstimulated or start overthinking, which is not an ideal mental state going into a performance. Therefore, it is not recommended to administer a questionnaire too close to a performance.
However, with regard to post-performance, most athletes will greatly benefit from the administration of a questionnaire, for it may give an inside look at how an athlete is feeling, allowing the strength and conditioning professional to see what needs to be done to aid the athlete's recovery process 12 . Once these data have been collected, it is time to apply the individual-level data to the team and begin using the information as a catalyst to drive conversations. These conversations may be prospective with regard to the training you are preparing, or retrospective as an evaluation of your program's efficacy. They may also be a foundational component of the strength and conditioning professional's bond with their athletes, as well as an opportunity to discuss why their perception may or may not have aligned with given expectations. Examples of team-level questions to analyze include:

- Did the training stimulus imposed yesterday accomplish the specific athlete's goal, or was the result an example of underachievement or overachievement?
- Do strength and conditioning professionals need to provide their athletes with more education on topics like sleep hygiene, nutrition, hydration, time management, recovery modalities, etc. 12 ?
- Across multiple semesters, is there a team-wide trend regarding stress, academic calendars, and competitive schedules?
- Are there more apparent trends that may need a closer look?

Questions like these may also prompt the strength and conditioning coach to dig deeper in search of a simple change or behavioral adjustment. For example, the strength and conditioning coach might conclude that a simple adjustment would be to increase the quality of an athlete's recovery, thus positively influencing their athletic performance 12 .
Ratings of Perceived Exertion (RPE): The RPE scale is a common method for determining exercise intensity through the athlete's perception of how easy it is to breathe or contract the working limbs during physical tasks 5 . Because of this, it is considered a subjective measurement. Two scales commonly used to express RPE are the 0-10 scale and Borg's 6-20 scale. The 0-10 scale starts at 0 (no exertion at all) and ends at 10 (very heavy exertion) 2 . Borg's RPE scale, developed by Swedish researcher Gunnar Borg, is similar to the 0-10 scale, but starts at 6 (no exertion at all) and ends at 20 (maximal exertion), leaving more specific numbers for athletes to quantitatively rank their efforts 17 . Additional variations of RPE scales may utilize quantitative variables while including a red-yellow-green color gradient and/or various "emoji" faces relating to the emotions that training induced. The 0-10 scale is typically better for athlete questionnaires because of its greater ease of comprehension, while Borg's 6-20 scale is primarily utilized in scientific laboratory settings. Any RPE scale with a color gradient or various emoji faces violates many optimal psychometric principles and therefore should be avoided when monitoring an athlete through the use of questionnaires 5 . RPE is used in research studies because it has been quantitatively linked to markers of respiration, circulation, and physical output during exercise. It is often used within training programs to describe the intensity of a particular session, for it acts as a temporal phenomenon that is instantaneous to any given point during the athlete's training or competition. This has an important role in the self-regulation of behavior, such as the autoregulation of external load and pacing 5 .

Session-RPE (sRPE): When athletes recall their RPE for the entire training session or competition, it represents the session RPE, otherwise known as sRPE.
The session's duration refers to the length of the session expressed in minutes. A nominal score is given by an athlete to describe their RPE of "mean training intensity" during that training session or competition. Training load or competition load can be calculated by multiplying the session's duration in minutes by the quantitative RPE measurement 8 . For example, if an athlete reports an RPE of 7 on a 0-10 scale throughout a 45-minute training session, this would lead to a training load of 315. When analyzing sRPE and training load variables, it is best to choose only one RPE scale for athletes to use so numbers can be compared across the board. Session-RPE methodology has been shown to be valid, reliable, and very useful in the performance arena. Additionally, other subjective measures may also prove to be highly valuable. Coaches and strength and conditioning professionals should not exclude the possibility of adding objective measures (HR measures adapted for endurance sports, and/or GPS measures adapted for team sports) to subjective measures. These objective measures have the potential to further complement data obtained from subjective measures 8 .

Wellness Questionnaire: A wellness questionnaire is described as a general questionnaire distributed to athletes regarding their health, wellness, and performance. This kind of questionnaire works to gauge the general health and well-being of an athlete at a specific time 16 . Because of this, it may be valuable to administer wellness surveys on a consistent basis, numerous times throughout each semester 16 . Common areas that wellness surveys assess may include nutrition, hydration, stress, and sleep, all of which are further expanded on below.

- Nutrition: Nutritional components may be assessed, which may include questions regarding the level of intake of macronutrients such as protein, carbohydrates, and fats.
In instances where the athlete is not expected to know their macronutrient values offhand, a wellness questionnaire may ask them to describe their meals. Meal frequency or meal timing components may be assessed as well, with questions such as "How many meals did you eat today?" "How many times did you have a snack?" and "What times did you eat each one of your meals or snacks?" 9

- Hydration: Hydration components may be assessed through questions about hydration amount or method, such as "How many ounces of water did you drink today?" or "Did you have any electrolyte-rich sports drinks today? If so, which ones?"
- Stress: The athlete's stress components may be included within a wellness survey in a multitude of ways, but common questions may be along the lines of "How stressed do you feel today?" and "What is the source of the stress you are experiencing (athletic, academic, etc.)?" 16
- Sleep: Various sleep components may also be analyzed within a wellness survey, such as the amount or quality of sleep the athlete got, or the time they went to bed. This may include questions such as "How long did you sleep last night?" and "How well-rested do you feel?" Sleep variables may also be measured through wearables, so it may be important to consider that data too 10 .

It is imperative for strength and conditioning professionals to assess factors affecting student-athlete wellbeing. A brief daily assessment can help in this mission by asking better questions directly to the athletes in order to help their physical and mental performance. In a study seeking to underscore the reciprocal connection between the body and mind, the authors suggest measuring health behaviors such as diet, sleep, exercise, and alcohol use 15 . The measurement of these specific behaviors may circumvent the need to directly measure mental health symptoms, which many athletes may be hesitant to do 15 .
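Returning to the sRPE method described above, the training-load arithmetic (session RPE multiplied by session duration in minutes) can be sketched as a small helper. The function name is illustrative, and the 45-minute, RPE-7 example from the text is reproduced.

```python
def session_load(rpe, duration_min):
    """sRPE training load = session RPE (0-10 scale) x duration in minutes."""
    if not 0 <= rpe <= 10:
        raise ValueError("expected the 0-10 RPE scale")
    return rpe * duration_min

# Example from the text: RPE 7 over a 45-minute session -> load of 315.
print(session_load(7, 45))  # -> 315
```

Using a single agreed-upon scale, as recommended above, keeps these load numbers comparable across athletes and sessions.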
Data Storage and Preservation Strategies

Physical data that includes any information or written forms about the athletes should be stored in a locked file cabinet in the performance office. Electronic data should always be stored in an encrypted and password-protected file on the head strength and conditioning professional's computer or the university's cloud drive. Most athlete management software (AMS) utilized in these settings is encrypted and meets Health Insurance Portability and Accountability Act of 1996 (HIPAA) standards 18 . However, it remains important that strength and conditioning professionals thoroughly understand all HIPAA requirements to adequately store and preserve athlete data. As technology improves alongside the growing data storage and collection market, one's ability to analyze data is moving into the palm of the hand via cell phone applications. There are numerous websites and applications that automatically create questionnaires, which can be administered to a team on a daily basis. Widely used applications allow strength and conditioning professionals to create assessment polls/questionnaires within the application, and the data and information is then stored on a cloud or integrated into the AMS. A good rule of thumb for strength and conditioning professionals is that if it is measured, it must also be managed. Questions for the university prior to beginning data collection:

- Does the university consider this data medical documentation? If so, what is the policy on the number of years that data must be stored, i.e., the "data retention policy"? Because such records contain Personal Health Information, they must be stored for 10 years. The HIPAA Privacy Rule also contains standards for individuals' rights to understand and control how their health information is used 18 .
- Does the strength and conditioning professional have the ability to keep the data electronically, or do the original documents need to be stored?
This may depend on how the strength and conditioning professional is collecting the data. This subset regards individually identifiable health information a covered entity creates, receives, maintains, or transmits in electronic form. This information is called electronic protected health information, or e-PHI. The Security Rule does not apply to PHI transmitted orally or in writing.

- Determine whether any folders owned by the individual are institutional property, in which case the university must have the ability to access those folders even if the individual leaves the university. As a strength and conditioning professional, it is important to understand the specific university's Institutional Data Policy.
- Will athletes be voluntarily providing information, or will it be mandatory? If data collection is mandatory, what will the potential consequences be for not completing the survey?
- Which individuals will be provided with the information after data collection?
- Which individuals will be managing the data and be responsible for any interventions? For example, if an athlete records feeling sore for 3 days, will the athletic trainer or the strength and conditioning professional be responsible for prescribing an intervention to address the soreness?

Athlete and Coach Buy-In

Prior to questionnaire administration, it is important that a strength and conditioning coach ensures it will be taken seriously by athletes and coaches, and that accountability will be maintained in order to consistently collect data over time. In other words, the questionnaire must lead to both athlete and coach buy-in 4 . It is the responsibility of strength and conditioning professionals to get athletes to not only buy in to their overall performance program, but to welcome the program-specific methodologies of athletic performance monitoring as well.
A primary purpose for questionnaire use within a larger performance program is to provide a wealth of information that will allow strength and conditioning professionals to help athletes develop on both a physical and psychological level. Encouraging athlete buy-in can be done through weekly education methods, such as "90 Seconds of Why," a series utilized by Greg Adamson, Associate Director of Olympic Sports Performance at the University of Tennessee. 90 Seconds of Why features a short video or teaching explanation that helps athletes become educated on the importance of performance components such as these daily questionnaires. Additionally, it is vital that strength and conditioning professionals actively seek out ways to include athletes in what is known as the journey of the questionnaire. On a season-by-season basis, it is valuable to conduct a quick survey to see which questions athletes believe they should or should not be asked. This does not mean that athletes have the final say over a questionnaire's content, but gauging their opinions can be considered valuable feedback. Ultimately, questionnaire content should be a collaboration between sport coaches and strength and conditioning professionals, as they can help prioritize what must be analyzed. The inclusion of athletes' feedback in this realm can further accelerate buy-in, as well as help develop meaningful athlete-coach relationships built on communication and trust 4 . Through the growth of these positive relationships, education can be shared on a more intimate level, allowing a higher level of ownership regarding an athlete's questionnaire responses and day-to-day performance. Yet another way to increase athletes' accountability is to remind them that it is impossible to manage what is not measured.
Collecting data over time directly involves the athletes in a positive management cycle, allowing the creation and enhancement of positive habits, which in turn can lead to greater cumulative outcomes. Taking ownership and exercising personal responsibility is not always easy and can inevitably lead to hardship at times. However, this hardship allows athletes to work on their ability to be consistent, aiding them on their journey to become better leaders. This shift in mentality allows athletes to progress in their ability to win championships. Although all this stems from a questionnaire, it can be so much more than that. The pivotal concept of athlete buy-in is a direct path to a significant advantage in the competition arena.

Reporting Data to Coaches

When reporting data to sport coaches, it is imperative to explain the purpose of the questionnaire as it corresponds to both the individual athletes and the team environment and culture. Student-athletes must understand the value of honest information and, consequently, how it can affect training, recovery recommendations, or education 12 . This can be achieved through the inclusion of specific examples of the data's positive effect on sports performance. It is also important that sport coaches understand that through the delivery of this data, the strength and conditioning professional is not trying to tell them how to do their jobs or how to operate practices, but instead to provide information that can lead to high-level success for the team. Informing the coaches that these are tools to create conversation is key. Conversations with sport coaches may lead to introspection by strength and conditioning professionals, which may be helpful to consider as they engage in conversations with athletes. Acute spikes in stress (physical, emotional, psychological, etc.) may lead to less desirable training outcomes and therefore an athlete falling short of their maximal potential.
Sport coaches may not be able to comprehend data-driven concepts without a collection of comparison data being presented in an understandable format. Every sport coach is different when it comes to understanding and comprehending data. This may be a bar graph, table, radar chart, or one of many other ways to present data. As a strength and conditioning professional, exercising flexibility and adapting to whichever presentation style is necessary is a vital skill. Once the sport coach is able to receive data in a manner that is digestible to them, it is the responsibility of strength and conditioning professionals to accurately explain the positive and negative effects of undulations in various stressors. To optimize reporting, a sound recommendation for strength and conditioning professionals is to establish a standardized framework allowing efficient collection and centralization of data. More specifically, it should be agreed upon how and when data should be collected, formatted, processed, and stored. The importance of strength and conditioning professionals working synchronously when collecting data cannot be overstated. Data collection should mimic standardized methods, similar to any research setting, in order to ensure extraneous variables and sources of error are not influencing the outcome. Ensuring the data is clean, consistent, and free from error markedly improves efficiency and confidence when it comes time to interpret results. One of the most common ways for practitioners to aggregate data in spreadsheets is a format referred to as wide format. Using this format, each row represents a single athlete's data over time, growing in width as more and more data is added (Table 1). This is a natural format to use since changes within an individual athlete can be easily viewed from left to right.
However, a downside to this format is that post-processing of data by visualization software becomes challenging because columns are not discernable by a categorical heading. A simple method for combatting this is to format datasets in what is known as the long format. When using the long format, each column is labeled with a specific variable of interest, such as "Date" or "Fatigue," while each row is a distinct instance of an athlete's entry of this variable (Table 2). Alternatively, variable names can be aggregated in a single column known as an 'ID' or a 'Key' field. When considering which platform to utilize for managing and visualizing information, Excel is the most familiar and generally has the lowest barrier to entry, making it a practical starting point for reporting. However, it is important that the strength and conditioning professional understands that as the dataset continues to grow, or becomes more complex, cell-referenced software may struggle with the processing requirements and can easily 'break' if the reference pathways are compromised. Considering this, it is recommended that practitioners consider alternative software that is better designed to manage, manipulate, and visualize large volumes of data early on, so as not to become overwhelmed in the transition at later stages. PowerBI is often a good solution, since most colleges with Office365 as their primary campus platform will inherently have a license for PowerBI usage within their athletic department. Utilizing new software for data management will require some upskilling, as strength and conditioning professionals must become comfortable with using it effectively. In a situation where there may be time constraints, an athlete management solution may prove to be more effective.
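The wide-to-long reshaping described above can be sketched in plain Python; the athlete names, dates, and the "Fatigue" variable here are hypothetical and only illustrate the transformation.

```python
# Minimal sketch of reshaping wide-format rows (one athlete per row, one column
# per date) into long format (one row per athlete/date observation).
def wide_to_long(wide_rows, id_field, value_name):
    long_rows = []
    for row in wide_rows:
        for key, value in row.items():
            if key == id_field:
                continue  # the identifier column is carried over, not melted
            long_rows.append({id_field: row[id_field], "Date": key, value_name: value})
    return long_rows

# Hypothetical wide-format wellness data.
wide = [
    {"Athlete": "Ava", "2024-01-08": 7, "2024-01-09": 6},
    {"Athlete": "Ben", "2024-01-08": 5, "2024-01-09": 8},
]
long_rows = wide_to_long(wide, "Athlete", "Fatigue")
# -> 4 rows, e.g. {"Athlete": "Ava", "Date": "2024-01-08", "Fatigue": 7}
```

Spreadsheet tools and libraries such as pandas (via `DataFrame.melt`) perform this same transformation, which is what makes long-format data friendly to visualization software.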
Recommendations for Structuring and Delivering the Report

After the initial data has been collected and interpreted, data visualization is a critical part of the process to display the results in a meaningful way. Effectively communicating the results to sport coaches can make or break whether the coach will utilize the data. Properly utilizing the data will influence the perceived value that monitoring and data collection bring to the team and organization. It is important to understand that the goal of athlete questionnaires is to convey data in a simple way to a coaching staff and athletes prior to intervention recommendations. It should be acknowledged that few coaches or athletes have high graphical literacy, otherwise known as 'graphicacy' 1 . Therefore, it is critical that sport coaches and athletes are not tasked with interpreting complex and multivariate plots laden with statistical symbols and mathematical jargon 1 . More often, coaches and athletes are familiar with tabular box scores containing integers and percentage values in rows and columns. Where possible, strength and conditioning professionals should aim to highlight the most important information first by electing to use visual tools that are familiar and easily interpretable by key stakeholders. The format and style of any report depend on contextual interest and the relationship that exists between the strength and conditioning professionals and the sport coaching staff. In general, reports can be presented in a visual or text-based format and can be delivered formally during staff meetings or informally via dispersed dashboards with web-accessible reports (Figure 1), printouts, and digital messaging. Visual formats containing graphs, charts, and figures benefit from being information-rich. This allows them to make the most of the real estate available in order to convey meaning.
Conversely, text-based formats such as written player reports allow the strength and conditioning professional to be more descriptive about the wider context of what the data indicates. A combined approach of text and visual presentation of the data is likely the optimal approach to enhance data comprehension. Slow turnaround of data-related insights has the potential to undermine an otherwise sound monitoring program and could be considered negligent if the athlete has an illness or is injured. It should be the priority of the strength and conditioning coach to act on the data as soon as it becomes available, given that it has been efficiently processed and interpreted. Lastly, when it comes to conveying the inferred meaning of questionnaire data, a strength and conditioning coach should avoid deterministic or predictive language. Phrasing such as 'Caroline is 1.5 standard deviations below her typical wellness score, so if she practices today, she is going to get injured' can be viewed negatively by sport coaches and may be considered weaponizing of the data. Instead, consider introducing the magnitude of the change with stats or z-scores and how it may warrant modification following a collaborative conversation with the athlete and support personnel. A better way to approach this finding might be, 'Caroline is 1.5 standard deviations below her typical wellness score, and this is quite a large change for her. The sports medicine report shows the athlete has not been coming in for treatment lately either. I suggest Coach Cain contact her to gather more detail and that we add her to the pre-practice meeting regarding options for practice adjustment.'

Conclusion

In summary, strength and conditioning professionals can effectively utilize questionnaires as part of a larger monitoring program to gauge athletes' perception of practice, training, competition, and other aspects of their life.
Questionnaires add a necessary component to a monitoring program as strength and conditioning professionals are tasked with minimizing injury risk, maximizing performance, and maintaining an optimal level of health and well-being for all student-athletes. Questionnaires are a single, albeit important component of a high-performance program, allowing strength and conditioning professionals to improve athlete-to-coach communication, education, and potentially sports performance.
Congruence Properties of Indices of Triangular Numbers Multiple of Other Triangular Numbers

It is known that, for any positive non-square integer multiplier $k$, there is an infinity of multiples of triangular numbers which are triangular numbers. We analyze the congruence properties of the indices $\xi$ of triangular numbers that are multiples of other triangular numbers. We show that the remainders in the congruence relations of $\xi$ modulo $k$ come always in pairs whose sum always equals $\left(k-1\right)$, always include 0 and $\left(k-1\right)$, and only 0 and $\left(k-1\right)$ if $k$ is prime, or an odd power of a prime, or an even square plus one, or an odd square minus one or minus two. If the multiplier $k$ is twice the triangular number of $n$, the set of remainders includes also $n$ and $\left(n^{2}-1\right)$, and if $k$ has integer factors, the set of remainders includes multiples of a factor following certain rules. Finally, algebraic expressions are found for remainders in function of $k$ and its factors. Several exceptions are noticed and superseding rules exist between various rules and expressions of remainders. This approach allows to eliminate in numerical searches those $\left(k-\upsilon\right)$ values of $\xi_{i}$ that are known not to provide solutions, where $\upsilon$ is the even number of remainders. The gain is typically in the order of $k/\upsilon$, with $\upsilon\ll k$ for large values of $k$.

Introduction

Triangular numbers $T_t = t(t+1)/2$ are one of the figurate numbers enjoying many properties; see, e.g., [1,2] for relations and formulas. Triangular numbers $T_\xi$ that are multiples of another triangular number $T_t$,

$$T_\xi = k T_t \qquad (1.1)$$

are investigated. Only solutions for $k > 1$ are considered, as the cases $k = 0$ and $k = 1$ yield respectively $\xi = 0$ and $\xi = t$, $\forall t$. Accounts of previous attempts to characterize these triangular numbers multiple of other triangular numbers can be found in [3,4,5,6,7,8,9].
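As a quick numerical illustration of definition (1.1), the sketch below checks a few solution pairs $(t, \xi)$ for $k = 2$; the pairs are computed here by hand and are consistent with the $k = 2$ ratios discussed in the paper.

```python
def T(n):
    """Triangular number T_n = n(n+1)/2."""
    return n * (n + 1) // 2

# Solution pairs (t, xi) of T_xi = k * T_t for k = 2, e.g. T_20 = 2 * T_14.
for t, xi in [(0, 0), (2, 3), (14, 20), (84, 119)]:
    assert T(xi) == 2 * T(t)
print(T(14), T(20))  # -> 105 210
```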
Recently, Pletser showed [9] that, for non-square integer values of $k$, there are infinitely many solutions that can be represented simply by recurrence relations of the four variables $t$, $\xi$, $T_t$ and $T_\xi$, involving a rank $r$ and parameters $\kappa$ and $\gamma$, which are respectively the sum and the product of the $(r-1)$th and the $r$th values of $t$. The rank $r$ is defined as the number of successive values of $t$ solutions of (1.1) such that their successive ratios are slowly decreasing without jumps. In this paper, we present a method based on the congruence properties of $\xi \pmod{k}$, searching for expressions of the remainders in function of $k$ or of its factors. This approach accelerates the numerical search of the values of $t_n$ and $\xi_n$ that solve (1.1), as it eliminates values of $\xi$ that are known not to provide solutions to (1.1). The gain is typically in the order of $k/\upsilon$ where $\upsilon$ is the number of remainders, which is usually such that $\upsilon \ll k$.

Table 1. OEIS [10] references of sequences of integer solutions of (1.1) for $k = 2, 3, 5, 6, 7, 8$.

Rank and Recurrent Equations

Sequences of solutions of (1.1) are known for $k = 2, 3, 5, 6, 7, 8$ and are listed in the Online Encyclopedia of Integer Sequences (OEIS) [10], with references given in Table 1. Among all solutions, $t = 0$ is always a first solution of (1.1) for all non-square integer values of $k$, yielding $\xi = 0$. Let us consider the two cases $k = 2$ and $k = 7$, yielding the successive solution pairs shown in Table 2. We indicate also the ratios $t_n/t_{n-1}$ for both cases and $t_n/t_{n-2}$ for $k = 7$. It is seen that for $k = 2$, the ratio $t_n/t_{n-1}$ varies between close values, from 7 down to 5.829, while for $k = 7$, the ratio $t_n/t_{n-1}$ alternates between values 2.5 … 2.216 and 7.8 … 7.23, while the ratio $t_n/t_{n-2}$ decreases regularly from 19.5 to 16.023 (corresponding approximately to the product of the alternating values of the ratio $t_n/t_{n-1}$).
We call rank $r$ the integer value such that $t_n/t_{n-r}$ is approximately constant or, better, decreases regularly without jumps (a more precise definition is given further). So, here, the case $k = 2$ has rank $r = 1$ and the case $k = 7$ has rank $r = 2$. In [9], we showed that the rank $r$ is the index of the $t_r$ and $\xi_r$ solutions of (1.1), and that the ratio $t_{2r}/t_r$, corrected by the ratio $t_{r-1}/t_r$, is equal to the constant $2\kappa + 3$. For example, for $k = 7$ and $r = 2$, (2.1) and (2.2) yield respectively $\kappa = 7$ and $2\kappa + 3 = 17$. Four recurrence equations for $t_n$, $\xi_n$, $T_{t_n}$ and $T_{\xi_n}$ are given in [9] for each non-square integer value of $k$:

$$t_n = 2(\kappa + 1)\, t_{n-r} - t_{n-2r} + \kappa \qquad (2.3)$$

where the coefficients are functions of two constants $\kappa$ and $\gamma$, respectively the sum $\kappa = t_{r-1} + t_r$ and the product $\gamma = t_{r-1} t_r$ of the first two sequential values $t_{r-1}$ and $t_r$. Note that the first three relations (2.3) to (2.5) are independent of the value of $k$.

Congruence of $\xi$ modulo $k$

We use the following notation: for $A, B, C \in \mathbb{Z}$, $B < C$, $C > 1$, $A \equiv B \pmod{C}$ means that $\exists D \in \mathbb{Z}$ such that $A = DC + B$, where $B$ and $C$ are called respectively the remainder and the modulus. To search numerically for the values of $t_n$ and $\xi_n$ that solve (1.1), one can use the congruence properties of $\xi \pmod{k}$ given in the following propositions. In other words, we search in the following propositions for expressions of the remainders in function of $k$ or of its factors.

Proposition 1. For $\forall s, k \in \mathbb{Z}^+$, $k$ non-square, $\exists \xi, \mu, \upsilon, i, j \in \mathbb{Z}^+$, such that if $\xi_i$ are solutions of (1.1), then for $\xi_i \equiv \mu_j \pmod{k}$ with $1 \le j \le \upsilon$, the number $\upsilon$ of remainders is always even, $\upsilon \equiv 0 \pmod{2}$, the remainders come in pairs whose sum is always equal to $(k-1)$, and the sum of all remainders is always equal to the product of $(k-1)$ and the number of remainder pairs.

Proof. Let $s, i, j, k, \xi, \mu, \upsilon, \alpha, \beta \in \mathbb{Z}^+$, $k$ non-square, and $\xi_i$ solutions of (1.1).
Rewriting (1.1) as $T_{t_i} = T_{\xi_i}/k$, for $T_{t_i}$ to be integer, $k$ must divide exactly $T_{\xi_i} = \xi_i(\xi_i + 1)/2$, i.e., among all possibilities, $k$ divides either $\xi_i$ or $(\xi_i + 1)$, yielding two possible solutions $\xi_i \equiv 0 \pmod{k}$ or $\xi_i \equiv -1 \pmod{k}$, i.e. $\upsilon = 2$ and the set of $\mu_j$ includes $\{0, (k-1)\}$. This means that $\xi_i$ are always congruent to either 0 or $(k-1)$ modulo $k$ for all non-square values of $k$. Furthermore, if some $\xi_i$ are congruent to $\alpha$ modulo $k$, then other $\xi_i$ are also congruent to $\beta$ modulo $k$ with $\beta = (k - \alpha - 1)$. As $\xi_i \equiv \alpha \pmod{k}$, then $\xi_i(\xi_i + 1)/2 \equiv \alpha(\alpha + 1)/2 \pmod{k}$, and replacing $\alpha$ by $\beta = (k - \alpha - 1)$ gives $\beta(\beta + 1)/2 \equiv (-\alpha - 1)(-\alpha)/2 = \alpha(\alpha + 1)/2 \pmod{k}$, so $\beta$ yields the same residue. In this case, $\upsilon = 4$ and the set of $\mu_j$ includes, but is not necessarily limited to, $\{0, \alpha, (k - \alpha - 1), (k-1)\}$. Note that in some cases $\upsilon > 4$, as for $k = 66, 70, 78, 105, \ldots$, where $\upsilon = 8$. However, in some other cases $\upsilon = 2$ only and the set of $\mu_j$ contains only $\{0, (k-1)\}$, as shown in the next proposition, in which several rules (R) are given constraining the congruence characteristics of $\xi_i$.

(R3): If $k = s^2 + 1$ with $s$ even, the rank is always $r = 2$ [11], and the only two sets of solutions are (3.1) and (3.2), as can be easily shown. For $t_1$, forming the corresponding triangular number yields the triangular number of $\xi_1$; one obtains similarly $\xi_2$ from $t_2$. These two relations (3.1) and (3.2) show respectively that $\xi_1$ is congruent to 0 and $\xi_2$ to $(k-1)$ modulo $k$. (R4): If $k = s^2 - 1$ with $s$ odd, the only two sets of solutions are (3.3) and (3.4), as can be easily demonstrated as above; these two relations show that $\xi_1$ and $\xi_2$ are congruent respectively to $(k-1)$ and 0 modulo $k$. (R5): If $k = s^2 - 2$ with $s$ odd, the only two sets of solutions are (3.5) and (3.6), as can easily be shown as above; these two relations show that $\xi_1$ and $\xi_2$ are congruent respectively to $(k-1)$ and 0 modulo $k$. There are other cases of interest, as shown in the next two propositions.

Proposition 3.
For $\forall n \in \mathbb{Z}^+$, $\exists k, \xi, \mu < k, i, j \in \mathbb{Z}^+$, $k$ non-square, such that if $\xi_i$ are solutions of (1.1) with $\xi_i \equiv \mu_j \pmod{k}$, then (R6) if $k$ is twice a triangular number, $k = n(n+1) = 2T_n$, the set of $\mu_j$ includes $\{0, n, (n^2 - 1), (k-1)\}$, with $1 \le j \le \upsilon$. Finally, this last proposition gives a general expression of the congruence $\xi_i \pmod{k}$ for most cases, to find the remainders $\mu_j$ other than 0 and $(k-1)$.

Proposition 4. For $\forall n > 1 \in \mathbb{Z}^+$, $\exists k, f, \xi, \nu < n < k, \mu < k, m < n, i, j \in \mathbb{Z}^+$, $k$ non-square, let $\xi_i$ be solutions of (1.1) with $\xi_i \equiv \mu_j \pmod{k}$, and let $f$ be a factor of $k$ such that $f = k/n$ with $f \equiv \nu \pmod{n}$ and $k \equiv \nu n \pmod{n^2}$; then the set of $\mu_j$ includes certain multiples of $f$, where $m$ is an integer multiplier of $f$ in the congruence relation, such that $m < n/2$ or $m < (n+1)/2$ for $n$ even or odd respectively, and $1 \le j \le \upsilon$.

Note that 11 of these 16 values of $k$ are multiples of 6; the others are 2 mod 6 and 5 mod 6, for three and two cases respectively. One notices as well that, generally, Ra and Exy supersede Ezt with $x < z$ and $t < y$, except for $k = 60$ and 120.

Conclusions

We have shown that, for indices $\xi$ of triangular numbers multiples of other triangular numbers, the remainders in the congruence relations of $\xi$ modulo $k$ always come in pairs whose sum equals $(k-1)$, always include 0 and $(k-1)$, and only 0 and $(k-1)$ if $k$ is prime, or an odd power of a prime, or an even square plus one, or an odd square minus one or minus two. If the multiplier $k$ is twice the triangular number of $n$, the set of remainders includes also $n$ and $(n^2 - 1)$, and if $k$ has integer factors, the set of remainders includes multiples of a factor following certain rules. Finally, algebraic expressions are found for remainders in function of $k$ and its factors. Several exceptions are noticed as well, and it appears that there are superseding rules between the various rules and expressions.
This approach allows to eliminate in numerical searches those (k − υ) values of ξ i that are known not to provide solutions of (1.1), where υ is the even number of remainders. The gain is typically in the order of k/υ, with υ ≪ k for large values of k.
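These statements lend themselves to direct numerical verification. The sketch below, assuming nothing beyond definition (1.1), brute-forces the solution pairs, checks the rank-1 recurrence (2.3) for k = 2 (where κ = t_0 + t_1 = 0 + 2 = 2, computed here from the first two solutions), and collects the remainders ξ_i mod k for k = 7 (prime, giving {0, k−1}) and for k = 6 = 2T_2 (giving {0, n, n²−1, k−1} with n = 2).

```python
def tri(n):
    """Triangular number T_n = n(n+1)/2."""
    return n * (n + 1) // 2

def solutions(k, t_max):
    """Brute-force pairs (t, xi) with T_xi = k * T_t for t <= t_max."""
    pairs = []
    for t in range(t_max + 1):
        target = 2 * k * tri(t)            # xi(xi + 1) must equal this
        xi = int(target ** 0.5)
        for c in (xi - 1, xi, xi + 1):     # guard against float rounding of sqrt
            if c >= 0 and c * (c + 1) == target:
                pairs.append((t, c))
    return pairs

# Recurrence check for k = 2 (rank r = 1, kappa = t_0 + t_1 = 2):
# t_n = 2(kappa + 1) t_{n-1} - t_{n-2} + kappa = 6 t_{n-1} - t_{n-2} + 2.
ts = [t for t, _ in solutions(2, 3000)]
assert ts == [0, 2, 14, 84, 492, 2870]
assert all(ts[n] == 6 * ts[n - 1] - ts[n - 2] + 2 for n in range(2, len(ts)))

# Remainder sets {xi mod k}, skipping the trivial t = 0 solution.
def remainders(k, t_max=20000):
    return {xi % k for t, xi in solutions(k, t_max) if t > 0}

print(remainders(7))  # prime k    -> {0, 6} = {0, k-1}
print(remainders(6))  # k = 2*T_2  -> {0, 2, 3, 5} = {0, n, n^2-1, k-1}, n = 2
```

In a real search one would iterate over ξ only in the residue classes returned by `remainders`, which is exactly the k/υ speed-up described in the conclusion.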
Antioxidant, cytotoxic, and antidiabetic activities of Dendropanax morbifera extract for production of health-oriented food materials

Antioxidant, cytotoxic and anti-diabetic effects of fermented and non-fermented Dendropanax morbifera extracts were compared to assess the potential utility of this species in the development of health-oriented food. The non-fermented extract (NFDE) was obtained from leaves and branches of D. morbifera, and the fermented extract (FDE) was prepared by inoculation with Lactobacillus plantarum and Lactobacillus brevis after extraction of D. morbifera with distilled water. Antioxidant activity before and after fermentation was assessed via the α,α-diphenyl-β-picrylhydrazyl (DPPH) radical scavenging assay, cytotoxicity was analyzed with the MTT assay using 3T3-L1 cells, and anti-diabetic activity was measured based on inhibition of α-glucosidase activity. The D. morbifera extract exhibited substantial antioxidant activity. Moreover, FDE at 24 h exerted more significant antioxidant effects than NFDE (97.1 vs 89.8%) at a concentration of 5 mg/ml. Comparison of the effects of the non-fermented and fermented extracts on 3T3-L1 cell viability revealed slightly higher cytotoxicity of FDE than NFDE (85 vs 95% viability) at a concentration of 500 μg/ml. Both NFDE and FDE (100 μg/ml) exerted strong α-glucosidase inhibitory effects (98.9 and 97.6%, respectively). In view of the low cytotoxicity coupled with significant antioxidant and anti-diabetic effects, the D. morbifera extract presents a novel candidate for the production of functional anti-diabetic agents with minimal side-effects.
INTRODUCTION

Due to westernized eating habits and lack of exercise, the incidence of obesity and diabetes continues to rise by >10% every year (Xu et al., 2011). Increasing intake of high-calorie meals has resulted in a growing number of patients with metabolic syndrome diseases, such as diabetes and hyperlipidemia. In 2014, diabetes was the sixth most common cause of death in Korea. Diabetes, a type of metabolic disease characterized by hyperglycemia with elevated blood glucose levels, is caused by lack of insulin secretion in pancreatic cells or failure of normal insulin function (Li et al., 2013). In particular, oxidative stress is associated with progression of diabetes and contributes significantly to complications (Brownlee, 2005). Under conditions of long-term persistence of hyperglycemia, reactive oxygen species (ROS) produced during glycosylation of glucose enhance lipid peroxidation and oxidative damage, leading to various diabetic complications, such as hypertension, arteriosclerosis and hyperlipidemia (Sakurai and Tsuchiya, 2006; Lones, 1991; Tai et al., 2000). Continuous efforts to improve metabolic syndromes through ingestion of specific dietary components are underway. Current diabetic treatment includes sulfonylurea, metformin, alpha-glucosidase inhibitors, thiazolidinedione and dipeptidyl peptidase-4 inhibitors, as well as insulin. The chemical drugs currently available for treatment of diabetes cause serious side-effects, highlighting the necessity to develop effective natural remedies. Recently, Dendropanax morbifera has been increasingly cultivated on Jeju Island and some regions of the Korean coastline along the southwestern sea. D.
morbifera, a subtropical broad-leaved evergreen tree belonging to the family Araliaceae, is an economically important species due to its use in the production of golden varnish (Moon et al., 1999; Kim et al., 2006). In addition, its leaves, stems, roots and seeds are traditionally used in folk medicine for skin and infectious diseases, headaches and other maladies (Park et al., 2004). Various beneficial physiological activities of D. morbifera have been documented, such as improvement of lipid abnormalities, diabetic disease, immune activity and thrombosis, as well as protection against kidney damage (Tan and Ryu, 2015; An et al., 2014; Lee et al., 2002; Choi et al., 2015; Kim et al., 2015). The plant is additionally reported to exert a skin whitening effect (Park et al., 2014; Lee et al., 2015), indicative of a variety of physiologically active components. However, limited information is available on the potential anti-diabetic effects of D. morbifera.

Most of the foods using D. morbifera are beverages, which are produced by simple processing, hot water extraction or natural fermentation using sugar. In this study, however, we aimed to develop a health-oriented food material that can be differentiated from similar products through fermentation with lactic acid bacteria, and to evaluate the antioxidant and anti-diabetic activities of the resulting D. morbifera extract. This study focused on evaluation of the antioxidant, cytotoxic and anti-diabetic activities, in particular the α-glucosidase inhibitory activity, of D. morbifera distilled water extracts. Our results may serve as a platform to evaluate the utility of D. morbifera extracts as a nutraceutical source for management of diabetes in the future.

Preparation of D. morbifera extracts

Boughs of D. morbifera were collected from a natural habitat on Jeju Island in February 2016. Samples were dried at room temperature and subjected to the extraction process. The collected D.
morbifera boughs were cut into 1.0 cm length sections. The distilled water extract of D. morbifera (NFDE) was prepared by extraction with 20 volumes of water at 95°C for 4 h and reduced to a powder using the spray-dry method. Fermented D. morbifera (FDE) was prepared as follows: L. plantarum and L. brevis were inoculated in De Man, Rogosa and Sharpe (MRS) broth at 37°C for 24 h and diluted to obtain an initial population of 1-5 × 10^7 CFU/ml. D. morbifera solution (5%) was inoculated with fresh bacterial subculture (4% v/v) for fermentation at 37°C for 24 h, followed by sterilization and filtration. The filtered solution of the fermented sample was concentrated using a rotary evaporator and spray-dried.

Total phenolic assay

The total phenolic content was determined with the Folin-Ciocalteu assay (Singleton and Lamuela-Raventos, 1999) using gallic acid (GA) as the standard. A mixture comprising the sample solution (50 µl), distilled water (3 ml), 250 µl Folin-Ciocalteu's reagent solution, and 7% Na2CO3 (750 µl) was vortexed and incubated for 8 min at room temperature, followed by dilution with 950 µl distilled water. The mixture was allowed to stand for 2 h at room temperature and absorbance was measured at 765 nm against distilled water as a blank. Total phenolic content was expressed as gallic acid equivalents (µg GAE/ml sample) based on a gallic acid calibration curve. The linear range of the calibration curve was 10 to 200 µg/ml (r = 0.99).
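The conversion from A765 readings to µg GAE/ml via a linear gallic acid calibration curve can be sketched as follows. All absorbance values below are invented for illustration and do not come from the paper; only the concentration range (10 to 200 µg/ml) matches the text.

```python
import numpy as np

# Hypothetical Folin-Ciocalteu standard-curve data (A765 readings invented).
std_conc = np.array([10, 25, 50, 100, 200], dtype=float)   # µg/ml gallic acid
std_abs = np.array([0.09, 0.21, 0.41, 0.80, 1.58])         # A765 of standards

# Linear calibration: absorbance = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_abs, 1)
r = np.corrcoef(std_conc, std_abs)[0, 1]                   # correlation, ~0.99+

# Convert a sample absorbance to gallic acid equivalents (µg GAE/ml).
sample_abs = 0.52                                          # hypothetical extract
gae = (sample_abs - intercept) / slope
print(round(gae, 1))
```

A dilution factor would multiply `gae` if the extract were diluted before reading, as in the assay described above.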
Measurement of antioxidant activity of extracts

The antioxidant capacity of extracts was analyzed by measuring free radical scavenging activity using the DPPH assay (Brand-Williams et al., 1995). Samples were prepared at concentrations of 0.1, 1 and 5 mg/ml. Vitamin C treatment was used as the positive control group. After maintaining at room temperature for over 30 min, free radical scavenging activity was determined by mixing with 500 µM DPPH solution (1:1) and incubating in the dark, followed by measurement of absorbance at 517 nm using a spectrophotometer.

Analysis of alpha-glucosidase inhibitory activity

Alpha-glucosidase inhibitory activity of the extract was examined according to a standard protocol with minor modifications (Shai et al., 2011). The reaction mixture containing 50 μl phosphate buffer (100 mM, pH 6.8), 10 μl alpha-glucosidase (1 U/ml) and 20 μl of various concentrations of extract (0, 10, 20, 50 and 100 µg/ml) was preincubated in a 96-well plate at 37°C for 15 min. Next, 20 μl p-nitrophenol (5 mM) was added as a substrate and the mixture incubated further at 37°C for 20 min. The reaction was terminated by the addition of 50 μl Na2CO3 (0.1 M). Absorbance of released p-nitrophenol was measured at 405 nm using a multiplate reader. Acarbose (0.1-0.5 mg/ml) was included as a standard. A control sample without the test substance was set up in parallel, and each experiment was performed in triplicate. Results were expressed as percentage inhibition calculated using the formula:

Inhibitory activity (%) = (1 − As/Ac) × 100

where As represents the absorbance of the test sample and Ac the absorbance of the control.

Statistical analysis

All data are presented as mean ± SD. Differences among treatments were assessed by analysis of variance (ANOVA), followed by Dunnett's test. p values < 0.05 were considered significant.

Total polyphenol content

Two type strains, L. plantarum and L. brevis, were investigated as starter cultures for the fermentation of D.
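The inhibition formula above can be applied directly; the absorbance readings in this sketch are hypothetical, chosen only to show how decreasing p-nitrophenol release maps to increasing percentage inhibition.

```python
# The paper's formula: Inhibitory activity (%) = (1 - As/Ac) * 100,
# with As the sample absorbance and Ac the control (no test substance).
def inhibition_pct(a_sample: float, a_control: float) -> float:
    return (1.0 - a_sample / a_control) * 100.0

a_control = 0.80                      # hypothetical A405 of the control well
for a_s in (0.72, 0.45, 0.02):        # hypothetical A405 with inhibitor added
    print(round(inhibition_pct(a_s, a_control), 1))
# As = Ac gives 0% inhibition; As -> 0 approaches 100% (full inhibition).
```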
morbifera. Following fermentation, total phenolic contents and antioxidant activities of fermented D. morbifera using starter cultures were determined. The total phenolic contents of NFDE and FDE were measured using a standard curve prepared with different concentrations of gallic acid. In the current study, the total phenol contents of NFDE and FDE were determined as 562.44 and 640.03 µg/ml, respectively, and shown to increase with fermentation time (Table 1). The total polyphenol content of FDE was thus about 1.14 times that of NFDE. Earlier, Kang et al. (2011) reported that phenol content is increased by fermentation, at 8.13 and 9.53 mg/ml in the extract and the fermented extract of Maclura tricuspidata, respectively.

DPPH radical scavenging activity of D. morbifera extracts

Comparison of DPPH radical scavenging abilities before and after fermentation according to extract concentration showed higher inhibitory activity of FDE than NFDE (Table 2). NFDE exerted increasing inhibitory effects (10.68, 65.31, and 89.8%) at concentrations of 0.1, 1, and 5 mg/ml, respectively. Within this concentration range, the inhibitory effects of FDE at 24 h were 13.67, 72.61, and 97.1%, respectively. The DPPH radical scavenging activity of FDE was thus around 1.08-1.28 times that of NFDE.

Effects of D. morbifera extracts on 3T3-L1 cell viability

This study aimed to establish whether NFDE and FDE can be used as health-oriented food materials. To determine the effects of the extracts on cell viability, the MTT assay was performed on 3T3-L1 cells treated with 0 to 500 µg/ml NFDE or FDE. The results are expressed as a percentage of surviving test cells relative to the control group (Figure 1). No significant toxicity was observed in differentiated 3T3-L1 cells treated with either the fermented or the non-fermented extract at any of the tested concentrations (0 to 500 µg/ml).
DISCUSSION

In this study, the functional components of D. morbifera fermented with lactic acid bacteria were investigated, and the antioxidant effect of the fermented extracts was analyzed. Polyphenols, originally known as Vitamin P, have various potential health benefits. Polyphenol compounds are widely distributed in medicinal plants. Several physiological properties of these phytochemicals have been reported, including antioxidant and anticancer activities (Liu, 2004; Manach et al., 2005; Kang et al., 2011). Notably, at 72 h of fermentation, the total polyphenol content of FDE (640.03 µg/ml) was higher than that of NFDE (562.44 µg/ml) (Table 1). During fermentation, a number of enzymes, such as protease, amylase and lipase, are secreted, leading to increased levels of phenolic substances and, consequently, elevated antioxidative activity (Manach et al., 2005). The increase in polyphenol content was attributed to an increase in the quantity of phenolic compounds. Phenolic substances impart a unique color to plants and are significantly involved in determining taste. The antioxidative compounds obtained from natural products to date have mainly been identified as phenolic compounds and flavonoids. In particular, caffeic acid, chlorogenic acid and gentisic acid exert strong antioxidative effects (Chung, 1999).

The DPPH radical scavenging activities of NFDE and FDE increased in a concentration-dependent manner, as shown in Table 2. FDE displayed slightly higher DPPH radical scavenging activity than NFDE within the concentration range of 0.1 to 5 mg/ml. The positive control group (Vitamin C) displayed significantly higher radical scavenging activity than D. morbifera at equivalent concentrations. In a study by Jeon et al. (2011) comparing treatment with extracts of ginseng and lactic acid-fermented ginseng (0.1 and 1.0%), activity was increased from 24.85 and 49.78% to 54.30 and 86.36%, respectively. This study concluded that fermentation of D.
morbifera with lactic acid bacteria is feasible and effective in increasing the antioxidant effects of D. morbifera. Kang et al. (1995) reported enhanced DPPH radical scavenging antioxidant activity by phenolic compounds with greater reducing power. Moreover, in their study, FDE with high total polyphenol content displayed high DPPH radical scavenging activity, further supporting a correlation between phenolic compounds and DPPH radical scavenging ability. The electron donating ability via radical scavenging of DPPH contributes to the antioxidant activity of phenolic substances.

To examine the cytotoxicity of D. morbifera to 3T3-L1 cells, the MTT assay was performed using various extract concentrations (0-500 µg/ml) before and after fermentation. The results are shown in Figure 1. At the highest treatment concentration of non-fermented D. morbifera extract (500 µg/ml), viability of 3T3-L1 cells was 95%, indicating no significant inhibition of cell survival. At the same concentration of fermented extract, cell viability was 85%, suggesting that fermentation broth maintaining a concentration of 500 µg/ml extract can effectively enhance cell growth without inducing toxicity. As a result, all activity experiments were conducted with concentrations of up to 500 µg/ml extract.

The α-glucosidase enzyme ultimately converts to monosaccharides the polysaccharides first broken down by α-amylase. Inhibition of these enzymes results in delayed carbohydrate hydrolysis and absorption, thereby improving the postprandial glucose increase. Inhibitors of α-glucosidase activity block digestion and absorption of carbohydrates in the small intestine regardless of insulin secretion, thereby reducing the side-effects of existing drugs, such as hepatotoxicity and dysregulation of pancreatic function. Diabetes mellitus is divided into insulin-dependent and insulin non-dependent subtypes. Current treatments include control of weight and diet, along with administration of insulin, sulfonylurea and biguanide. However, the development of effective anti-diabetic diets using natural products that do not exert side-effects remains an urgent clinical requirement.
Acarbose is a typical inhibitor of α-glucosidase that has recently been developed for use in the treatment of diabetes. With a view to controlling blood glucose levels in patients with type 2 diabetes, inhibition of α-glucosidase by D. morbifera was examined as an indicator of anti-diabetic activity. As shown in Figure 2, a dose-dependent inhibitory effect on α-glucosidase was observed. Administration of 50 µg/ml acarbose, currently marketed as a diabetic improver, led to 43.8% inhibition of α-glucosidase activity. Inhibitory activities of 10 µg/ml NFDE and FDE on α-glucosidase were equivalent to that of 50 µg/ml acarbose. At concentrations of 20, 50 and 100 µg/ml, NFDE and FDE exerted higher inhibitory activity than acarbose, supporting their utility as natural materials for the improvement of diabetes mellitus.

In conclusion, the antioxidant, low-cytotoxicity and α-glucosidase inhibitory effects of extracts of D. morbifera support its utility as a potential candidate for the development of natural anti-diabetic agents with minimal side-effects.

Figure 2. Inhibitory effects of NFDE and FDE on α-glucosidase activity. Results are presented as means ± SD of experiments performed in triplicate. Acarbose (50 µg/ml) was used as a positive control.

Table 1. Total polyphenol contents in distilled water and fermented extracts of D.
morbifera.

Cells were seeded at 1 × 10^4 cells per well. Cells were treated with 200 µl NFDE and FDE at a range of concentrations (0, 50, 100, 200, 300, and 400 µg/ml) and incubated at 37°C for 4 h in 5% CO2. Cell viability was determined according to the protocol provided by the manufacturer. MTT reagent (20 µl) was added to individual wells and incubated under similar conditions for 1 h. Absorbance of plates was read at 490 nm in a microplate reader. The number of viable cells was directly proportional to absorbance.

Table 2. DPPH radical scavenging activity (%) of distilled water and fermented extracts of D. morbifera

Figure 1. Effects of NFDE and FDE on viability of 3T3-L1 cells. Cells were seeded at a concentration of 1 × 10^4 cells/well in a 96-well plate and differentiation allowed for 4 days following treatment with a range of concentrations of NFDE and FDE. Following harvesting, cytotoxicity was determined with the MTT assay. Results are presented as means ± SD of experiments performed in triplicate.
The Two Scaling Regimes of the Thermodynamic Uncertainty Relation for the KPZ-Equation

We investigate the thermodynamic uncertainty relation for the (1+1) dimensional Kardar-Parisi-Zhang equation on a finite spatial interval. In particular, we extend the results for small coupling strengths obtained previously to large values of the coupling parameter. It will be shown that, due to the scaling behavior of the KPZ equation, the TUR product displays two distinct regimes which are separated by a critical value of an effective coupling parameter. The asymptotic behavior below and above the critical threshold is explored analytically. For small coupling, we determine this product perturbatively including the fourth order; for strong coupling we employ a dynamical renormalization group approach. Whereas the TUR product approaches a value of 5 in the weak coupling limit, it asymptotically displays a linear increase with the coupling parameter for strong couplings. The analytical results are then compared to direct numerical simulations of the KPZ equation, showing convincing agreement.

Introduction

Over the last years there has been remarkable progress in field theory with regard to the Kardar-Parisi-Zhang (KPZ) dynamics [1] on the one hand and in stochastic thermodynamics with respect to the thermodynamic uncertainty relation (TUR) on a discrete set of states [2,3,4] on the other hand. The KPZ equation is a paradigmatic example of a growth equation displaying non-equilibrium dynamics, while the TUR bounds the entropy production through fluctuation and mean of any current. For a recent excerpt of the former see, e.g., [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]. With regard to the latest achievements for the TUR see, e.g., [20,21,22,23,24]. In [25,26] these two areas have been connected by formulating a general field-theoretic TUR and evaluating it explicitly for the KPZ equation analytically as well as numerically in a certain scaling regime.
For other field-theoretic formulations of stochastic thermodynamic concepts, in particular the stochastic entropy production, see, e.g., [27,28,29]. The derivation of the KPZ-TUR in [25] relies on a perturbational approximation in a small effective coupling parameter of the KPZ non-linearity. This approach is quite generally applicable to stochastic field-theoretic overdamped Langevin equations. However, it is intrinsically limited to the linear scaling regime of the respective partial differential equation [30]. In the case of the KPZ equation, this is the so-called Edwards-Wilkinson (EW) scaling regime [31,32]. The aim of the present paper is to extend the results from [25], valid in the EW scaling regime of the KPZ equation, to the genuine KPZ scaling regime. This requires an approach that holds for arbitrary values of the effective coupling strength of the KPZ non-linearity. For equal-time correlation functions such a generalization is possible by using the exactly known stationary probability density functional of the 1d KPZ equation. For two-time correlation functions, however, this approach is not feasible and thus different methods have to be used. In the present case we use two different ways of calculating this type of correlation function. The first one is the perturbational approximation introduced in [25]; as a second one we employ the dynamic renormalization group (DRG) approach. The former applies to the EW scaling regime, while the latter covers the genuine KPZ scaling regime. Hence, a combination of these methods enables us to analytically express the KPZ-TUR for arbitrary values of the effective coupling parameter. These results are compared with numerical simulations based on the method from [26]. This comparison will show convincing agreement between the theoretical predictions and the numerical results. The paper is organized as follows.
In section 2 we give a brief overview of the problem at hand and introduce the necessary notions for the formulation of the KPZ-TUR. Section 3 deals with the derivation of exact results valid for arbitrary coupling strength. In particular, we utilize the stationary state probability density functional of the (1+1) dimensional KPZ equation to calculate equal-time correlation functions entering the KPZ-TUR via functional integration. In section 4 we concisely state the scaling behavior of the KPZ equation, as this will be relevant for the calculation of temporal two-point correlation functions. Sections 5 and 6 cover the calculation of a specific two-time correlation function via perturbational approximation and DRG, respectively. The combination of the results obtained in the prior sections yields the KPZ-TUR for arbitrary coupling strength, which is given in section 7. The comparison of the analytically obtained theoretical predictions to numerical data is shown in section 8. We summarize our results in section 9.

The Problem

In this section we briefly introduce the KPZ equation and the TUR, and give a short summary of the results obtained in [25] which link the two topics. At the end of the section we outline the steps to be taken in order to extend the results from [25,26] to arbitrary coupling strength. We begin with stating the KPZ equation in the form needed for our analysis, i.e., the (1+1) dimensional Kardar-Parisi-Zhang equation on a finite interval x ∈ [0, b], b > 0, given by

∂_t h(x, t) = ν ∂_x² h(x, t) + (λ/2) [∂_x h(x, t)]² + η(x, t).   (1)

Here ν represents the surface tension, λ is the coupling parameter of the non-linearity and η Gaussian space-time white noise with zero mean and autocorrelation ⟨η(x, t) η(x', t')⟩ = ∆_0 δ(x − x') δ(t − t'). We further assume periodic boundary conditions for (1), i.e., h(0, t) = h(b, t), and a flat initial condition h(x, 0) = 0 (see also [25]). The thermodynamic uncertainty relation for a non-equilibrium steady state (NESS) was originally proposed for Markovian networks [2].
It gives a lower bound on the total entropy production ⟨∆s_tot⟩ needed to provide a certain precision ε² of a process in such a network. It reads

⟨∆s_tot⟩ ε² ≥ 2,   (2)

where ⟨·⟩ denotes averages with respect to the noise history. Here, ⟨∆s_tot⟩ = σ t in the stationary state, with σ the entropy production rate, and ε² = 2D/(j² t), with D the diffusivity and j an arbitrary NESS current. Later, the TUR (2) was proven to hold for a Markovian dynamics on a discrete set of states [3] and for overdamped Langevin dynamics [33]. Recently, the TUR in (2) was extended to a general field-theoretic overdamped Langevin equation [25] and exemplified with the (1+1) dimensional KPZ equation from (1). For the KPZ equation it was found via a perturbative calculation of ⟨∆s_tot⟩ and ε², as well as by direct numerical simulation [26], that Q ≈ 5 for small values of the effective coupling parameter λ_eff from (3). What is meant by 'small' will be specified below in section 4. According to [30], such a perturbative approach to a non-linear PDE like (1) will yield results expected to be valid in the linear scaling regime of the non-linear equation. Hence, the results from [25,26] are valid in the so-called Edwards-Wilkinson (EW) scaling regime of the KPZ equation. In the present paper, we will derive the field-theoretic analog of (2) (see [25]) for arbitrary values of λ_eff and thus extend the range of validity from the EW scaling regime to the genuine KPZ scaling regime. The terminology will be explained in more detail in section 4. The expressions for the constituents of the TUR used in this paper are derived in [25] and read, respectively, as the total entropy production and as the precision (5). Here, Ψ_g describes the time-integrated generalized current, with g(x) ∈ L²(0, b) (∫₀ᵇ dx g(x) ≠ 0) as an arbitrary weight function. As it was shown in [25] that Q does not depend on the choice of g(x), we will set g = 1 in the following, i.e., Ψ(t) as defined in (7). Hence, (5) becomes the corresponding expression with Ψ(t) from (7).
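As a minimal numerical illustration (not taken from [25]), the bound (2) is saturated by a freely drifting Brownian particle, for which ⟨∆s_tot⟩ = v²t/D and ε² = 2D/(v²t), so that the product equals 2 exactly. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
v, D, t = 1.0, 0.5, 10.0           # drift, diffusivity, observation time
n = 200_000                        # number of independent trajectories

# Endpoint of drifted Brownian motion: X_t = v t + sqrt(2 D t) * N(0, 1),
# so <X_t> = v t and var(X_t) = 2 D t.
X = v * t + np.sqrt(2 * D * t) * rng.standard_normal(n)

eps2 = X.var() / X.mean() ** 2     # squared relative uncertainty of the current
ds_tot = v**2 * t / D              # total entropy production (units of k_B)
Q = ds_tot * eps2                  # TUR product; the bound (2) is saturated here
print(Q)                           # close to 2
```

For the KPZ current the analogous product is the quantity Q studied in the text, which exceeds this minimal value of 2.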
In the stationary state we have ⟨Ψ(t)⟩ = J t, with J the stationary current. In the following we derive explicit expressions for ⟨∆s_tot⟩, ⟨Ψ(t)⟩ and var[Ψ(t)]. The first two, namely ⟨∆s_tot⟩ and ⟨Ψ(t)⟩, are given by equal-time correlation functions. These correlation functions may thus be calculated in the stationary state via functional integration over the stationary state probability density of the (1+1) dimensional KPZ equation (see section 3). The variance of Ψ(t) is, on the other hand, given by a temporal two-point correlation function, which requires more knowledge than the stationary state probability density. We show below two different ways to obtain var[Ψ(t)]. The first uses a perturbation expansion in λ_eff from (3) (see section 5) and the second employs dynamic renormalization group (DRG) techniques (see section 6).

Normalized Stationary Distribution

For the (1+1) dimensional KPZ equation the stationary probability density functional of the height field h(x, t) is known exactly [6,32,34] and reads

p_s[h] ∝ exp[−(ν/∆_0) ∫₀ᵇ dx (∂_x h(x, t))²].   (9)

Note that (9) is identical to the steady state solution of the Fokker-Planck equation for the linear problem, i.e., for the EW equation [31]. In the following, we want to use (9) to calculate equal-time steady-state correlation functions. Hence, (9) needs to be properly normalized. The normalization is obtained by expressing h(x, t) in terms of its Fourier series (see e.g. [25])

h(x, t) = Σ_k h_k(t) e^{2πikx/b},   (10)

where h_k(t) ∈ C, and inserting (10) into (9). The introduction of a finite Fourier cutoff Λ ensures the normalizability of (9). A subsequent functional integration of (9) over h then yields the normalization, where h_{R/I,j}(t) represents the real/imaginary part of h_j(t), respectively. With the properly normalized density, the Gaussian equal-time averages (13) and (14) follow, with µ_k = −4π²νk² (see also [25]). Note that the equality (15) of these functional-integral averages with averages ⟨·⟩ over the noise history is expected to hold. We will show this explicitly in the case of ⟨Ψ(t)⟩ and ⟨∆s_tot⟩ below.
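Because p_s[h] is Gaussian in the Fourier modes, all equal-time averages reduce to Wick contractions of second moments. The underlying identity for zero-mean Gaussian variables, ⟨x²y²⟩ = ⟨x²⟩⟨y²⟩ + 2⟨xy⟩², is quick to verify numerically; the covariance matrix below is an arbitrary illustrative choice.

```python
import numpy as np

# Monte Carlo check of Wick's theorem for zero-mean Gaussian variables.
rng = np.random.default_rng(2)
cov = np.array([[2.0, 0.7],
                [0.7, 1.0]])       # illustrative covariance matrix
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000).T

lhs = np.mean(x * x * y * y)                      # sampled <x^2 y^2>
rhs = cov[0, 0] * cov[1, 1] + 2 * cov[0, 1] ** 2  # <x^2><y^2> + 2<xy>^2
print(lhs, rhs)
```

The same pairing rule, applied mode by mode, is what turns the functional integrals over p_s[h] into the explicit sums of the next section.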
Exact Stationary Current and Entropy Production Rate

For the steady state current J, where ⟨Ψ(t)⟩ = J t, we get the expression (16). The second step there follows from a spatial integration of (1) with a subsequent averaging with respect to p_s[h], and the last step uses Parseval's identity and (14). The result in (16) has already been derived in [25] as the lowest order approximation of a perturbation expansion in λ_eff, where the l.h.s. of (15) was used for calculating expectation values. It is instructive to examine why the lowest order approximation is in fact exact. This can best be seen by studying the structure of the perturbation expansion of ∂_t⟨Ψ(t)⟩. Terms with an even power of λ_eff vanish as they represent odd moments of the Gaussian noise η, whereas terms with odd power greater than 1 vanish by exact cancellation of the involved moments.

For the steady-state entropy production rate σ, with ⟨∆s_tot⟩ = σ t, it is found with (10), using Wick's theorem and (14), that σ takes the form (17), where R_k ≡ [max(−Λ, −Λ + k), min(Λ, Λ + k)] (see [25]). Again, a comparison of (17) with the corresponding result from [25] shows that the lowest order perturbational approximation is also exact for the case of the entropy production rate σ in 1d. Thus, by using (13), we can calculate for the (1+1) dimensional KPZ equation the exact expressions for the stationary current J (see (16)) and the entropy production rate σ (see (17)) for arbitrary values of the coupling parameter. This implies that two of the three constituents of the TUR product Q are known exactly. Hence, we state the intermediate result (18). In the following sections we present two different approaches to obtain results for var[Ψ(t)] in order to complement (18).

Scaling Behavior of the KPZ Equation

Figure 1. Schematic behavior of the variance of Ψ(t) for t_c^KPZ > t_EW→KPZ, for a finite KPZ system (see, e.g., [32]).

In contrast to the results in the previous section, which hold for any choice of system parameters, the variance of Ψ(t) changes its behavior depending on the strength of the coupling parameter from (3).
To illustrate this in more detail, it is instructive to have a look at the time scales at which the changes in the variance occur. In the case of a large coupling parameter, these time scales are the EW to KPZ crossover time t_EW→KPZ, given in (19) [32], and the KPZ correlation time t_c^KPZ, given in (20) [32]. In Fig. 1, we show schematically the behavior of the variance of Ψ(t) if t_c^KPZ > t_EW→KPZ. For times t < t_EW→KPZ the system is in the so-called Edwards-Wilkinson regime, characterized by the critical exponent z = 2 of the linear theory. In this scaling regime, the variance of Ψ(t) is expected to scale linearly in time t [32]. For times in the range t_EW→KPZ < t < t_c^KPZ, the system is in its transient regime. This regime belongs to the KPZ scaling regime, characterized by the KPZ critical exponent z = 3/2. While in the transient regime, the variance is predicted via scaling arguments to scale as t^(4/3), i.e., it displays super-diffusive behavior [32]. For times t > t_c^KPZ the system enters the KPZ stationary regime, where the variance is again expected to scale linearly in time t. However, due to the super-diffusive behavior in the transient regime, the proportionality factor is larger in the stationary KPZ regime than in the EW scaling regime [32]. In the following we will refer to the above described behavior as the behavior for the 'normal' ordering of time scales, namely t_c^KPZ > t_EW→KPZ. Before we discuss the case of t_EW→KPZ > t_c^KPZ, let us reformulate the two time scales in (19) and (20) by expressing both in terms of the effective coupling parameter λ_eff from (3). To this end we introduce a dimensionless time t_s = t/T with the diffusive time scale T = b²/ν.
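The three regimes sketched in Fig. 1 can be summarized by a schematic, continuity-matched variance function. The crossover times and amplitudes below are illustrative placeholders, not the expressions (19) and (20); the point is only that a t^(4/3) transient forces a larger linear slope in the stationary KPZ regime than in the EW regime.

```python
t1, t2 = 1.0, 8.0          # stand-ins for t_EW->KPZ and t_c^KPZ, with t2 > t1

def var_psi(t: float) -> float:
    """Schematic variance of Psi(t), matched continuously across regimes."""
    if t <= t1:
        return t                                   # EW regime: ~ t
    if t <= t2:
        return t1 * (t / t1) ** (4.0 / 3.0)        # transient: ~ t^(4/3)
    v2 = t1 * (t2 / t1) ** (4.0 / 3.0)             # value at the second crossover
    return v2 * (t / t2)                           # KPZ NESS: ~ t, larger slope

slope_ew = var_psi(0.5) / 0.5                      # slope in the EW regime
slope_kpz = (var_psi(20.0) - var_psi(10.0)) / 10.0 # slope in the KPZ NESS
print(slope_ew, slope_kpz)                         # the second exceeds the first
```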
This yields the dimensionless forms (21) and (22) of the EW to KPZ crossover time and of the KPZ correlation time, respectively. The form of (21) and (22) indicates the existence of a critical effective coupling parameter λ_eff^c below which the 'normal' ordering of time scales breaks down, i.e., t_s^EW→KPZ > t_{c,s}^KPZ. One may think of this as shrinking the transient regime in Fig. 1 to zero; thus, equating (21) with (22) and solving for λ_eff yields the critical value (23). In dependence of the critical effective coupling parameter we then have the ordering (24). Hence, the behavior of var[Ψ(t)] sketched in Fig. 1 is valid as long as λ_eff > λ_eff^c. We now turn to the behavior of the variance of Ψ(t) for λ_eff < λ_eff^c. In this case we have t_s^EW→KPZ > t_{c,s}^KPZ, which is physically not sensible, as this implies that the system would have to become stationary in the KPZ regime before even crossing over from the EW to the KPZ regime. This situation is resolved by taking the EW correlation time t_c^EW into account, given in (25) [32]. As can easily be seen, λ_eff < λ_eff^c implies that t_{c,s}^EW < t_s^EW→KPZ, hence the system becomes stationary in the EW scaling regime. Therefore, if λ_eff ≪ λ_eff^c, its dynamical behavior will be governed for all times t by the critical exponent z = 2 of the linear theory. For λ_eff ↑ λ_eff^c the behavior of the variance will change from the one in the linear theory to the one predicted for the KPZ equation and should be accessible by a perturbation expansion in λ_eff up to λ_eff ≈ λ_eff^c. Note that when we state that λ_eff is 'small', we mean λ_eff < λ_eff^c.

Perturbation Expansion

As stated in the above section, for values of λ_eff ≲ λ_eff^c we expect to obtain the correct behavior of the variance of Ψ(t) via a perturbation expansion in λ_eff. As the analysis follows the one in [25], we will be brief here and focus on the results instead of the technical details. Note that in this section we use the scaled version of the KPZ equation, which is obtained by the introduction of scaled variables according to [25].
This significantly simplifies the perturbation expansion. In the scaled variables the perturbative ansatz reads as in (27), where the Fourier coefficients h_k^(i)(t) are given in [25]. In the following we will use (27) together with (29), which is obtained by integrating the dimensionless form of (1), given in [25], with respect to the spatial variable. We use (29) to calculate the two-time correlation function ⟨Ψ̇(t)Ψ̇(t')⟩, which is determined by the mode correlations in (31). In principle any correlation of the Fourier coefficients h_k can eventually be expressed by correlations of the h_k^(i) from (27), which depend linearly on the Gaussian noise η [25] and thus allow for the application of Wick's theorem. In practice, however, this results in a quickly growing complexity of the calculation for higher order approximations in λ_eff. A possible circumvention of this issue is the physical assumption of so-called quasi-normality. This assumption has been successfully used in turbulence theory [35,36] and has been adopted in [37] for the height field h(x, t) of the KPZ equation. The quasi-normality hypothesis states that all even moments of h are assumed to behave as if they were normally distributed, and thus Wick's theorem may directly be applied to (31). At least for large times t', t the assumption is supported by the fact that h(x, t) is exactly Gaussian distributed in the NESS (see section 3). Hence, applying Wick's theorem to (31) yields (32). Replacing the h_j's in (32) with the expansion from (27), integrating twice over time and following the same steps as in [25], one obtains for t ≫ 1 an expression for the variance which is calculated to one order higher than in [25]; S_1 and S_2 are abbreviations for the respective sums appearing in its first line. Next we evaluate S_1 and S_2 analytically in the limit of large Λ. For S_1 we find an expression in terms of H_Λ^(n) = Σ_{k=1}^Λ 1/k^n, the so-called generalized harmonic number of order n, and the Riemann zeta function ζ.
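The large-Λ evaluation of S_1 rests on the convergence H_Λ^(n) → ζ(n) for n > 1, with a remainder of order Λ^(1−n). This is quick to check numerically; the cutoff Λ and the tolerances are illustrative.

```python
import math

def gen_harmonic(lam: int, n: int) -> float:
    """Generalized harmonic number H_lam^(n) = sum_{k=1}^{lam} 1/k^n."""
    return sum(1.0 / k**n for k in range(1, lam + 1))

lam = 100_000
H2 = gen_harmonic(lam, 2)
H3 = gen_harmonic(lam, 3)

zeta2 = math.pi**2 / 6          # zeta(2)
zeta3 = 1.2020569031595943      # zeta(3), Apery's constant

# Remainders: zeta(2) - H2 is O(1/lam), zeta(3) - H3 is O(1/lam^2).
print(zeta2 - H2, zeta3 - H3)
```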
For S_2 we find, after some straightforward algebraic manipulation, a representation as a double sum (35). The inner sum over m in the second line of (35) may be approximated by the model a − b k/Λ, with a, b free fit parameters, which we estimated as in (36). Inserting (36) into (35) and taking Λ ≫ 1 leads to (37). Thus, in the case of large Λ, we have the asymptotic behavior (t ≫ 1) of the variance of Ψ given in (38), or, in terms of the rescaled, dimensional variables, in (39). We expect the approximations in (38) and (39) to yield sound results for λ_eff ≲ λ_eff^c from (23). In section 8 this will be checked by comparison with numerical simulations in the according parameter regime. In the next section, we will focus on obtaining the variance of Ψ(t) for large values of λ_eff via a DRG approach.

The 1D KPZ-Burgers Equation and var[Ψ(t)]

In this section we use the equivalence of the 1d KPZ equation to the stochastic Burgers equation (see e.g. [38]), given by the transformation u(x, t) = −∂_x h(x, t), with u(x, t) the velocity field of the Burgers equation

∂_t u(x, t) + λ u(x, t) ∂_x u(x, t) = ν ∂_x² u(x, t) + f(x, t),   (40)

where f(x, t) = −∂_x η(x, t). In terms of the Burgers velocity field u(x, t), the expression for Ψ̇(t) from (29) reads as in (41). In principle, the derivation of the expression for the variance of Ψ(t) is analogous to the one shown in section 5. However, here we will use the continuous Fourier transform instead of the discrete Fourier series as above, since a continuous wavenumber spectrum is needed for implementing the DRG scheme. In particular, we define in (42) the forward and backward Fourier transform of the velocity field u(x, t), respectively. To apply (42) to (41), we use the b-periodicity of u(x, t) due to the periodic boundary conditions in (1). In particular, we obtain the representation (43), where the second step holds for b ≫ 1 and in the last step we used the partial Fourier transform (42) in the spatial variable x. We thus obtain (44) and (45), and therefore, similar to (28), the expression (46). The expressions in (45) and (46) again rely on the quasi-normality hypothesis [37,35].
Two-Point Correlation Function via DRG

Instead of calculating the two-point correlation functions in (46) perturbatively as in section 5, we here use the DRG method described in e.g. [39,40], with noise correlations corresponding to Gaussian white noise for the KPZ equation (i.e., y = −2 in the notation of [39,40]). The starting point of the DRG procedure is the Fourier-space representation of (40), namely (48), where we define the bare propagator. The next step is to split the velocity field in (48) into large-wavenumber modes, u_>, and small-wavenumber modes, u_< (see e.g. [35,39]), with l the renormalization parameter and Λ_0 an ultraviolet wavenumber cutoff. An analogous splitting applies to the noise f(q, ω) as well. Averaging the ensuing equations with respect to the noise history of the f_>-modes and integrating out the contributions of the large-wavenumber modes u_> yields corrections to the terms of the small-wavenumber modes u_<. As these steps are well known and explained in detail in e.g. [35,39], we simply state the results, namely the renormalization equations for ν and ∆_0 obtained after one elimination step. This mode-elimination process is iterated using infinitesimally small wavenumber increments (l → dl), which causes parameter changes dν and d∆_0. One thus arrives at the differential equations (53) and (54) for ν(l) and ∆_0(l), respectively, where Λ(l) ≡ Λ_0 e^{−l} (see e.g. [40]) and λ(l) denotes the renormalized coupling constant characteristic of the eliminated modes. Up to a constant numerical prefactor, λ(0) equals λ_eff from (3). At this point we adopt a DRG scheme introduced in [41,42] and analyzed in [43], which has recently been applied in [44,45]. The next step of this scheme consists in solving (53) and (54) for ν and ∆_0 explicitly, making their scale dependence transparent. It follows directly that (55) and (56) hold, with ν, ∆_0 and Λ_0 the unrenormalized parameters from (40) and (47).
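The splitting into small- and large-wavenumber modes can be pictured concretely as a Fourier filter at the running cutoff Λ_0 e^{−l}. Below is a hedged numpy sketch; the test field, grid, and cutoff values are invented purely for illustration:

```python
import numpy as np

def split_modes(u, dx, cutoff):
    """Decompose a periodic 1d field into small-wavenumber modes u_<
    (|q| <= cutoff) and large-wavenumber modes u_> (|q| > cutoff)."""
    q = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)  # wavenumber grid
    u_hat = np.fft.fft(u)
    low = np.abs(q) <= cutoff
    u_lt = np.fft.ifft(np.where(low, u_hat, 0)).real   # u_< modes
    u_gt = np.fft.ifft(np.where(low, 0, u_hat)).real   # u_> modes
    return u_lt, u_gt

# Toy field: one slow (q = 1) and one fast (q = 20) component.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(20 * x)
Lam0, l = 30.0, np.log(4.0)                # running cutoff Lam0 * exp(-l) = 7.5
u_lt, u_gt = split_modes(u, x[1] - x[0], Lam0 * np.exp(-l))
# u_lt now carries the q = 1 mode, u_gt the q = 20 mode.
```

Increasing l shrinks the cutoff, so ever more modes move from u_< into the eliminated u_> sector, which is the picture behind the iterated mode elimination described above.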
The finding in (56) reflects the fluctuation-dissipation theorem, known to hold for the 1d Burgers-KPZ system (see e.g. [32,39,46]). Using (55) and (56), the integration of (53) yields (57). As a last step we make the common identification |q| = Λ_0 e^{−l} (see e.g. [35,39,40,41,42,44,45]) and obtain asymptotically, for large values of l (i.e., |q| ≪ 1), the expressions (58) and (59). Equivalently, λ(l) converges for l → ∞ (i.e., after all large-wavenumber modes are eliminated) to a finite stable fixed point, the KPZ fixed point of the RG flow. This fixed point is associated with the dynamical scaling exponent z = 3/2. According to [40], the expressions from (58) and (59) allow for the introduction of a renormalized effective propagator, (60), and a renormalized effective noise, (61), such that the nonlinear equation from (48) may be replaced by an effective linear Langevin equation (62). Note that in (62), as opposed to (48), the right-hand side now depends on ν(q) and ∆_0(q) from (58) and (59), respectively. A justification of this step is given in [40,41,42,43] via the so-called ε-expansion. In the present case, a further justification may be given by the fact that for large times the fluctuations of h(x, t) become Gaussian distributed. This indicates that their dynamics may be described by a linear Langevin equation as in (62). An analogous conclusion has been drawn for a slightly different setting in [45].

DRG Results for var[Ψ(t)]

Performing a backward Fourier transformation in frequency on both sides of (63) and inserting for C(q, ω) the expression from (65) leads to (67) as the approximate two-point correlation of u = −∂_x h in wavenumber space. With (67) we now have the necessary means to calculate the product of two-point correlation functions in (46). In particular, we obtain (68), where in the second step we substituted q = 2πk/b to account for the fact that we operate on a finite system size x ∈ [0, b], which implies an explicit length scale; Γ denotes the Euler gamma function.
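The dynamical exponent z = 3/2 quoted above can be read off the renormalized propagator: with ν(q) ∝ |q|^{−1/2}, the relaxation rate of mode q scales as ν(q)q² ∝ |q|^{3/2}. A small sketch verifying this log-log slope numerically (the prefactor A is arbitrary and not a value from the paper):

```python
import math

# Renormalized effective viscosity nu(q) ~ A * |q|**(-1/2); the relaxation
# rate of mode q in the effective linear Langevin equation is nu(q) * q**2.
A = 1.0
rate = lambda q: A * abs(q) ** (-0.5) * q ** 2   # scales as |q|**(3/2)

# The log-log slope between two small wavenumbers is the dynamical exponent z.
q1, q2 = 1e-3, 1e-1
z = (math.log(rate(q2)) - math.log(rate(q1))) / (math.log(q2) - math.log(q1))
print(z)  # 1.5
```

The slope is exactly 3/2, i.e., relaxation times grow as τ(q) ∝ |q|^{−3/2}, slower than the diffusive |q|^{−2} of the EW regime.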
Analogously, the second term in (46) yields (69). Inserting (68) and (69) leads to (70) and therefore, in the long-time limit, to (71), which indicates super-diffusive behavior of the variance of Ψ(t). This expression is in accordance with the scaling form predicted in [32] for the transient KPZ regime. Moreover, the present DRG approach yields the amplitude factors as well. We use (71) in the time range t_EW→KPZ < t < t_c^KPZ and check in the following for consistency with known results at the endpoints of this time interval. For times t ≥ t_c^KPZ, the variance of Ψ(t) behaves as var[Ψ(t)] = C·t (see Fig. 1), with C a parameter to be determined. Hence, we have the matching condition (72). Inserting t_c^KPZ from (20) into (72) and solving for C yields (73). This expression may be compared with a result in [32] for the quantity W_c^2, which differs from var[Ψ(t)] only by a factor of 1/b^2 and is given by (74), with c_0 a universal scaling amplitude. Apart from the prefactor c_0, the result in (73) is the same as the one in (74), in particular with respect to the anomalous scaling in b. The value of c_0 was determined in [47] for the ASEP process and then adopted in [32] by relying on the universality hypothesis. The exact value of c_0 is given in (75). We thus deviate from the exact result for c_0 by roughly 20%, which we regard as satisfactory for our consistency check. Moreover, we note that our numerical simulations in section 8 indicate that the numerical values resulting from the theoretical prediction of t_c^KPZ from (20) are too small by roughly a factor of 2 (see subsection 8.3). Taking this into account, the correspondence between the numerical values from (73) and (75) improves significantly (see (95)). At the left endpoint of the transient KPZ regime, i.e., at t = t_EW→KPZ, consistency may be checked by comparing (71) with the perturbation expansion from (39) for λ_eff ≈ λ_eff^c. This makes sense since, on the one hand, we know that t_c^KPZ ≫ t_EW→KPZ provided that λ_eff ≫ λ_eff^c.
On the other hand, the expansion from (39) is expected to be valid for λ_eff ≲ λ_eff^c. Thus, with (71) we get (76), whereas evaluating (39) at λ_eff = λ_eff^c ≈ 12.28 from (23) results in (77). Hence, the respective results differ by just 5%. Taking into consideration that both (39) and (71) are approximations, this seems a reasonable match. To sum up, this section was devoted to the derivation of analytical expressions approximating var[Ψ(t)] in the transient and steady-state KPZ regimes, respectively. Whereas the result for the latter is essentially known from [32], the result for the former seems to be new. We stress that all amplitude factors are determined by analytic calculations for a generic KPZ system, i.e., without invoking specific model problems of the KPZ universality class. Furthermore, our approximation (67) for the two-point correlation of u = −∂_x h in wavenumber space may be of some interest in itself, since the exact scaling function found in [9] for the 1d KPZ equation is given via the solution of certain differential equations (Painlevé II), which can be solved only by quite involved numerical methods; an exact analytic expression seems to be out of reach.

Thermodynamic Uncertainty Relation

Before we formulate the TUR for an arbitrary value of the coupling parameter, let us collect what we have derived for the variance of Ψ(t) in the sections above. Consider first the parameter regime where λ_eff < λ_eff^c. Here we know (78) from (38), where, for t_c^KPZ ≪ t, we chose the exact numerical value of √π/4 for the universal amplitude c_0 [32]. The behavior for t ≪ t_EW→KPZ may be obtained in various ways. For one, we could take the short-time limit of (70). Alternatively, we know from the scaling arguments presented in Fig. 1 that for these times the system is governed by the EW scaling regime, which implies normal diffusive behavior according to the EW equation.
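For orientation, the TUR product in its standard current form, Q = ∆s_tot · var[Ψ(t)] / ⟨Ψ(t)⟩² ≥ 2 (with k_B = 1), is saturated by pure drift-diffusion. The following is a toy illustration of that generic bound only, not the KPZ observable of this paper; all parameters are invented:

```python
import numpy as np

# Pure drift-diffusion: dx = v dt + sqrt(2 D) dW.  For this process
# <x> = v t, var[x] = 2 D t, and the entropy production rate is v^2/D,
# so Q = (v^2 t / D) * (2 D t) / (v t)^2 = 2 exactly (bound saturated).
rng = np.random.default_rng(1)
v, D, t, n_traj, n_steps = 1.0, 0.5, 10.0, 200_000, 100
dt = t / n_steps
x = np.zeros(n_traj)
for _ in range(n_steps):
    x += v * dt + np.sqrt(2 * D * dt) * rng.standard_normal(n_traj)

sigma = v**2 / D                         # entropy production rate
Q = sigma * t * x.var() / x.mean()**2    # empirical TUR product
```

Here Q comes out close to the saturation value 2; the point of the analysis above is that the KPZ current instead yields a coupling-dependent Q strictly above this bound.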
Hence, with the exact results in section 3 (see (18)) and the above approximations for the variance, we can formulate the TUR product Q in the long-time limit as in (80). Here we state the Λ-dependent result from (33) in anticipation of the comparison to numerical simulations at fixed system size, which also implies a fixed value of Λ.

The Numerical Scheme

In this section we present numerical simulations of (1) via a stochastic Heun method as described in [26]. We use these simulations to determine numerically the values of ⟨Ψ(t)⟩², ∆s_tot and var[Ψ(t)], and therefore Q. Due to the sensitivity of the numerics to the specific discretization of the KPZ nonlinearity, as discussed in [26], we choose the discretization introduced in [48] (i.e., γ = 1/2 in [26]), as this proved to yield the most accurate results in [26]. For the technical details and the respective definitions of the numerical observables we refer to [26]. The numerical scheme uses scaled system parameters {ν, ∆_0, λ} given by (81), with δ = b/L and L the numerical system size [26]. For the sake of simplicity, we set δ = 1. This implies (82) for the effective coupling constant λ_eff. For all numerical data shown here we used ν = ∆_0 = 1, and thus the critical value of the effective coupling constant is reached for the value given in (83) (see (23)). In the case of L = 256, which is the system size we used for the data shown below, (83) yields λ_c ≈ 0.768. Like in [26], we use the relation Λ = (L − 1)/3 to establish a connection between the numerical system size L and the Fourier-cutoff parameter Λ from e.g. (80). Thus, in terms of the numerical parameters {ν, ∆_0, λ} and L, the expression for ⟨Ψ(t)⟩² reads as in (86), with (16), and for ∆s_tot we have (87), with (17). Both expressions in (86) and (87) may also be found in [26], however there under the condition λ_eff ≪ λ_eff^c.
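As a rough illustration of such a scheme, here is a minimal stochastic Heun (predictor-corrector) integrator for the discretized 1d KPZ equation with periodic boundaries. It uses a plain centred difference for the nonlinear term rather than the specific γ = 1/2 discretization of [26,48], and all parameter values are illustrative:

```python
import numpy as np

def kpz_step(h, dt, dx, nu, lam, D, rng):
    """One stochastic Heun step for the discretized 1d KPZ equation with
    periodic boundaries.  Centred differences are used for both derivatives;
    [26,48] discuss more careful choices for the nonlinear term."""
    def drift(f):
        lap = (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2
        grad = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)
        return nu * lap + 0.5 * lam * grad**2
    # One noise increment, shared by predictor and corrector.
    xi = np.sqrt(2.0 * D * dt / dx) * rng.standard_normal(h.size)
    h_pred = h + dt * drift(h) + xi                      # Euler predictor
    return h + 0.5 * dt * (drift(h) + drift(h_pred)) + xi  # Heun corrector

rng = np.random.default_rng(0)
L, dx, dt = 256, 1.0, 0.01
h = np.zeros(L)                 # flat initial condition
for _ in range(2000):           # evolve to t = 20
    h = kpz_step(h, dt, dx, nu=1.0, lam=0.1, D=1.0, rng=rng)
```

From trajectories like h one would then accumulate the spatially integrated height Ψ(t) and its sample variance over many noise realizations, which is what the comparisons below are based on.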
Similarly, for system parameters such that λ_eff ≪ λ_eff^c, we get with the result from (78) the expression (88) for the variance of Ψ(t). On the other hand, for parameter sets with λ_eff^c ≪ λ_eff, we have with (79) the expression (90) for var[Ψ(t)]. Accordingly, we have (91) for the TUR product Q.

8.2 ⟨Ψ(t)⟩² and ∆s_tot

In Fig. 2 we show, for two values of λ, a comparison of numerical data for both ⟨Ψ(t)⟩² and ∆s_tot with the respective theoretical predictions according to (86) and (87). In the case of λ = 0.1 the system is in the EW scaling regime of the KPZ equation and thus the relevant time scale is the EW correlation time t_c^EW, which is indicated by the vertical line in Figs. 2(a) and (b). As can be seen, for times t > t_c^EW the numerical data follow the theoretical prediction for both ⟨Ψ(t)⟩² and ∆s_tot. For λ = 4.0 the system is in its KPZ scaling regime, which implies that the numerical data are expected to converge to the theoretical predictions for times t > t_c^KPZ, i.e., beyond the KPZ correlation time. The results in Fig. 2 thus lend additional support to the fact that the expressions for ⟨Ψ(t)⟩² and ∆s_tot obtained analytically in (86) and (87), respectively, hold for an arbitrary coupling constant. This extends the range of validity of these two quantities from the EW regime (or weak-coupling limit) (see [25,26]) to the KPZ regime (or strong-coupling limit). Fig. 3 shows numerically obtained data for the variance of Ψ(t) for a system size of L = 256 and for λ = 0.1 (see Fig. 3(a)) and λ = 0.768 ≈ λ_c from (84) (see Fig. 3(b)). To demonstrate the effect of including higher-order terms in the approximation of var[Ψ(t)] in the EW scaling regime, we show in Fig. 3 each partial sum of the expansion in (88) separately, in increasing order. As can be seen clearly in Fig. 3(a), there is no discernible difference between the lowest- and highest-order perturbation results for λ = 0.1. In Fig. 3(b), for λ = 0.768, however, the difference between the three approximation orders becomes apparent.
8.3 Variance of Ψ(t) and Universal Scaling Amplitude

Here the zero-order approximation (λ⁰) underestimates the numerical data and the first-order approximation (λ²) slightly overestimates it, whereas the second-order result (λ⁴) matches the numerical data well. For values λ > λ_c we leave the region in which the perturbation expansion from section 5 is expected to be valid, which is reflected in a rapid decline in the quality of the highest-order approximation (not shown explicitly), as is to be expected. The numerical values in the legend of Fig. 3 are obtained by evaluating (89) and inserting these results into (88) for L = 256 (i.e., Λ = (L − 1)/3 = 85). For the case λ_eff^c < λ_eff, we show in Fig. 4 numerical data for the variance of Ψ(t). As can be seen clearly, the variance displays the expected scaling behaviors (see (90)): on the one hand, for times t < t_EW→KPZ, scaling according to the EW scaling regime of the KPZ equation; on the other hand, for times t > t_EW→KPZ, Fig. 4 shows the typical KPZ scaling behavior, namely for t_EW→KPZ < t < t_c^KPZ the transient regime with its super-diffusivity and for t_c^KPZ < t the stationary KPZ regime. Regarding the EW-to-KPZ crossover time from (19), we see very good agreement between the theoretical prediction, indicated by the left vertical line in Fig. 4, and the numerical data. However, the theoretical prediction for the KPZ correlation time from (20), shown as the right vertical line in Fig. 4, seems to be too small, as the super-diffusive behavior continues beyond t_c^KPZ. We will investigate this in more detail below. This discrepancy aside, we find good agreement in all three sub-regimes of the variance between the numerical data and the theoretical predictions from (73) and (90). In Fig. 5 we show our approach to determining the KPZ correlation time numerically; the resulting value is roughly a factor of 2 larger than the one from (20). To be precise, we find (93), where the factor is the mean of the right-hand column of Tab. 1 and the error is the standard deviation of the mean. Let us use (93) to reevaluate the universal scaling amplitude c_0 from (74) according to the calculation in (73), which leads to (94). Hence, we get for the universal scaling amplitude c_0 the value in (95), where the theoretically predicted value from [32] is √π/4 ≈ 0.44, which lies well inside the error bars of (95). Thus, by using the numerically obtained value of the KPZ correlation time t_c^KPZ from (93) and the DRG result for the variance of Ψ(t) from (90) in the transient regime, together with the matching condition from (72), we are able to obtain the universal scaling amplitude from (74) to good accuracy (see (95)). The result in (95) is a considerable improvement over (73), which used t_c^KPZ from (20).

TUR Product Q

In Fig. 6 we show, for three specific values of λ, the time evolution of the TUR product Q. As can be seen for the two cases with λ ≤ λ_c, Fig. 6(a), (b), the perturbation expansion from (91) yields convincing agreement with the numerical data for times t ≥ t_c^EW. To demonstrate the effect of the higher-order contributions in the perturbation scheme, we also plot the zero-order result for reference. In Fig. 6(c), (d), i.e., in the KPZ scaling regime, we find that for times t ≤ t_EW→KPZ the TUR product Q converges to the EW scaling result, namely Q = 5 − 3/(L − 1). For times t ≥ t_c^KPZ we see the final convergence to the KPZ steady-state result for Q, where in Fig. 6(c) the upper horizontal line indicates Q according to (91) and the lower line represents the result one obtains by using (73). Both can be seen as reasonable approximations to the steady-state value of Q; however, in the light of the results regarding (20), the numerically obtained correlation time t_c^KPZ (see Fig. 5, and Tab. 1, which lists t_c^KPZ in dependence of λ for L = 256, with the right column showing the ratio of the two correlation times), and the resulting universal scaling amplitude in (95), we regard the upper line as the more reliable one.
This is further supported by the observation in [26] that the numerical scheme intrinsically underestimates the TUR product Q. For the theoretical prediction in the transient regime of the KPZ equation in Fig. 6(c), we rely on the assumption that the steady-state results for ⟨Ψ(t)⟩² from (86) and ∆s_tot from (87) yield reasonable approximations even for times t smaller than the KPZ correlation time t_c^KPZ. This is to some extent justified by the findings in Fig. 2. Using (90), we thus expect (96) for t_EW→KPZ < t < t_c^KPZ, which is what we plotted in Fig. 6(c). As can be seen, the expression in (96) predicts the transient time behavior well. The slight offset may either be a result of the intrinsic numerical underestimation of Q [26] or originate in a minor error of the DRG result from (90) in terms of the numerical prefactor c_0. In Fig. 6(d) we show the same graphs as in Fig. 6(c); however, here we use the numerically obtained value of the KPZ correlation time t_c^KPZ from (93) and the correspondingly reevaluated universal scaling amplitude from (95), which replaces the numerical prefactor in (73). This closes the gap between the two stationary results in the KPZ regime and thus smooths the transition between the two branches of (90) for times t > t_EW→KPZ. In Fig. 7 we show the TUR product in dependence of λ. The values for Q_τ are obtained by calculating the temporal average of Q in the KPZ stationary state for times t ≥ τ = 10⁴. We expect the numerical data to follow the prediction in (91), which they do with good agreement, as can be seen in Fig. 7. Here the solid line below λ_c represents the perturbative result and is compared to the zero-order result, depicted as the horizontal dashed line. In Fig. 7(a) we show, for λ > λ_c with λ_c from (84), the theoretical predictions according to (73) and (91), i.e., for the KPZ correlation time from (20). On the other hand, Fig. 7(b) displays the same theoretical predictions, reevaluated with the numerically obtained KPZ correlation time from (93) and the ensuing c_0 from (95). Using the KPZ correlation time t_c^KPZ from (93) also demands a reevaluation of the critical value of the coupling parameter. Repeating the calculation of (23) in section 4, we obtain for the numerically determined KPZ correlation time an effective critical coupling parameter of λ_eff^c ≈ 9.47 and thus, for a system size of L = 256, the value λ̂_c shown in Fig. 7(b). As can be seen, and as is to be expected, when using (93), for λ < λ̂_c the fourth-order perturbation expansion from (91) is cut off before it reaches its local maximum (as opposed to Fig. 7(a)), which seems to be physically more reasonable behavior.

Conclusion

We have given an analytical description of the thermodynamic uncertainty relation in dependence of the coupling strength of the KPZ nonlinearity, see (80). In particular, we showed that equal-time correlation functions, in the present case the steady-state current J and the entropy production rate σ, can be obtained exactly via functional integration using the known steady-state probability density functional p_s[h] of the (1+1)-dimensional KPZ equation, see (16) and (17), respectively. In the case of var[Ψ(t)] we extended the result from [25] by calculating the next order of the perturbation expansion, valid in the EW scaling regime of the KPZ equation, see (39). Further, we approximated var[Ψ(t)] in the KPZ scaling regime via a DRG approach and obtained an analytic expression in the transient KPZ scaling regime, which not only recovers the correct scaling form but moreover yields an explicit amplitude factor, see (71). To our knowledge, this has not been done before. The knowledge of the general scaling behavior of var[Ψ(t)] in a finite KPZ system, see Fig. 1, enables us to match the result from the DRG calculation in the transient KPZ regime to the stationary KPZ regime, see (73).
We found that (73) is in accordance with a result obtained via scaling arguments in [32], differing only in a numerical prefactor, the universal scaling amplitude c_0 of [32]. The numerical value of this prefactor depends on the KPZ correlation time t_c^KPZ from [32], see (20). During our numerical analysis we found that, for the data shown in section 8, this theoretically predicted KPZ correlation time is too small by roughly a factor of 2, see Tab. 1 and (92). With the numerically obtained correlation time we reevaluated the calculation leading to the universal scaling amplitude c_0 and found that, within the error bars, the result matches the theoretically predicted exact value in [32], see (75). We would like to emphasize that here the universal scaling amplitude c_0 has been determined, if only approximately, without any recourse to a particular model problem within the KPZ universality class combined with the universality hypothesis. Furthermore, we found good agreement between the numerical data and the theoretical predictions for the individual KPZ-TUR ingredients, namely J, σ and var[Ψ(t)], for arbitrary values of the coupling strength (see Figs. 2-4), as well as for the TUR product Q itself, both as a function of time (see Fig. 6) and as a function of the coupling parameter (see Fig. 7). In particular, we were able to describe the Q(λ_eff) behavior in the EW scaling regime (λ_eff ≪ λ_eff^c) via a perturbation expansion up to O(λ_eff⁶) in the effective coupling parameter. It shows that in the weak-coupling limit (λ_eff ↓ 0) Q(λ_eff) tends to 5 from above. For the KPZ scaling regime we found asymptotically, for λ_eff^c ≪ λ_eff, a linear dependence of Q(λ_eff) on the effective coupling parameter. The perturbative description in the EW regime is expected to hold for λ_eff ↑ λ_eff^c, which is supported by the numerical results.
However, it is not clear whether the DRG result, i.e., Q(λ_eff) ∼ λ_eff, remains valid for λ_eff ↓ λ_eff^c, i.e., whether there are corrections to this linear behavior. While the numerical data do not indicate such corrections, their absence would imply that Q(λ_eff) is not smooth at λ_eff = λ_eff^c. Resolving this issue has to be left for future work.
Expression of matrix metalloproteinases 1, 3, and 9 in degenerated long head biceps tendon in the presence of rotator cuff tears: an immunohistological study

Background: Long head biceps (LHB) degeneration, in combination with rotator cuff tears, can be a source of chronic shoulder pain. LHB tenotomy reduces pain and improves joint function, although the pathophysiological context is not well understood. Tendon integrity depends on the extracellular matrix (ECM), which is regulated by matrix metalloproteinases (MMPs). It is unclear which of these enzymes contribute to LHB degeneration, so we chose to study MMP 1, 3, and 9 and hypothesized that one or more of them may be altered in degenerated LHB, whether diagnosed preoperatively or intraoperatively. We compared the expression of these MMPs in degenerated LHB and healthy tendon samples.

Methods: LHB samples of 116 patients with degenerative rotator cuff tears were harvested during arthroscopic tenotomy. Patients were assigned to 4 groups (partial thickness tear, full thickness tear, cuff arthropathy, or control) based upon intraoperative findings. Partial and full thickness tears were graded according to Ellman's and Bateman's classifications, respectively. MMP expression was determined by immunohistochemistry.

Results: MMP 1 and 9 expression was significantly higher in the presence of rotator cuff tears than in controls, whereas MMP 3 expression was significantly decreased. MMP 1 and 9 expression was significantly higher in articular-sided than in bursal-sided partial thickness tears. No significant association was found between MMP 1 and 9 expression in full thickness tears and the extent of the cuff tear according to Bateman's classification.

Conclusion: Increased MMP 1 and 9 expression and decreased MMP 3 expression are found in LHB degeneration. There is a significant association between the size and location of a rotator cuff tear and MMP expression.
Background

Abnormalities of the long head biceps tendon (LHB) are often associated with rotator cuff tears and may be a reason for persistent shoulder pain [1,2]. Arthroscopic tenotomy of the degenerated LHB usually improves symptoms significantly [3,4]. LHB degeneration can be diagnosed both clinically and radiographically by magnetic resonance imaging (MRI) [5,6]. While tendinopathy has been studied extensively in the supraspinatus, Achilles, patellar, and extensor carpi radialis brevis tendons, there is a paucity of information on LHB tendon degeneration [7-10]. The anatomy of the LHB is unique. The proximal part of the tendon is intraarticular, so pathology may be isolated to the biceps tendon itself or related to the glenohumeral joint and surrounding musculature [11]. The extraarticular portion is protected under the pectoralis major and is subjected primarily to tensional strain [12]. Studies on the histopathology of the intraarticular LHB are rare. Longo et al. demonstrated that ruptured tendons exhibit marked histopathologic changes in comparison with cadaveric tendons [13]. However, the molecular basis of tendinopathy is not completely understood. Much attention has been focused on the matrix metalloproteinases (MMPs) in tendinopathy [14,15]. MMPs are a family of 24 zinc-dependent endopeptidases that collectively degrade the extracellular matrix [16]. MMP 1 belongs to the collagenases, the group that cleaves most subtypes of collagen, especially the fibrillar collagens, which provide mechanical strength. MMP 3 is one of the stromelysins, broad-spectrum proteinases that also have regulatory functions (such as activation of other MMPs). MMP 9 is a gelatinase, which degrades the smaller collagen fragments released during collagenase activity [16]. Comparing the histologic and molecular changes of the intraarticular and extraarticular LHB after tenotomy, Joseph et al. described increased MMP 1 and MMP 3 expression associated with histologic signs of tendinopathy [17].
In our study, we aimed first to demonstrate an alteration of MMP 1, 3, and 9 expression in degenerated LHB compared with healthy controls. Secondly, we hypothesized that there is a correlation between MMP expression in degenerated LHB and the extent of an intraoperatively observed rotator cuff tear.

Methods

116 patients (55 male, 61 female) were included in our study. Approval was granted by the ethics committee of our institution and informed consent was obtained in all cases. 108 patients had a rotator cuff tear requiring surgery. LHB tissue specimens were harvested from the mid-portion of the intraarticular part of the LHB by arthroscopic tenotomy during arthroscopic shoulder surgery (performed by MDS). The control group consisted of 8 trauma patients with humeral head fractures. In this group, LHB samples were harvested during humeral head prosthesis implantation. In every control, the rotator cuff was visualized intraoperatively and confirmed to be normal; shoulder osteoarthritis was excluded radiologically. Patients were divided into four groups according to the intraoperative findings, as follows: Group I: no shoulder pathology (control group); Group II: partial thickness rotator cuff tear; Group III: full thickness rotator cuff tear; Group IV: cuff arthropathy. Cuff arthropathy was diagnosed during arthroscopy of the shoulder when a massive, irreparable rotator cuff tear combined with complete chondral destruction was found [18]. Partial thickness rotator cuff tears were classified according to Ellman grade (I-III) and were categorized as articular-sided ("A") or bursal-sided ("B") [19]. Full thickness rotator cuff tears were graded according to Bateman's classification (grade I-IV) [20]. Additional file 1 gives a detailed overview of the patient demographics.

Specimen preparation

LHB samples were immediately fixed in 4% formaldehyde for 24 hours, dehydrated in graded alcohol solutions and cedarwood oil, and embedded in paraffin.
Sections were cut at 5 μm on a Leica RM2055 microtome (Leica, Wetzlar, Germany) with a 40° stainless-steel knife. Masson-Goldner staining was performed (Merck, Darmstadt, Germany) according to the manufacturer's instructions.

Histology

Histomorphometric analysis was performed at a primary objective lens magnification of 5× using a Leica DM5000 microscope and Quips analysis software (Leica) at 40× objective lens magnification. For differentially stained slices, a 10× objective lens magnification was used. Vessel number and size were determined by counting and measuring the vessels in three separate areas of each specimen. Cell counting was performed at 40× objective lens magnification and recorded as a percentage (MMP-positive cells per total number of cells per slice). Immunostaining was performed using either a labeled streptavidin-biotin method (Dako, Hamburg, Germany; REAL Detection System Peroxidase/DAB+), with the staining reaction based on 3,3'-diaminobenzidine (DAB), or the VECTASTAIN ABC-AP Kit with Vector Red as substrate (Vector Laboratories, Burlingame, Canada). The stained slices were rinsed with distilled water and counterstained for 15 seconds with haemalaun. Lastly, sections were rinsed with water and treated with graduated-density alcohol and xylol, as previously described by Matsui et al. in 1998 [21].

Statistics

Analysis of variance (ANOVA) and modified least significant difference (Bonferroni) tests were used for statistical analysis. Data are shown as the mean ± standard error of the mean (SEM). A p-value of < 0.05 was considered statistically significant. Spearman's rho test was used to evaluate potential correlations.

Results

Group I (controls)

The control group consisted of 8 patients (4 male, 4 female). The exact values for MMP expression are shown in Additional file 1 and Figure 1a-c.

Group II (partial thickness rotator cuff tears)

There were 48 patients in this group, 33 of whom were Ellman grade I.
Of these, 28 partial thickness tears were "A" and 5 were "B". 15 patients were Ellman grade II (7 "A", 8 "B"). There were no Ellman grade III patients. Compared with controls, both MMP 1 and 9 expression was significantly increased, whereas MMP 3 expression was significantly decreased (p = 0.027 and 0.035, respectively). Higher levels of MMP 1 and 9 were found in "A" versus "B" partial thickness rotator cuff tears (p = 0.039, Figure 2a-c).

Group III (full thickness rotator cuff tears)

There were 42 patients in this group. Demographic information, Bateman grades, and MMP 1, 3, and 9 expression are shown in Additional files 1 and 2. MMP 1 expression was significantly higher in this group than in controls (p = 0.021). Groups II and IV showed no significant difference in MMP 1 expression. MMP 3 expression was significantly decreased compared with group I (p = 0.012), while no significant difference in MMP 3 expression was seen between groups II-IV. There was no correlation between MMP 1, 3, and 9 expression and increasing Bateman grade.

Group IV (cuff arthropathy)

In 18 patients (7 male, 11 female), cuff arthropathy was diagnosed. Patient age was 70 (51-87) years on average. MMP 9 expression was significantly increased in comparison with groups I, II and III (p = 0.038). MMP 1 expression was significantly higher than in the control group but not significantly augmented in comparison with groups II and III (p = 0.025). MMP 3 expression was significantly decreased compared with the control group (p = 0.043), but there was no significant difference compared with groups II and III. No statistical correlation could be found between the expression of MMP 1, MMP 3, and MMP 9 and the age of the 116 included patients. Examples of stained tendon sections with different antibodies are given in Figure 3(a-d).

Discussion

We demonstrated that MMP 1 and 9 expression is increased, and MMP 3 expression is decreased, in degenerated LHB compared with healthy controls.
MMPs play an important role in the tendon matrix, as they degrade collagen and proteoglycans in both healthy and diseased tissue [22]. Repeated strain is considered to be the major precipitating factor in tendinopathy [23]. MMP 3 has been viewed as a key regulator of extracellular matrix degeneration and remodelling in normal tissue; Jones et al. proposed that its downregulation limits MMP activation within the tissue [24]. Joseph et al. were the first to describe increased levels of MMP 1 and 3 in the intraarticular (compared with the extraarticular) portion of the LHB [17]. In their rabbit flexor tendon model, Asundi and Rempel made related observations on MMP expression [25]. Discrepancies in MMP expression have been noticed when controls were obtained from healthy tissue adjacent to the tendon, or from cadavers. Therefore, caution is required when comparing studies that utilize different types of control tissue [26]. However, multiple authors have confirmed our findings of increased MMP 1 and 9 as well as decreased MMP 3 expression in degenerated Achilles tendons [26-28]. In this scenario, MMP 1 is considered to be the predominant collagenase [27]. In addition, it has been shown that increased MMP 1 activity occurs in ruptured human supraspinatus tendons [24]. We observed that MMP 1 activity was highest in group II and decreased with rising extent of shoulder pathology, but this was not statistically significant. The process of LHB degeneration and its relationship with rotator cuff tears is not well understood. In a cadaveric study comparing 7 shoulders with rotator cuff tears with 7 healthy shoulders, Carpenter et al. could not find any structural or histopathologic differences in the LHB tendon [29]. Thus, the authors assumed that the LHB retains its material properties in the presence of rotator cuff tears. In contrast to these findings, Peltz et al. demonstrated that LHB mechanical properties worsened over time in a rat model [30].
They assumed that the function of the LHB tendon as a humeral head stabilizer is enhanced when the rotator cuff tendons are weakened; that is, the LHB tendon is required to perform new functions, which alters its mechanical loading. Previous histologic studies demonstrated inflammatory infiltrates in degenerated LHB [31,32]. Increased levels of MMP 9 were observed during aseptic tendon inflammation [24]. Karnousou et al. and Olesen et al. showed increased collagen type 1 synthesis in the peritendineum of Achilles tendons after prolonged mechanical loading [33,34]. In vitro, MMP 9 expression can be enhanced by attachment to collagen type 1 [35]. Both findings may explain the presence of enhanced MMP 9 expression due to altered LHB mechanical loading in the presence of rotator cuff tears. Interestingly, in a recently published experimental study on cultured rat Achilles tendon cells, Tsai et al. revealed upregulated expression of MMP 1, 8, 9 and 13 after incubation of the cells with ibuprofen, a common NSAID that is popular in the treatment of degenerative tendon disease [36]. Further studies are needed to understand the pathophysiology and the clinical impact of these observations. LHB degeneration may be a result of age-dependent shoulder pathology, as it occurs in the presence or absence of cuff tears. Rathbun and Macnab suggested that vascular insufficiency at the entrance to the biceps groove may be responsible for degeneration [37]. These findings are supported by Refior and Sowa, who also described LHB degeneration in the bicipital groove of cadavers [11]. However, the fact that MMP 9 expression is enhanced as rotator cuff pathology worsens leads us to assume that LHB degeneration occurs secondary to injury. Our assumption is supported by the work of Peltz et al. and by the fact that full thickness rotator cuff tears are responsible for distinctive glenohumeral instability.
In contrast, partial thickness tears cause dynamic instability, especially in mid- and end-range of motion [18,30,38]. In the second portion of our study, we aimed to find a correlation between the extent of rotator cuff tears and MMP expression in the tendon. We demonstrated a significant correlation between the presence of rotator cuff tears and MMP 1, 3, and 9 expression. In a recent study, Ko et al. found that "A" tears are associated with intrinsic pathologic changes of the shoulder joint, while "B" tears are found in impingement syndrome, a milder underlying condition [39]. Our findings confirmed these observations, in that MMP 1 and 9 expression was significantly higher in "A" versus "B" tears. Hence, it appears that LHB degeneration may be secondary to intraarticular pathological changes. Unfortunately, we did not have any patients with Ellman grade III tears to test this hypothesis. Further biomechanical research is needed to elucidate this finding. The cause of rotator cuff tears is still controversial. Some authors argue that cuff tears develop from mechanical damage to the tendon caused by subacromial impingement [40]. Others claim that degeneration is due to partial tendon hypovascularisation or primary tendon degeneration [41]. The only consensus is that partial thickness tears may develop into full thickness tears. Gohlke showed that degenerative full thickness rotator cuff tears were more likely to occur in older versus younger patients, a finding we confirmed in our study [42]. In addition, a full thickness tear may progress to cuff arthropathy [43]. We showed that MMP 9 expression increased with the degree of the tear and was highest in group IV. Therefore, our findings suggest that LHB degeneration is related to the course of degenerative rotator cuff disease.
While LHB degeneration and degenerative shoulder disease may develop concomitantly due to common etiological factors, we believe that LHB degeneration follows degenerative shoulder disease. This has not been previously reported, and further research is necessary for this association to be completely understood. Full thickness rotator cuff tears may develop into cuff arthropathy, so many full thickness rotator cuff tears are associated with osteoarthritis of the shoulder. Osteoarthritis was not measured but might influence MMP expression. Furthermore, every patient in group IV had a massive rotator cuff tear, making it difficult to differentiate between the two groups. This bias may be mitigated by the large number of patients in each group. Our study has several limitations. Our control group is small in comparison to the other groups. As MMPs play a pivotal role in the formation of certain tumors, we had to exclude tumor patients who needed prosthesis implantation or upper limb amputation. We chose to obtain LHB samples from trauma patients with comminuted humeral head fractures who required humeral head prosthesis implantation. This indication is rare in young patients, and the operation is usually performed in the elderly. We were able to include 8 patients with healthy LHB tendons and rotator cuffs as controls. Although we attempted to find controls that were matched for age with the other groups, trauma patients are younger on average. If LHB degeneration were a normal result of aging, the age factor could partially explain our results. However, as there was no significant statistical correlation between age and MMP 1, 3, and 9 expression, we do not think age explains our findings. Furthermore, the difference in age between groups is relatively small. We demonstrated that MMP 1 and 9 expression is increased, and MMP 3 expression is decreased, in degenerated LHB compared with healthy controls.
We also showed a significant correlation between the presence of rotator cuff tears and MMP 1, 3, and 9 expression. We believe that LHB degeneration follows degenerative shoulder disease, a claim that has not previously been reported. Further research is necessary to clarify the role of the MMPs in the course of degenerative LHB and rotator cuff disease.

Conclusion
LHB degeneration is associated with increased MMP 1 and 9 expression and decreased MMP 3 expression. It appears that LHB degeneration is secondary to the development of rotator cuff tears and is aggravated over the course of degenerative shoulder disease.

Additional material
Additional file 1: Overview of patient demographics. Presentation of shoulder pathology classification and mean MMP 1, 3, and 9 expression among different groups.
Additional file 2: Overview of mean MMP 1, 3 and 9 expression. Mean MMP 1, 3, and 9 expression for the different grades of full thickness rotator cuff tears.
2014-10-01T00:00:00.000Z
2010-11-25T00:00:00.000
{ "year": 2010, "sha1": "92b0ac8f25337b8d8e70a75027fc96418e0aac14", "oa_license": "CCBY", "oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/1471-2474-11-271", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "92b0ac8f25337b8d8e70a75027fc96418e0aac14", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
59144547
pes2o/s2orc
v3-fos-license
Evaluation of Some Biological Aspects of the Presence of Abrasive Silica and Thickening Silica in Basic Formulations of Dentifrices

The aim of this study was to evaluate some biological characteristics and toxicity of basic formulations of dentifrices containing these substances, and to compare them with two products already on the market which also contain silica in their formulations. The following biological parameters were evaluated: weight of the animals, oral toxicity, hematological parameters, urinary analysis, and histological evaluation. The thrombocytes were also statistically at normal levels. The glutamate-pyruvate transaminase (TGP) showed a normal aspect in 5 of the tested groups, as in the control. Meanwhile, the oxalacetic transaminase (AST) in one group showed a small increase relative to the control group. Regarding urine, with the exception of the rats of one group, the rats of the 4 other experimental groups showed urinary leukocyte counts statistically higher than the control group. The histological evaluation of the animals showed that specimens from liver, stomach, kidney and submandibular gland presented normal aspects for these organs, without significant characteristics related to inflammatory infiltrates in any of the 6 samples tested in their respective groups.

INTRODUCTION
At the present stage of knowledge and development, dentifrices are focused on mechanical plaque removal, alteration of the plaque's microbiological constitution, inhibition of tartar accumulation, promotion of antiseptic action against specific microorganisms, and other specific functions in different age groups, including desensitizing action (Newbrun, 1988; Thylstrup & Fejeskov, 1988; Pacios et al., 2003).
With the rapid development of new dental materials, it is essential to carry out preliminary tests to screen and characterize the potentially harmful effects of a material on oral tissues before it is used clinically. There is therefore great concern about the biological implications of such materials in dentistry, including with regard to cosmetics (Sherrill & Krouse, 1986; Roberta et al., 2003; Andreou et al., 2007; Cerqueira et al., 2008), as the inclusion of insoluble abrasive particles, chemotherapeutic agents and enzymes has prompted the Council on Dental Therapeutics of the American Dental Association to establish rules for the acceptance of these products (Sherrill & Krouse). The clinical relevance of tests for the assessment of cytotoxicity of dental materials is widely recognized. Many in vivo studies have pointed out problems related to toothpaste ingestion by children (Simard et al., 1991; Naccache et al., 1992; Roberta et al.), especially in relation to toothpastes containing fluoride compounds. Different assays and different cell types cultured in vitro have been used to test dental materials. Cytotoxic effects on cells have been assessed from changes in cell morphology, from a slower replication rate for cells which were damaged but not killed, from neutral red uptake for cell viability, from propidium iodide staining for cell death, from amide black staining for cell growth, and from survival rate (Geurtsen et al., 1998; Chen et al., 2002). Considering the problem of dentifrice swallowing by hospitalized patients, who could die from respiratory complications resulting from the swallowing of foam, Morrow et al. (1972) developed a toothpaste without foam.
Considering the growing application of silica products in oral hygiene products for different purposes, this work aims to evaluate some biological characteristics and toxicity of basic formulations of dentifrices containing such substances, and to compare them with two products already on the market which also contain silica in their formulations.

MATERIAL AND METHOD
Biological tests. To verify possible adverse effects of the silica components in dentifrices, with thickening and abrasive purposes, preparations were made containing some of these products together with only water, glycerin and preservative, so that the results obtained would be specific to them. A control solution (sodium chloride at 0.9%), three preparations made with abrasive and/or thickening silica, and two dentifrices available on the market, which also contain silica components, were used.

Weight of the animals. Wistar rats initially weighing around 200 g were used in this study. All aspects of this work were approved by the local ethics committee. The animals, males and females, were weighed at the beginning, each week of the experiment, and finally at the end of the experiment (28 days).

Oral toxicity evaluation. The effects of the samples administered orally to rats, once a day for 28 days, were determined. Water and chow were given ad libitum, removed 4 hours before sample administration, and returned after each administration. Six groups of 12 animals each, composed of 6 male and 6 female animals per group, were used, totaling 72 rats. As a control solution, sodium chloride at 0.9% was used - Group 1.
Group 2 received a preparation of Tixosil 73 (abrasive silica) and Tixosil 333 (thickening silica). Group 3 received a preparation of Tixosil 73 (abrasive silica). Group 4 received a preparation of Tixosil 73 (abrasive silica) and Aerosil 200 (thickening silica). Groups 5 and 6 received commercial preparations: Group 5 received a dilution of the toothpaste Emoform Anti-Plaque, and Group 6 received a dilution of the toothpaste Emoform Anti-Tartar. All experimental dentifrices had their laboratory performance evaluated previously, regarding physical and physico-chemical aspects, and they were also compared with the commercial dentifrices (Pedrazzi et al., 1999a; Pedrazzi et al., 1999b). The preparations of groups 1 and 3 were administered directly to the rats, 0.5 mL/animal. The viscosity of the other preparations required dilution with water in equal proportions so that they could be administered orally to the animals. Due to this dilution, 1.0 mL of this suspension was administered daily. This procedure was also used for the preparations administered to groups 5 and 6.
The suspensions were administered for 28 days, after which the urine and blood of the animals were collected. Urine collection began on the day of the test (day 28), and the urine was labeled and sent together with the blood samples to a laboratory for medical tests. Blood collection occurred on day 29 of the study, the day of sacrifice, when the animals were anesthetized with inhaled sulfuric ether. An incision was made in the abdominal cavity to expose the inferior vena cava and aspirate 4 to 6 mL of blood. The collected blood was deposited into 2 separate vials identified for each animal; one of these vials contained heparin on the internal walls (for transaminase evaluation) and the other did not (for complete blood counting). The material was identified and sent to the Laboratory of Clinical Analysis for the evaluations below.

Hematological evaluation. Red series: red cells, hemoglobin, haematocrit, mean globular volume (VGM), average globular hemoglobin (HGM), average globular hemoglobin concentration (CHGM). White series: leukocytes, neutrophils, eosinophils, basophils, monocytes, lymphocytes. Platelets: the number of thrombocytes. The oxalacetic (AST) and pyruvic (TGP) transaminases were also evaluated.

Urinary analysis. Physical examination: volume, pH and density. Chemical review: ketone bodies. Examination of microscopic sediment: leukocytes, erythrocytes.
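The derived red-series indices listed above (VGM, HGM and CHGM, i.e. the mean corpuscular volume, hemoglobin and hemoglobin concentration) follow from the directly measured red cell count, hemoglobin and haematocrit by standard formulas; a minimal sketch (the input values are hypothetical illustrations, not data from this study):

```python
# Standard derived red-series indices (VGM = MCV, HGM = MCH, CHGM = MCHC).
# Input values are hypothetical illustrations, not data from this study.

def red_series_indices(rbc_millions_per_ul, hemoglobin_g_dl, hematocrit_pct):
    """Return (VGM in fL, HGM in pg, CHGM in g/dL)."""
    vgm = hematocrit_pct / rbc_millions_per_ul * 10    # mean corpuscular volume
    hgm = hemoglobin_g_dl / rbc_millions_per_ul * 10   # mean corpuscular hemoglobin
    chgm = hemoglobin_g_dl / hematocrit_pct * 100      # mean corpuscular Hb concentration
    return vgm, hgm, chgm

vgm, hgm, chgm = red_series_indices(5.0, 15.0, 45.0)
print(f"VGM = {vgm:.1f} fL, HGM = {hgm:.1f} pg, CHGM = {chgm:.1f} g/dL")
# → VGM = 90.0 fL, HGM = 30.0 pg, CHGM = 33.3 g/dL
```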
Histological evaluation. The animals were sacrificed and, through conventional methods, samples were taken from tissues of the organs selected for the study (liver, stomach, kidneys, and submandibular gland). The samples were fixed for 24 hours in 4% PFA and processed with standard histological techniques; serial histological sections 7 µm thick were cut and subsequently stained with hematoxylin-eosin. The histological fields were captured at 60X and 200X magnification with the aid of a Nikon Alphaphot-2 (YS2) optical photomicroscope. Due to the infusion solution, the weight of the organs was not considered in this phase.

Statistical analysis. The data did not present a normal distribution; therefore, the non-parametric Wilcoxon rank sum test was used with a 95% confidence level.

RESULTS
Body weight analysis. When contrasted with Group I (control), among the five experimental groups only Group IV showed a statistical difference in weight gain (p<0.05), both for male and female rats (Table I).

Hematological analysis, red series. When compared to Group I, considering the erythrocyte parameter for the five experimental groups, only Group II presented statistical significance (p<0.05). For the hemoglobin parameter, only Group II showed a significant difference (p<0.05). For the haematocrit, VGM and HGM parameters, no significant difference (p>0.05) was found for any of the experimental groups when contrasted with Group I (control). For the CHGM parameter, groups IV, V and VI showed statistical significance (p<0.05) (Table II).
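The non-parametric group comparisons reported throughout these results can be sketched with a plain implementation of the two-sided Wilcoxon rank sum test (normal approximation, average ranks for ties); the sample values below are hypothetical placeholders, not the study's measurements:

```python
import math

def wilcoxon_rank_sum(x, y):
    """Two-sided Wilcoxon rank sum test (normal approximation,
    average ranks for ties). Returns the p-value."""
    combined = sorted((v, 0 if i < len(x) else 1)
                      for i, v in enumerate(list(x) + list(y)))
    # assign 1-based ranks, averaging over ties
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[k] = avg
        i = j + 1
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)  # rank sum of x
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

control = [201, 198, 205, 210, 199, 202]       # hypothetical measurements
experimental = [230, 228, 235, 240, 233, 231]
p = wilcoxon_rank_sum(control, experimental)
print(f"p = {p:.4f}, significant at the 95% level: {p < 0.05}")
```

The same test is available as `scipy.stats.ranksums`; the hand-rolled version above only avoids the external dependency.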
Hematological analysis, white series. Analyzing the leukocyte parameter, Group III was the only one that showed a significant alteration (p<0.05) when contrasted with Group I (control). Likewise, for the neutrophil parameter, Group III was the only one that presented a statistical difference (p<0.05). For the eosinophil parameter, none of the groups compared to Group I (control) showed statistical significance (p>0.05). In the monocyte evaluation, Group II showed a significant alteration (p<0.05) compared to the control. For the lymphocyte parameter, none of the groups showed a statistical difference (p>0.05) compared to Group I (control) (Table III).

Transaminase and platelet analysis. Neither the platelet nor the glutamate-pyruvate transaminase (TGP) parameters, considering the 5 experimental groups, showed statistically significant differences (p>0.05) when contrasted with Group I (control). However, the oxalacetic transaminase (AST) parameter in Group III showed statistical differences (p<0.05) (Table IV).

Urinary evaluation. Significant differences (p<0.05) were found between Group IV and Group I (control), where the volume of urine excreted by the rats of Group IV was statistically higher than that of the rats of Group I, and between Group VI and Group I (control), where the urine volume excreted by the rats of Group VI was lower than that of the rats of Group I, meaning that the volume of urine excreted followed the order IV>I>VI. Evaluating the pH parameter, only Group VI showed statistically significant differences (p<0.05) in relation to Group I (control), since the urine of the rats of Group VI presented the highest pH, i.e., alkaline, when the normal would be acid or slightly acid.
When the density parameter was evaluated, only Group V showed a statistical difference (p<0.05). Considering the presence of leukocytes in the urine of the rats, Groups III, IV, V and VI were statistically different (p<0.05) from Group I (control): a great presence of leukocytes was found in the urine of the rats of these groups, indicating an inflammatory process in their urinary tract. Only the rats of Group II (abrasive silica Tixosil 73 + thickener Tixosil 333) did not present significant differences (p>0.05) in relation to the rats of the control group. Regarding the presence of red blood cells in the urine of the rats, only Group II showed significant differences (p<0.05) when contrasted with Group I (control), but the number of red blood cells in the urine of these rats was lower than in the rats of the control group, showing that there was no hematuria in any of the tested groups (Table V).

Histological analysis. Under macroscopic examination, the removed organs showed a normal aspect, without morphological changes. The samples of liver, stomach, kidney and submandibular gland stained with hematoxylin-eosin revealed that all of them had histological characteristics compatible with normal structures, without the inflammatory cell infiltration that would indicate the existence of anomalies in these structures (Fig. 1).

DISCUSSION
The purpose of this study was to evaluate some biological characteristics and toxicity of basic formulations of dentifrices containing silica, and to compare them with two products on the market which also contain silica in their formulations. The samples tested in this study follow the requirements of Gaffar & Afflitto (1992) and other authors who call for compatible systems for therapeutic dentifrices and other dental materials (Cerqueira et al.)
that do not impair or interfere with the bioavailability and activity of the substances with preventive and curative action in a toothpaste. Regarding the weight variations of the animals, variations were found in all tested groups. The animals generally gained weight over time, a situation commonly found in the Wistar rat lineage (Snell, 1941; Farris & Griffith, 1962; Totani et al., 2008). Male animals of Group I gained on average 54% of total body weight during the month of the experiment, while Group II gained 75% and the animals of Group III showed on average 74% weight gain. Group IV gained 82% (the largest mean percentage gain), Group V showed 54% weight gain on average, and Group VI gained 66%. Female rats showed the following average body weight gain for each group: Group I = 30%, Group II = 27%, Group III = 31%, Group IV = 48% (the greatest percentage weight gain of the 6 groups), Group V = 17% and, finally, Group VI = 24%. The difference in weight gain between males and females was expected, since females at this age are entering the reproductive phase, with obvious hormonal changes, which results in proportionally less weight gain (Snell; Farris & Griffith; Gorski et al., 2006). In the statistical analysis of the variation in weight gain, with males and females analyzed separately, the non-parametric Wilcoxon test revealed that only the rats belonging to Group IV showed significant weight gain when contrasted with Group I (control).
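The percentage gains quoted above are relative weight gains over the experimental period; a trivial sketch (the example weights are hypothetical, not raw data from the study):

```python
def percent_weight_gain(initial_g, final_g):
    """Percentage body-weight gain over the experimental period."""
    return (final_g - initial_g) / initial_g * 100

# hypothetical example: a rat starting at 200 g and ending at 308 g
print(f"{percent_weight_gain(200.0, 308.0):.0f}%")  # → 54%
```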
The clinical and laboratory reports for blood (red series) showed that one rat of Group I presented discrete anisocytosis, a change in the diameter of the red blood cells, and polychromatophilia, red blood cells with traces of basophilic substances (RNA). When a concomitant increase of polychromatophilia and reticulocytes (reticulocytosis) occurs, it constitutes a case of severe regenerative anemia (Petkov, 1994). Two rats showed, along with marked anisocytosis and polychromatophilia, an anemia of the type that occurs in thalassemia, and the hematocrit of one of these rats was below normal, showing the existence of anemia. The peripheral blood of the 9 remaining rats in the group presented no morphological changes. In Group II, all the rats showed blood with a normocytic aspect and peripheral blood without morphological changes. In Group III, 2 female and 1 male rats showed marked anisocytosis and polychromatophilia, while another 2 male and 1 female rats showed discrete anisocytosis and polychromatophilia, revealing an anemic condition, especially in the last rat, which had a low haematocrit. The other 6 rats had peripheral blood without morphological changes. In Group IV, 3 male rats and 1 female rat presented polychromatophilia in the blood, and 1 female rat presented discrete anisocytosis with low haematocrit, indicating that it could be in an anemic condition. All other rats showed normocytosis without morphological changes. In Group V, the blood of 3 male and 3 female rats had discrete anisocytosis and polychromatophilia. One rat had only mild anisocytosis, and another female rat presented an anemic condition revealed by the haematocrit. The blood of the other 4 animals was normal, without morphological changes. In Group VI, all the rats showed normocytic blood and peripheral blood without morphological changes. One male and one female rat showed discrete anisocytosis, but with normal haematocrit. All these descriptions are in accordance with the classical literature of
hematology, described here by Harmening (2002). Moreover, the statistical analysis of the red blood examinations, for the red cell parameter, showed that among the five experimental groups only Group II had a slight increase in the number of these cells, but this does not characterize an erythrocytosis condition. In the hemoglobin examinations, Group II was also the only group with a significant increase compared to Group I, although again this does not characterize erythrocytosis. For the haematocrit, VGM, and HGM parameters, none of the experimental groups showed significant changes when contrasted with Group I, which indicates that there is no evidence of anemia in the five experimental groups (Harmening). The CHGM parameter, within the analysis of the red blood series, was the one for which the largest number of groups differed from Group I, with an increase in this hematological index. According to the clinical and laboratory report issued with the results for the white blood series, in Group II, 2 female rats showed leukopenia, but with good proportions of lymphocytic and neutrophilic cells. Two female and 1 male rats showed neutrophilia, which may have clinical significance as part of an endogenous intoxication. One male rat had lymphocytopenia, which could suggest cirrhosis, although, as it was not absolute, this remains uncertain. There were 7 rats with normocytosis. In Group III, there was normocytosis in 8 rats. However, there was leukocytosis in 1 male and 2 female rats; the leukocytosis in the male animal (24,300/mm3) suggests only that the animal was in the third, lymphocytic, defensive stage, healing from infectious processes. The monocyte and eosinophil levels confirm this situation. In Group IV, 5 female rats showed leukopenia, more pronounced in one of them, which could indicate an acute anemia, allergy or poisoning. In one male rat, there was a condition of mild lymphocytopenia without clinical
significance. The other rats of the group presented normocytosis. In Group V, only 3 male rats showed mild leukocytosis and 1 female rat presented mild leukopenia, without clinical significance; the others showed normocytosis. In Group VI, leukocytosis was found in 4 male rats, possibly associated with recovery from an infectious process; mild leukopenia was observed in 1 female rat, but without clinical significance. In the evaluation of the white blood series, the results indicated that Group III was the only one that presented an increase in the number of these cells (mild leukocytosis and neutrophilia) that was statistically significant when contrasted with Group I, but the increase was too slight to be of importance. The eosinophil and lymphocyte parameters (important for the diagnosis of infections and malignancies) were not statistically significant in any of the five experimental groups when contrasted with Group I. For the monocyte level, Group II was the only one that presented statistical significance when contrasted with Group I, but on a small scale, at most suggesting the presence of protozoa in animals of that group.
According to the clinical and laboratory reports issued with the biochemical analysis of the blood (oxalacetic and glutamate-pyruvate transaminases) and the thrombocyte series, in Group I only 1 female rat presented a small change in the glutamate-oxaloacetic transaminase (AST) level, but this could mean only a liver intoxication, according to cross-checking with the leukogram. The platelets were at normal levels, according to the medical reports. The rats of both genders in Group II showed regular levels of transaminases and thrombocytes. In Group III, there were two cases of significant TGO increase. A male rat, with a level of 709 U, probably had hepatic necrosis less serious than that of the male rat with 1095 U, which may indicate serious viral hepatitis. Liver obstruction and kidney problems may have occurred in the rats with the higher levels; however, the leukogram findings of these rats are not conclusive enough to speculate about these pathologies. Concerning the thrombocyte system, the findings were normal. In Group IV, all the rats showed levels of TGO, TGP, and platelets in normal condition. In Group V, all the rats of both genders also showed normal levels of thrombocytes and transaminases, according to the literature (Snell; Farris & Griffith; Silva Jr. et al., 1986). Group VI showed normal values for transaminases and platelets. In addition, considering the results of the transaminase (TGO and TGP) and platelet examinations, the glutamate-pyruvate transaminase (TGP) and platelet series did not reach statistical significance when Group I was compared to any of the five experimental groups. Meanwhile, for the oxalacetic transaminase (TGO) parameter, Group III showed greater values than Group I (control), although no evidence of serious liver disease was observed.
The clinical and laboratory findings reported slight bacteriuria in the urine of the rats in Group I. The rats of Group II mostly had cloudy urine, which coincides with the pH reaction (freshly collected urine is translucent and crystalline when acid, but cloudy when at rest and alkaline). The rats of this group had degrees of intense bacteriuria, which coincides with the findings of neutrophilia in the leukogram. The rats of Group III had cloudy urine, with the presence of red blood cells and/or hemoglobin in the urine of 5 male rats and 4 female rats, indicating a possible hematuria condition, which could suggest glomerulonephritis, cystitis or leukemia. However, these findings were not supported by those of the leukogram. In Group IV, the presence of many white blood cells and red blood cells in the urine of the rats, as well as intense bacteriuria, could strengthen the hypothesis of anemia in one of the animals. Animals of Group V showed moderate degrees of bacteriuria and the presence of ketone bodies in 5 rats, drawing attention to a deficiency of glucose metabolism provoking acetone accumulation in the liver, forming "ketone bodies" (Havel & Weuster-Botz, 2007; Quijano et al., 2007), characteristic of diabetes and starvation, which could explain the low weight gains of the rats of this group. In Group VI, amorphous calcium phosphate was found in the urine of all the rats, without, however, diagnostic significance.
The Wilcoxon analysis applied to the urine examination showed, for the urine volume parameter, that the volume excreted by the rats of Group IV was statistically greater than that excreted by the rats of Group I, while the volume excreted by the rats of Group VI was statistically lower than that of the rats of Group I, but without any evidence of physiological and/or pathological changes. Analysis of the urine density showed that for Group V this parameter was statistically higher than for Group I, but as density and volume are correlated, no clinical significance was found. In relation to urinary pH, only Group VI presented statistical significance when contrasted with Group I, showing a higher average pH than the control group, which means that the urine of the rats in that group showed alkaline characteristics when it should be acid. This fact could be explained by assuming that this sample probably contains specific agents. In relation to the presence of leukocytes in the urine of the rats, contrasting the five experimental groups with Group I, statistical differences were found in most groups, indicating that the urinary tract of the rats of these groups had an inflammatory process. Only Group II did not differ in this parameter when contrasted with Group I. Regarding the presence of red blood cells in the urine, only Group II excreted statistically fewer red blood cells in the urine than the rats of Group I, indicating that the urinary tract of these rats did not present hematuria.
In relation to the histological evaluation of the liver, stomach, kidney and submandibular gland, the analysis showed that all the samples had characteristics compatible with normal aspects for each organ, in Group I and in the other groups, according to the literature (Costa & Chaves, 1949; Bailey et al., 1973; Leeson et al., 1985). A few cases of inflammatory infiltration observed in some tissues were considered normal. Hannon et al. (1983) studied cases of silica foreign-body lesions that developed in the oral cavity of patients after traumatic implantation of sand; in this light, we should consider the possibility of such injuries occurring through the swallowing of products during tooth brushing with dentifrices containing silica in their formulations, or even through the introduction of silica particles into the gingival sulcus. Although some changes were found in the biochemical tests, changes that could point to possible pathological processes, these should be considered isolated cases and are not worrying, because they were mild and were also present in Group I. It was possible to conclude that, with respect to the changes in body weight of the rats during the 29 days of this experimental research, only the rats of Group IV showed a statistically significant weight gain, for both genders.
The blood of the rats showed no changes in the parameters that could conclusively indicate, in clinical terms, anemia or an alteration of blood crasis, and none of the 5 tested samples provoked leukogram reactions indicative of infectious processes in any of the 5 product groups, as in the control group. The thrombocytes were also statistically at normal levels. Glutamic-pyruvic transaminase (TGP) showed a normal aspect in the 5 tested groups, as in the control. Meanwhile, glutamic-oxaloacetic transaminase (AST) in Group III showed a small increase relative to the control group, but the low figures are not indicative of cirrhosis or liver obstruction in the rats of this group. Regarding urine, with the exception of the rats of Group II, the rats of the 4 other experimental groups showed urinary leukocytosis statistically higher than the control group, which is indicative of the occurrence of inflammation in the urinary tract of the animals of these groups. The histological evaluation of the animals showed that specimens from liver, stomach, kidney and submandibular gland presented normal aspects for these organs, without significant characteristics related to inflammatory infiltrates or mutagenicity in any of the 6 samples tested in their respective groups. Grossly, the esophagus and the inner wall of the stomach did not show scars or abnormalities attributable to the introduction of the hypodermic tube with a modified active tip, used for the orogastric inoculation of the tested samples.

SUMMARY: The aim of the study was to evaluate some biological and toxicity characteristics of basic dentifrice formulations containing silica in their composition and to compare them with two commercially available dentifrices that also contain silica. The hematological analysis showed no differences among the evaluated groups. The thrombocyte levels presented by the groups were also normal. Glutamic-pyruvic transaminase showed a normal aspect in 5 of the studied groups, as well as in the control group. Glutamic-oxaloacetic transaminase showed a small increase in one of the groups. In relation to urine, 4 groups presented urinary leukocytosis significantly higher than the control group. The histological evaluation of the liver, stomach, kidneys and submandibular glands presented a normal aspect, with no inflammatory infiltrate. KEY WORDS: Silica; Dentifrice; Histology; Wistar rats.

Fig. 1A. Photomicrograph showing a liver lobe and the hepatic portal system with normal aspect. H-E (200×).

Fig. 1B. Cross section of the stomach wall showing the smooth muscle cells, glands, and lymphoid and highly vascularized connective tissue, a situation commonly found in rats with normal aspect. H-E (60×).

Table I. Weight variation according to time. Table II. Hematological evaluation: red series according to groups and genders. Table III. Hematological evaluation: white series according to groups and genders. Table IV. Transaminases and platelets evaluation. Table V. Urinary evaluation.
Meta Adaptation using Importance Weighted Demonstrations

Imitation learning has gained immense popularity because of its high sample efficiency. However, in real-world scenarios, where the trajectory distribution of most tasks shifts dynamically, model fitting on continuously aggregated data alone would be futile. In some cases the distribution shifts so much that it is difficult for an agent to infer the new task. We propose a novel algorithm to generalize to any related task by leveraging prior knowledge on a set of specific tasks, which involves assigning importance weights to each past demonstration. We show experiments where the robot is trained on a diversity of environmental tasks and is also able to adapt to an unseen environment using few-shot learning. We also developed a prototype robot system to test our approach on the task of visual navigation, and the experimental results obtained confirm these suppositions.

I. INTRODUCTION

In recent years, we have seen agents perform numerous tasks using imitation learning in countless applications, especially robotics. Significant progress has been made on algorithms that learn amidst noisy environments [9], sparse training signals [26], and imperfect demonstrations [10]. However, there has not been much focus on allowing these agents to gather data and generalize to a wide variety of environments. For the task of navigation in particular this is crucial, because autonomous navigation systems such as self-driving cars and delivery robots should be able to function in almost any situation [21, 22, 23]. Since the data distribution continuously changes, it is challenging to learn a task from a fixed set of data, nor is it practical to obtain a comprehensive dataset [4].
In nearly every real-world application, the data distribution is long-tailed, meaning that the agent will always encounter new patterns that have only a small number of examples. There will always be instances the agent has never encountered in the past. Taking a step forward, we would then want these systems to perform well in any given situation by applying prior patterns. Although we are concerned with navigation, researchers from other backgrounds can also find related reasons to concur with us. Many of the existing solutions restrict their data domain by training and testing on datasets collected in the same environments [36, 11]. Other works, like [2], apply their algorithm to self-driving cars and have a much broader data domain. However, since these models were not trained in different settings, for example cluttered, pedestrian-rich environments, they would not generalize to other settings. Some of the recent works that try to generalize to new contexts are quite promising but also have limitations. With all these practical considerations, it is imperative to design a method that enables the algorithm to function in diverse scenarios. Meta-learning deals with applying prior knowledge from various skills to learn a new skill in a few-shot setting. These algorithms allow the model to utilize previous experience by constructing reusable structured patterns which can then be adapted to new contexts. We propose a method which meta-learns a set of tasks and generalizes to new tasks using a few samples. This paper deals with the first step in making an agent adapt to dynamically changing environments.

II. BACKGROUND AND RELATED WORK

Direct Policy Search is a class of policy estimation algorithms which find the parameters of the model by optimizing a predefined cost function, defined with respect to a reward function or expert demonstrations.
For a single task, methods that learn a policy from expert demonstrations have predominated in the past [16, 3, 15, 18]. For more complicated tasks involving non-stationary data distributions [6], methods involve gathering expert data [29] and training a predictive model. Recently, a number of methods, as outlined by [1], were proposed for fitting a model to the expert data of an agent. These methods mainly deal with mitigating covariate shift, where the input distribution of the training data changes but the conditional distribution of labels given the data remains fixed [27]. Several other works address this problem from another angle [1, 19, 33], especially DAGGER [29], using active learning [14]. Of all of these, we choose DAGGER to train our model as it is simple and works well in practical cases. A number of works have been applied to the task of navigation, like [4, 25]. Some of the significant improvements in imitation learning involve making algorithms robust to long-horizon tasks or changing data distributions [24, 20]. Other works following a similar trend partition the domain into individual tasks and train the model on multiple tasks [35]. An enhancement of multi-task imitation learning, hierarchical imitation learning, involves high-level planners that estimate sub-goals for the low-level policies [37]. Although these methods perform better than naive imitation learning, most of them do not generalize to new related tasks. Many recent works on few-shot imitation learning [12, 5, 8] involve novel meta-learning schemes. The basic principle underlying all these works is adapting and inferring a model on unseen tasks. Some of the novel approaches used by these methods are hybrid loss functions [5], evolving policy gradients [17], and estimating meta parameters [7]. The core idea of most of the works mentioned above is parameter adaptation for unseen tasks.
Some of the few recent works which apply meta-learning to visual navigation are [30, 34]. Compared to others, the meta objective of our adaptive approach relies on the alignment of the gradients evaluated on the training data with those on the test data. Previously, variants of these approaches were used in supervised learning for minimizing the distribution shift between training and test datasets [28]. Our main contributions are outlined as follows:

1) We propose a novel importance weighting method to amplify the gradients evaluated on the training demonstrations for better performance on the test task.

2) Our method is robust on dynamically changing distributions, and can also be extended to meta imitation learning, where an agent needs to quickly learn an unseen related task from prior experience.

3) To the best of our knowledge, we are the first to apply iteratively trained world models, along with our proposed improvements, to the task of imitation learning.

Fig. 3. Picture of the physical robot used to test our method. On the right side, the real-world environments used for evaluating our model are pictured.

This paper is organized as follows. In Section III, world models and our improvements are illustrated. In Section IV, our main contribution is outlined. After that, the different experiments performed and the related observations are explained in Sections V and VI respectively, followed by the conclusion.

III. IMITATION LEARNING USING WORLD MODELS

Instead of learning the parameters by training the model end to end as in [4], an iterative training procedure using World Models [13] is adopted. We hypothesize that modularizing a model and training each component individually works well for many tasks. We corroborate this hypothesis with previous works in neuroscience such as [31, 32]. This methodology also performs well in meta-learning scenarios, where the end policy alone can be retrained for new tasks while leaving the prior modules fixed, as will become clear in the later sections.
A. Preliminaries

A World Model (Figure 1) consists of a Variational Autoencoder (VAE), a Mixture Density Recurrent Neural Network (MDN-RNN) and a neural network policy. These modules are named the Vision (V), Memory (M) and Controller (C) modules. Note that in the remainder of the paper we use policy and controller interchangeably. When a world model is evaluated on a specific task, say T_i, we obtain a set of observation-action pairs. During training, a VAE encodes an observation O_j^i to a latent variable z_j, which is sampled from a normal distribution z ~ N(µ, Σ). In tasks like navigation, where the current state is conditional on the previous state and action, we have a transition model (MDN-RNN) which models p(z_{j+1} | z_j, a_j). Specifically, the MDN-RNN emits h_j at every time-step, which contains the transition probabilities. A state s_j is a vector formed by concatenating h_j and z_j; in the case of model-free approaches, s_j would just be z_j. Given this state, the policy is a single-layer neural network f(s_j, θ).

B. Improvements on existing World Models

To train the V and M modules, instead of spawning trajectories with a random policy [13], we use a trained policy to collect data. In our experiments, we found that reproducing the original world model experiments with V and M trained on data from an expert policy resulted in better performance. We also found that world models generalize very well even with small amounts of data (on the order of 10-20 trajectories), which makes them suitable for real-world settings.

C. Active learning for policy estimation

In this work, we train world models using imitation learning by employing DAGGER as proposed by [29]. DAGGER is an active-learning-based method which gives control to the agent to gather training data. The policy is then trained by aggregating data over each trial in the environment.
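As a rough illustration, the V → M → C pipeline described in Section III-A can be sketched with plain NumPy stand-ins. All dimensions, weight matrices and function names here are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions; the paper does not specify them.
OBS_DIM, Z_DIM, H_DIM, N_ACTIONS = 64, 32, 16, 5

W_enc = rng.normal(scale=0.1, size=(Z_DIM, OBS_DIM))            # stand-in for the VAE encoder (V)
W_rnn = rng.normal(scale=0.1, size=(H_DIM, H_DIM + Z_DIM + 1))  # stand-in for the MDN-RNN (M)
theta = rng.normal(scale=0.1, size=(N_ACTIONS, H_DIM + Z_DIM))  # single-layer policy (C)

def encode(obs):
    """V module: map an observation O_j to a latent code z_j (mean only, for the sketch)."""
    return W_enc @ obs

def rnn_step(h, z, a):
    """M module: update the recurrent state h_j from (h_{j-1}, z_j, a_j)."""
    return np.tanh(W_rnn @ np.concatenate([h, z, [a]]))

def policy(h, z):
    """C module: a single-layer classifier over discretized actions, acting on s_j = [h_j, z_j]."""
    s = np.concatenate([h, z])
    return int(np.argmax(theta @ s))

# One rollout step: o_j -> z_j, s_j = [h_j, z_j], a_j = f(s_j, theta)
h = np.zeros(H_DIM)
obs = rng.normal(size=OBS_DIM)
z = encode(obs)
a = policy(h, z)
h = rnn_step(h, z, float(a))
```

The point of the modular layout is that each piece can be trained (or, later, retrained) independently while the others stay fixed.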
We used the original world model [13] as the expert world model and our improved world model as the agent's model. The policy parameters of the original world model were trained using reinforcement learning and performed particularly well, even amidst disturbances or noise in the environment. This trait made it the right choice for an expert here, since in many situations the agent will reach a vulnerable state from which the expert should recover. In the case of the real-world robot, we do not yet have such a robust expert, so we used a human demonstrator.

IV. PROPOSED METHOD

Our main contribution is outlined below. The code related to our method can be found at https://github.com/kiran4399/weighted_learning.

A. Problem formulation

We consider a meta-supervised learning setting, where the agent has access to the distribution of training tasks p(T_train) at train time. To train the policy, we require a set of expert demonstrations {(s_j^i, â_j^i)}_{j=0}^{T}, where T is the length of an episode, on the set of train tasks T_train^i, to estimate the θ* which maximizes the log-likelihood on the expert data. The goal of the agent is to generalize to a new test task T_test^i using a few samples. The train and test datasets consist of the data aggregated on the training tasks and on the specific test task, respectively. θ* is θ after convergence on the training dataset and is used to initialize φ_i, the policy parameters trained on the test data obtained on a specific task.

B. Assigning importance weights

In general, imitation learning is based on optimizing a model to maximize the log-likelihood, as represented in the following equation, where f is the model and N, s_n, â_n are the number of samples, the states and the actions respectively.
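The DAGGER-style collection of expert-labeled demonstrations described in Section III-C can be sketched as follows. The environment, expert and fitting routine below are toy stand-ins, not the paper's actual components:

```python
import numpy as np

def dagger(env_step, expert_action, fit, policy_action, n_trials=3, horizon=50):
    """Minimal DAGGER loop: the learner drives the rollout, the expert labels
    every visited state, and the policy is refit on the aggregated dataset
    after each trial."""
    dataset = []          # aggregated (state, expert action) pairs
    params = None
    for _ in range(n_trials):
        state = np.zeros(4)                            # hypothetical initial state
        for _ in range(horizon):
            a_learner = policy_action(params, state)   # the agent keeps control
            dataset.append((state.copy(), expert_action(state)))
            state = env_step(state, a_learner)
        params = fit(dataset)                          # retrain on all data so far
    return params, dataset

# Toy stand-ins just to exercise the loop (all hypothetical).
env_step = lambda s, a: 0.9 * s + 0.1 * a
expert_action = lambda s: float(s.mean() < 0.0)
policy_action = lambda p, s: 0.0 if p is None else expert_action(s)
fit = lambda data: "fitted"

params, data = dagger(env_step, expert_action, fit, policy_action)
```

The key property is that states are visited under the learner's own policy while labels always come from the expert, which is what mitigates the covariate shift of naive behavior cloning.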
However, as mentioned before, maximizing the average likelihood may not yield the desired outcome in many applications, because some training samples are irrelevant, noisy or unevenly distributed. We can correct the covariate shift by estimating a non-trivial distribution of scalar weights, estimated from a small data batch drawn from an optimal distribution [13]. Compared to Eqn 2, the optimal parameters and the gradient of the loss function with respect to the parameters φ become their P-weighted counterparts. Here P is a vector of the size of the training set that adapts based on the training and test data. For meta-imitation learning, we can use the same method to learn the task distribution shift and make training adapt to different perturbed scenarios. In other words, during test time we can evaluate the gradients ∇_φ L(φ) on the task-specific demonstrations D_test and impose them on the per-sample gradients ∇_φ L_n(φ) estimated on the training data D_train, where φ_i are the policy parameters. Initially, we train the policy parameters θ on a sampled set of train tasks T_train^i to collect the training data. Note that θ and φ are the parameters of the policy at train and test time respectively. We could have used θ_i instead of φ_i, but we reserve that notation for parameters obtained by training the policy only on the test task T_test^i. During test time, when we assess the generalization ability of a policy on an unseen test task, we train the policy in the same way as during training, but we use the test data to learn the P distribution and calculate the dot product, represented by •, with the per-sample gradients, which we use to update φ. For simplicity's sake, the superscript i is omitted. The distribution P, a vector of size N, can be updated by optimizing the cost function J(P, φ): the squared L2 distance between the average test gradient ∇_φ L(φ) and the P-weighted per-sample training gradients, for each batch of test and train data.
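In LaTeX form, a plausible reconstruction of the objectives described above reads as follows (the symbols follow the text; the equation numbering in the paper differs and the exact equations were lost in extraction, so this is a sketch):

```latex
% Standard imitation-learning objective (cf. Eqn 1-2 in the text):
\theta^{*} = \arg\max_{\theta} \; \frac{1}{N}\sum_{n=1}^{N} \log f(\hat{a}_n \mid s_n; \theta)

% Importance-weighted counterpart, with P = \sigma(p) a softmax over the
% per-sample parameters p_n, so that \sum_n P_n = 1:
\phi^{*} = \arg\max_{\phi} \; \sum_{n=1}^{N} P_n \log f(\hat{a}_n \mid s_n; \phi)

% The weights are fit by matching the P-weighted per-sample training
% gradients to the average gradient on the test demonstrations:
J(P, \phi) = \Bigl\lVert \nabla_{\phi} L^{\mathrm{test}}(\phi)
  - \sum_{n=1}^{N} P_n \, \nabla_{\phi} L_n(\phi) \Bigr\rVert_2^2,
\qquad
\nabla_{p} J = \bigl(\nabla_{p}\,\sigma(p)\bigr)^{\top} \nabla_{P} J .
```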
Note that if the dimensionality of the average gradient ∇_φ L(φ) is R^{a×b}, the dimensionality of the per-sample gradients ∇_φ L_n(φ) is R^{N×a×b}. We also apply a softmax over the distribution parameters p_n so that the P_n always sum to 1; the gradient of the cost function with respect to P can then be computed analytically (Eqn 6). During every iteration, apart from updating φ using our method, we also update the distribution parameters p_n for K iterations. Using the analytical gradient computed in Eqn 6, we compute the gradient with respect to p using the chain rule, where ∇_p σ(p), the gradient of the softmax function, is an N × N matrix. The policy parameters φ_i are thus iteratively learned by utilizing the training data while deriving the P distribution from the test data. A synopsis of the entire algorithm is given in Algorithm 1.

Algorithm 1 Estimate θ* and φ_i*. Require: p(T_train) and p(T_test) as task distributions; α, β and γ as step-size parameters; an expert policy θ̂. [...] Calculate ∇_P J(P, φ) from Eq 8. [...]

V. EXPERIMENTS

A. Experiments on simulator

We used the Car Racing simulator from OpenAI Gym to test our method. We adapted a world model from [13] by retraining the architectures for the VAE and MDN-RNN with different hyperparameters. The controller/policy was changed to a single-layer classifier that categorizes a state into one of five discretized actions. We chose a single-layer policy because simple architectures tend to generalize better. We created multiple car-racing environments and considered them as tasks. We collected 24 expert trajectories, 4 for each task, and used them as the training data. For policy training, we used Stochastic Gradient Descent (SGD) with step-size parameters α, β and γ of 0.01, 0.01 and 0.05 respectively. We limited the K weight updates to 10, which we found sufficient for the experiments. We set τ to 3000 and 4000 iterations for training and testing respectively.
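A minimal NumPy sketch of the inner weight-update loop can make the mechanics concrete. The gradients here are synthetic stand-ins for the real per-sample policy gradients, and the hyperparameter values mirror those in the text (γ = 0.05, K = 10):

```python
import numpy as np

def softmax(p):
    e = np.exp(p - p.max())
    return e / e.sum()

def update_weights(p, grads_train, grad_test, gamma=0.05, K=10):
    """Fit P = softmax(p) so that the P-weighted per-sample training gradients
    match the average test gradient.
    grads_train: (N, D) per-sample gradients; grad_test: (D,) test gradient."""
    for _ in range(K):
        P = softmax(p)
        residual = grads_train.T @ P - grad_test      # mismatch, shape (D,)
        dJ_dP = 2.0 * grads_train @ residual          # grad of the squared L2 cost, shape (N,)
        # Jacobian of the softmax: diag(P) - P P^T (the N x N matrix in the text)
        dJ_dp = (np.diag(P) - np.outer(P, P)) @ dJ_dP
        p = p - gamma * dJ_dp                         # gradient step on the raw parameters
    return softmax(p)

# Toy check: sample 0's gradient aligns with the (synthetic) test gradient,
# so its weight grows while its opposite, sample 2, shrinks.
grads_train = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
grad_test = np.array([1.0, 0.0])
P = update_weights(np.zeros(4), grads_train, grad_test)
```

After the updates, the weight mass concentrates on the sample whose gradient direction agrees with the test gradient, which is exactly the behavior the corrupted-label experiment in Section VI relies on.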
We refer the reader to Algorithm 1 for the notation.

B. Physical setup

We also evaluated our method on a physical system, in our case a non-holonomic differential-drive robot, on the task of visual navigation. For the real-world experiments, we defined a task as the environment of a specific level of the building. Each level was visually very distinct from the others, and we interpreted an environmental seed as a unique source and destination location on a particular level of the building. Images obtained from a camera mounted on the robot were timestamped and synced with the action commands before being sent for training. We used ROS and TensorFlow to implement our experiments. We used a world model with the same modifications as in the simulator experiments. We collected 30 human-controlled trajectories, 10 for each task, and trained the V and M modules. The robot was then tested on the one remaining environment, which it had not seen before.

VI. OBSERVATIONS AND RESULTS

Compared to the state-of-the-art benchmark [13] on the car-racing simulator, our method is concerned not so much with the maximum score in a given episode as with how quickly the agent can learn an optimal behavior and adapt to a related unseen environment. Our main observations follow.

A. Generalization to unseen tasks

Apart from the training data, we also generated some environments as test tasks for the model. Though none of the components of the world model were trained on those tasks, our method enabled the world model to generalize to them. We also added uniform Gaussian noise to all the observations of the test tasks to test robustness. To compare our method, we used 2 baselines: a DAGGER baseline (θ*), trained by aggregating the data from all the prior tasks and the test task, and a fine-tuning baseline (θ_i*), which trained the policy only on the test task. Figure 7 shows that our method outperformed these baselines.
As a primary measure, we used the number of times the expert had to intervene to get the agent out of a vulnerable state, which we call an override. We also report the mean accuracies on different tasks for each baseline, so that the reader can see the relationship between accuracy and overrides as measures for comparing baselines. We argue that both are required in active imitation learning, as a model might have a high average accuracy but still perform poorly on some important states. For a quantitative comparison of the different baselines, refer to Table I.

B. Converging to local optima for train tasks

Usually in meta-learning the goal is for the classifier to generalize to unseen tasks from the prior information obtained from the train tasks. However, in some scenarios, like navigation, we also want the agent to perform well on the training tasks. After θ converged, we evaluated the policy on each training task, sampled with random seeds and with added Gaussian noise. Surprisingly, the agent performed suboptimally on every task. Since the world model was trained on the training tasks, naively aggregated data should have performed well; however, when there is a sufficiently large number of tasks, the model collapses to a local optimum. When we instead used our method on a specific train task, treated as a test task, for one iteration, it resulted in better performance.

C. Robustness to noise in demonstrations

Our algorithm works well even when there is noise in the collected demonstrations. During training, in each iteration of policy evaluation, we randomly selected 50% of the collected demonstrations and corrupted their action labels. Even in such scenarios, our algorithm remains robust, giving the corrupted samples less importance, i.e., lower p values, and performing well on the test task. Figure 6 shows the results.

D.
Visualization of the P values

Although we performed many experiments confirming the robustness of our algorithm, it is important to inspect the p values in each case to see which samples receive more importance. Note that each aggregated training sample, of which there are 24k in total, has a p value associated with it. In Figure 5, we provide a color map of the p values after φ_i converges. Although it is unclear how the samples with high p relate to the test task, we can see that the model is able to learn the new task from the prior tasks. In the same figure, we also show how the p values change with each trial, indicating that the sample importance changes as more information about the new test task is obtained.

E. Experiments in real-world

We ran our robot in the corridors of the Hedco Neuroscience building at the University of Southern California. Though the geographical and structural maps of each floor were similar, the visual features were very different, which made this setting well suited to our method. Pictures taken on different floors are depicted in Figure 3. Results obtained by testing the robot in different environments are shown in Table I.

VII. CONCLUSION

In this paper, we presented a meta-imitation learning algorithm which involves learning new skills from prior knowledge. We defined a task or skill as an environment having a specific data distribution attributed to time or situation. Our algorithm addresses applications that involve substantial covariate shifts by treating them as a meta-learning problem.

Fig. 7. Comparison of 2 different tasks (marked with red and blue) on which the model was evaluated. The plain, dotted and dashed curves represent the performance of the θ*, θ_i* and φ_i* baselines respectively. We can see that our method outperforms the other baselines. The y axis represents the cumulative number of overrides over the trials, and the x axis represents the timesteps of all the trials combined.
We have also shown how the proposed algorithm can be used to improve policy performance on a single task for a policy trained on a set of tasks. Some of our experiments, performed using a real robot, show how our algorithm can aid real-world scenarios as well. The results shown on the task of navigation support our assertions.
Clinical implications of Delphian lymph node metastasis in papillary thyroid carcinoma: a single-institution study, systemic review and meta-analysis

Background To evaluate the possible predictive value and clinicopathological characteristics of Delphian lymph node metastasis in papillary thyroid carcinoma. Methods A retrospective analysis of papillary thyroid carcinoma patients with Delphian lymph node metastasis at a single institution and a meta-analysis of literature reports were performed. Results In our own series, Delphian lymph node metastasis was detected in 19 (9.9%) of 192 papillary thyroid carcinoma patients and was significantly associated with tumor size ≥ 1 cm (P = 0.003), multifocality (P = 0.006) and extrathyroid extension (P < 0.001) in the multivariate analysis. Female sex was a protective factor for Delphian lymph node metastasis (P = 0.001). Delphian lymph node metastasis was highly predictive of further central lymph node metastasis (positive predictive value = 89.5%, negative predictive value = 67.6%) and moderately predictive of lateral lymph node metastasis (positive predictive value = 26.3%, negative predictive value = 95.4%). In this meta-analysis, there was a strong correlation between Delphian lymph node metastasis and aggressive clinicopathologic characteristics with regard to multifocality (P = 0.0008), bilaterality (P = 0.04), extrathyroid extension (P < 0.00001), lymphovascular invasion (P < 0.00001), further central lymph node metastasis (P < 0.00001) and lateral lymph node metastasis (P < 0.00001). Conclusions This single-institution observational study and meta-analysis identified that Delphian lymph node metastasis was significantly associated with unfavorable clinicopathological characteristics and had a strong predictive power for further disease in the central compartment. Trial registration The clinical study was retrospectively registered with the UMIN clinical trials registry (registry number: UMIN000033835).
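The predictive values quoted above follow from a standard 2×2 contingency table. The counts below are a hypothetical reconstruction consistent with the reported figures (19 DLN-positive patients among 192, PPV 89.5% and NPV 67.6% for further central lymph node metastasis), not values taken from the paper's tables:

```python
def predictive_values(tp, fp, fn, tn):
    """Positive and negative predictive value from a 2x2 contingency table.
    tp: test-positive with disease, fp: test-positive without disease,
    fn: test-negative with disease, tn: test-negative without disease."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical reconstruction: 17 of the 19 DLN-positive patients had further
# CLNM, and 117 of the 173 DLN-negative patients were free of further CLNM.
ppv, npv = predictive_values(tp=17, fp=2, fn=56, tn=117)
```

Note that, unlike sensitivity and specificity, PPV and NPV depend on the prevalence of nodal disease in the cohort, so these values do not transfer directly to populations with different baseline risk.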
Electronic supplementary material The online version of this article (10.1186/s40463-019-0362-7) contains supplementary material, which is available to authorized users.

Introduction

The Delphian lymph node (DLN), also called the prelaryngeal or cricothyroid node, is located in the fascia above the thyroid isthmus and lies between the cricoid and thyroid cartilages [1, 2]. The DLN receives afferent lymph flow from the larynx and the thyroid gland, which then flows towards the central and lateral lymph nodes [2]. Therefore, DLN status should be critically evaluated in patients with cancers involving the larynx, hypopharynx and thyroid. Most studies to date on the DLN address its significance in laryngeal and hypopharyngeal carcinoma, and emerging evidence suggests that a positive DLN is a poor prognostic factor in laryngeal and hypopharyngeal cancers [1, 3-9]. Unlike in laryngeal cancer, the clinicopathologic factors associated with a positive DLN in thyroid cancer are still not completely understood. To our knowledge, there are 9 reports addressing the predictive value of DLN metastasis in papillary thyroid carcinoma (PTC) [10-18], most of them published in recent years. The clinical data from some studies indicate that DLN metastasis is a predictor of further disease in the central compartment and in the lateral compartment. However, some investigators contend that the DLN is frequently found without extensive lymph node disease and even argue that DLN involvement is a misleading and unreliable sign [19, 20]. Because of these inconsistent results and the lack of meta-analyses systematically reviewing the significance of the DLN in PTC, we report the outcomes of a single-institution case series and present the first meta-analysis of the clinical characteristics of PTC patients with DLN metastasis.
Single-institution observational study

We retrospectively reviewed the medical records of 192 patients with a final diagnosis of PTC who underwent total thyroidectomy and central lymph node dissection (CLND), with or without lateral lymph node dissection (LLND), at Shanghai Changzheng Hospital (Shanghai, China) between July 2017 and August 2018. The protocol for this research project was approved by the local Clinical Ethics Committee and written informed consent was obtained from each participant. The clinical study was registered with the UMIN clinical trials registry (registry number: UMIN000033835) (Additional file 3). CLND was performed routinely on the affected side. CLND on the contralateral side was performed when any of the central lymph nodes were found to be suspicious on preoperative imaging or upon intraoperative inspection. LLND was performed only if preoperative fine-needle aspiration cytology confirmed evidence of metastasis. The soft tissue and lymph nodes between the thyroid and cricoid cartilage were excised in all 192 cases, routinely labeled as DLNs, and examined by frozen-section evaluation and separate histopathological examination. Most of the frozen sections were interpreted by three subspecialty pathologists, all of whom had extensive experience in the pathological diagnosis of PTC and lymph nodes.

Systematic review and meta-analysis

Electronic searches for relevant reports were performed in two databases (PubMed and Embase) for publications dated before August 2018. The search strategy used the following terms: (Delphian lymph node OR prelaryngeal lymph node OR cricothyroid lymph node) AND (thyroid tumor OR thyroid cancer OR thyroid carcinoma OR thyroid neoplasm). The abstracts of all potential articles, references, and related articles were reviewed according to their titles, and each article was independently assessed for eligibility using the inclusion criteria.
Inclusion criteria were as follows: (1) comparative studies of the clinicopathologic parameters of DLN-positive and DLN-negative PTC; (2) articles published in English before August 2018; and (3) exact and intact dichotomous- or continuous-type data with standard deviations. The literature search was independently performed by 2 authors; a third author decided whether to include a study when controversy occurred. Two investigators independently extracted and collected data using a standardized data-extraction protocol. For repeated publications, the latest data on the outcomes of interest were extracted; for example, Zheng et al. reported their results twice, at different periods of the same trial [18, 21]. Two authors (Bin Wang and Xing-zhu Wen) independently assessed the quality of the included studies according to the Newcastle-Ottawa Scale (NOS) [22]. Disagreements regarding methodological assessment were discussed and resolved through consensus.

Statistical analysis

For our single-institution experience, the database was exported to SPSS software (version 19.0; IBM-SPSS, Inc., Chicago, IL, USA). Continuous data were compared using Student's t-test and categorical variables were compared using Pearson's chi-squared or Fisher's exact test. Variables with p < 0.15 in the univariate analysis were included in the multivariate analysis. Differences were considered significant when P < 0.05. Regarding the meta-analysis, data extracted from the included trials were pooled with Review Manager software version 5.3 (Cochrane Collaborative, Oxford, UK). The odds ratio (OR) was used for dichotomous data, and the ORs were combined using a random-effects model. Heterogeneity among articles was quantitatively assessed using the Q test and the I² statistic [23]. Significant heterogeneity was defined as I² > 50% or a Q-test P value < 0.05.
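The random-effects pooling and heterogeneity statistics described above can be sketched as a plain DerSimonian-Laird calculation. This is an illustration, not the Review Manager code, and the example 2×2 tables are invented:

```python
import math

def pooled_or_random_effects(tables):
    """DerSimonian-Laird random-effects pooling of odds ratios from 2x2 tables
    (a, b, c, d) = (exposed events, exposed non-events, control events,
    control non-events). A 0.5 continuity correction handles zero cells."""
    logs, variances = [], []
    for a, b, c, d in tables:
        if 0 in (a, b, c, d):
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        logs.append(math.log((a * d) / (b * c)))          # log odds ratio
        variances.append(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf variance
    w = [1 / v for v in variances]                        # fixed-effect weights
    mean_fe = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    q = sum(wi * (li - mean_fe) ** 2 for wi, li in zip(w, logs))  # Cochran's Q
    df = len(tables) - 1
    c_factor = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_factor)                  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]            # random-effects weights
    mean_re = sum(wi * li for wi, li in zip(w_re, logs)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 statistic, percent
    return math.exp(mean_re), q, i2

# Invented example: two studies with similar effect sizes.
tables = [(10, 90, 5, 95), (20, 80, 10, 90)]
or_re, q, i2 = pooled_or_random_effects(tables)
```

When τ² = 0 (no detected between-study variance), the random-effects estimate reduces to the fixed-effect one, which is why the choice of model matters mainly when I² is large.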
Sensitivity analysis was applied by removing individual studies from the data set and evaluating the effect of their removal on the pooled OR. Publication bias was examined by Begg's funnel plot as well as Egger's linear regression test. Single-institution experience All patients underwent CLND and 13 patients underwent LLND. DLN metastasis was observed in 19 patients (9.9%) and only 1 patient (0.5%) had DLN metastasis without metastasis in other compartments. A total of 101 lymph nodes among the patients with DLN metastasis and 799 lymph nodes among the patients without DLN metastasis were detected. No significant difference existed between the 2 groups regarding the mean number of detected lymph nodes (P = 0.320). Histopathological examination proved 65 metastatic lymph nodes among the patients with DLN metastasis and 145 metastatic lymph nodes among the patients without DLN involvement. The mean number of metastatic lymph nodes among the patients with DLN metastasis was significantly higher than that among patients without DLN involvement (P < 0.001). In univariate analysis, female sex, age ≥ 45 years, tumor size, multifocality, bilaterality, ETE and Hashimoto's thyroiditis showed p < 0.15. These factors were thus included in the multivariate analysis (Table 2). The multivariate analysis revealed that tumor size ≥ 1 cm (P = 0.003), multifocality (P = 0.006) and ETE (P < 0.001) were independent risk factors of DLN metastasis. Female sex was a protective factor for DLN metastasis (P = 0.001). The sensitivity analyses revealed that the study of Zheng et al. [18] influenced the bilaterality result. After exclusion of this study, the prevalence of bilaterality failed to show a significant difference between DLN-positive PTCs and DLN-negative PTCs (OR, 1.43; P = 0.195; see Additional file 1: Figure S1). The study of Chai et al. [12] influenced the heterogeneity of further CLNM.
After exclusion of this study, the significant heterogeneity vanished statistically (P for heterogeneity = 0.22, I² = 29%) and no change in the result with regard to CLNM (OR, 12.02; 95% CI, 6.78-21.32, P < 0.00001; see Additional file 2: Figure S2) was observed. Other studies did not affect the pooled ORs. In the funnel plots and the Egger's regression tests, there was no evidence of publication bias (Fig. 5). Discussion This single-institution observational study and meta-analysis showed that DLN metastasis was less common in women with PTC but had a positive association with aggressive clinicopathological characteristics of PTC such as larger primary tumor, ETE, multifocality, and other compartment metastasis. The name "Delphian" originated from a priestess in Delphi who could foresee the future and was first used to name the lymph node by Raymond B. Randall, a senior student of Harvard Medical School, in 1948 [1,17]. As its name indicates, the DLN can predict the progression of the disease. A previous study [11] indicated that DLN positivity predicted a 9-fold higher frequency of LLNM, and a 40-fold higher frequency of any nodal disease. This meta-analysis revealed a strong correlation between DLN metastasis and aggressive clinicopathologic characteristics such as ETE, multifocality, lymphovascular invasion, and further central and lateral compartment metastasis. In our own series, tumor size ≥ 1 cm, multifocality and ETE were found to be independent risk factors of DLN metastasis. Previous studies also investigated the risk factors for DLN metastasis but the conclusions were inconsistent. Oh et al. [16] revealed that lymphovascular invasion and tumor size played key roles in the occurrence of DLN metastasis. Chai et al. [12] found that tumor location in the isthmus or upper third of the thyroid was a predictor for DLN metastasis. Tan et al. [17] identified capsular invasion as an independent risk factor for DLN metastasis by multivariate analysis. Zheng et al.
[18] reported that BRAF mutation was correlated with DLN metastasis. Kim et al. [14] demonstrated that thyroiditis contributes to inhibiting DLN metastasis. This single-institution study showed that female sex played a role in impeding DLN metastasis. The inconsistent results may be attributed to differences in sample size or demographics of the patients included in each study. In laryngeal cancer, the presence of DLN metastasis increases LLNM, resulting in a high recurrence rate and low survival rate [1,24]. In PTC with DLN metastases, lymph node metastases are detected in up to 95.9% of the central lymph nodes [16] and 47.2% of the lateral lymph nodes [14]. According to nodal staging for thyroid cancer, N1a refers to metastatic disease in the central compartment and N1b refers to metastasis to the lateral nodal chains. Because DLN positivity is predictive of lateral compartment disease, Delbridge et al. have suggested that nodal metastasis to the DLN should be upstaged to N1b [10]. DLN status has important implications in extending the scope of surgical procedures, planning radiotherapy, and determining outcome. Current methods for evaluating preoperative DLN status such as ultrasonography, computed tomography, or magnetic resonance imaging are imperfect [12,25] because the median size of the positive DLN is small [13]. Intraoperative frozen section is generally accepted as one of the sensitive and useful tools for evaluation of the nodal status of the DLN. The current American Thyroid Association guidelines [26] do not recommend prophylactic CLND for small tumors (T1 or T2 classification). However, even in cases of PTC with small tumors, there is a high rate of CLNM [27]. Therefore, the DLN could be sent for frozen section evaluation because of its predictive value for widespread nodal metastasis [13]. If the DLN is positive, CLND should be carefully considered even in clinically node-negative PTC.
There are few reports addressing the recurrence of PTC with DLN metastasis [12,16,18]. Studies showed that PTC recurrence was slightly higher in DLN-positive than in DLN-negative patients, although the difference did not reach statistical significance [12,16]. However, Zheng et al. [18] showed that DLN-positive patients had a significantly higher rate of unstimulated Tg ≥ 1 ng/ml than DLN-negative patients during a median follow-up duration of 14 months and 11 months for DLN-positive patients and DLN-negative patients, respectively. Metastasis to the DLN is a poor prognostic factor in many malignant neck cancers [24,25], and it is associated with several poor prognostic factors in PTC, including ETE [28], and a heavier nodal burden, in terms of number of metastatic nodes and node size [29]. These factors could act as confounders for the relationship between DLN metastasis and PTC prognosis [30]. To date, there is no published evidence or definitive study on the association between survival and DLN involvement in PTC. Owing to the limited data on long-term follow-up of DLN metastasis in PTC, its relationship with the recurrence and survival of PTC patients remains unclear. Therefore, further studies with longer follow-up periods are warranted to explore the prognostic significance of DLN metastasis in PTC. The present study had several limitations. First, this study was a 1-year retrospective study in a single institution. Some new findings in this study, such as female sex as a protective factor for DLN metastasis, might not be completely convincing and need more prospective studies with large sample sizes and relatively long-term follow-up periods for confirmation. Second, the sample sizes in the meta-analysis and the single-institution observational study were not well matched because the rate of DLN metastasis was low. In this study, the ratio of DLN-positive to DLN-negative patients was 19/173, which resulted in low statistical power.
Third, most studies were performed in Korea and China. Therefore, the results may not accurately reflect the clinical characteristics of PTC in other regions. Fig. 5. Funnel plot of standard error by log odds ratio. Conclusions The data of this single-institution observational study and meta-analysis demonstrate the clinicopathological characteristics of papillary thyroid carcinoma patients with Delphian lymph node metastasis and the ability of a positive Delphian lymph node to predict central lymph node metastasis and lateral lymph node metastasis. BW and X-zW wrote the manuscript. MQ made the critical revision of the manuscript. All authors read and approved the final manuscript. Funding The work was supported by the National Natural Science Foundation of China under Grant no. 81703854. Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Ethics approval and consent to participate The protocol for this research project was approved by the local Clinical Ethics Committee and written informed consent was obtained from each participant. Consent for publication Not applicable.
Research on the Economic Growth for Under-Developed Counties -- Based on Two-way Fixed-effects Model (2020 International Symposium on Water, Ecology and Environment; IOP Conf. Series: Earth and Environmental Science 690 (2021) 012065; doi:10.1088/1755-1315/690/1/012065) The county economy, especially the problem of poor counties, is currently a hot issue in China. At present, research on the key factors for the county economy is scarce, partly because it is difficult to obtain accurate and sufficient data. Because power consumption and industrial level are often highly correlated, and power consumption is recorded in real time and hard to falsify, this paper used spatial, electricity consumption and economic data of 66 Chinese counties, including 29 poor counties, from 2009 to 2016 to find the key factors for under-developed counties' economic growth by applying a fixed-effects regression model and machine learning models. The results show that for poor counties, though the 1st industry is still the fundamental industry for the county-level economy, the development of the 3rd industry has a significant positive impact on the local economy. However, the development of the 2nd industry, including recruiting large companies, and the input of electricity resources cannot effectively drive local economies, which may have suffered great losses due to the overcapacity elimination policy of recent years. The machine learning results support the above conclusions and suggest an obvious geographical cluster of poor counties, and the location, the development of the 1st and 3rd industries and the net income of rural residents can explain most of the difference between the poor and non-poverty counties. These conclusions can help the government lead the poor counties out of poverty, and the cluster of poor counties should be focused on. Introduction With the progress of poverty alleviation, the issue of economic growth for under-developed counties has attracted more and more attention.
As the foundation of China's economy, the economy of under-developed counties has significant impacts on the economic growth of China, and their successful development will greatly affect whether China can achieve a leap in, and sustainable growth of, its economy. There are several methods and ideas for finding factors that affect the economic growth of counties, and researchers have mainly focused on political, social and other macro indicators. For example, Liu examines the impacts of fiscal decentralization on long-run county economic development, and the result indicates that Chinese county development is frustrated by the fiscal decentralization system [1]. Wu used the difference-in-differences method to analyse the causal relationship between China's province-managing-county reform (PMC) and economic growth and found that the PMC has a positive effect on the economic growth of counties [2]. Yu employs a quasi-experimental design to argue that the peripheral counties along the upgraded railway lines experienced reductions in GDP and GDP per capita due to the concurrent drop of fixed asset investment [3]. Zhang et al. proved that county economic growth and county financial development have a positive U-curve relationship, based on the panel data of 1732 counties in China from 2013 to 2016 [4]. From the research above, it can be seen that current studies mainly focus on exploiting empirical models to investigate the effect of one or more economic factors on county economic growth. Nevertheless, few studies pay attention to the effect of industrial structure on the county economy, partly because it is difficult to obtain accurate and sufficient data.
On the basis of neoclassical economic growth theory, the industrial structure plays an important role in economic growth. Research on county industrial structure is therefore conducive to finding the reasons for the differences in economic growth between poor counties and non-poor counties. Due to the close correlation between modern industrial production and energy, electricity consumption is helpful for finding the key factors for under-developed counties' economies and their industrial structure. Moreover, power consumption is recorded in real time and hard to falsify, which makes it undoubtedly a great indicator of the economy, more direct and objective than other economic statistics. In practice, there are also many methods that apply electricity consumption to measure the economy, like the Li Keqiang Index, which has proved to be useful and practical [5]. Huang et al. focus on the electricity consumption of the county-level tourism industry and establish a forecasting model [6]. Shiu and Lam exploit the error-correction model to examine the causal relationship between real GDP for China and electricity consumption during 1971-2000 [7]. Wolde-Rufael also tests the long-run and causal relationship between GDP per capita and electricity consumption per capita for 17 African countries from 1971 to 2001 [8]. Though research on large observation units such as nations or provinces using electricity consumption is relatively abundant, there are few studies with large sample pools and small observation units such as counties in the field of economic structure and growth. Therefore, this paper uses big data on electricity consumption to analyse China's current targeted poverty alleviation and problems in the sustainable development of the county economy, and on this basis provides a new perspective on poverty issues. Data Description The raw dataset comes from the State Grid Corporation of China.
The dataset is organized as panel data with 30 electricity consumption variables covering many subdivided industries, including annual electricity consumption data of 70 industry subdivisions of 66 counties in Hebei, Henan and Hunan provinces from 2009 to 2016. Moreover, the corresponding economic, spatial and policy data of these counties were collected from the official statistics bureau of each region. In this paper, the growth rates of GDP and of 2nd industry added value are chosen as the dependent variables to measure the local economic outcome, and the longitude and latitude of each county are used to measure the spatial influence. Methods The fixed-effects regression model is a panel data analysis method that controls for effects that vary across individuals but not over time. It helps to control the unobserved heterogeneity [9]. In the individual fixed-effects regression model, each explanatory variable represents its marginal influence on the explained variable, while the effects of all other variables that are unobservable or not included in the regression model but affect the explained variable are controlled; they appear as a fixed effect that varies with the individual but not with time. Besides, economic data often contain a strong time trend, which motivates a time fixed effect. With the addition of the time fixed effect, the model becomes a two-way fixed-effects model, as shown in Figure 1. Based on the above analysis, the empirical model used in this study is a balanced-panel individual fixed-effects regression model. The regression uses a multiple stepwise regression method.
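The two-way fixed-effects estimator described above can be illustrated with a minimal numpy sketch. For a balanced panel, demeaning each variable first by county and then by period is equivalent to the two-way within transformation (x_it minus its entity mean minus its period mean plus the grand mean), after which OLS on the transformed data recovers the slope coefficients. This is a generic sketch with hypothetical variable names, not the authors' code; the paper's models were presumably estimated with standard econometric software.

```python
import numpy as np

def twoway_fe(y, X, entity, time):
    """Two-way fixed-effects (within) estimator for a balanced panel.

    Sequentially demeaning by entity and then by period is exact for
    balanced panels, where it equals the two-way within transformation.
    """
    def demean(z):
        z = np.asarray(z, dtype=float).copy()
        for ids in (entity, time):           # first entity means, then period means
            for g in np.unique(ids):
                m = ids == g
                z[m] = z[m] - z[m].mean(axis=0)
        return z
    # OLS on the transformed data; fixed effects are swept out, so no dummies needed
    beta, *_ = np.linalg.lstsq(demean(X), demean(y), rcond=None)
    return beta
```

On simulated data with known slopes and arbitrary county and period effects, the estimator recovers the slopes regardless of how large the fixed effects are, which is exactly the property the paper relies on.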
The multiple regression equation is constructed as follows:

y_it = u + β_0·Poverty_i + Σ_j β_j·Economy_ji(t-1) + Σ_k β_k·Electricity_kit + λ_i + μ_t + ε_it

where i is the i-th county, t is the t-th period, j/k indexes the j-th/k-th explanatory variable, y_it is the explained variable, Poverty_i, Economy_ji(t-1) and Electricity_kit are the explanatory variables, λ_i is the county fixed effect, μ_t is the period fixed effect, β_j/β_k are the partial regression coefficients, u is the regression constant, and ε_it is the random error. Data Cleaning The data set was first cleaned, including sorting the data format, eliminating null values and outliers, organizing variable labels and so on. Variables Clustering In the panel data analysis of this paper, too many explanatory variables can easily lead to problems of multicollinearity and over-fitting, so it is necessary to analyse and refine the independent variables and reduce the dimensionality. This study uses hierarchical clustering, with the Pearson correlation used to define the distance between variables. The 24 electricity sub-industry variables are clustered, and the dendrogram is shown in Figure 2. Based on the results, these 24 variables are grouped into five clusters, which are named by their main components in Table 1. It can be seen that Industry and infrastructure accounts for 85% of total industrial electricity consumption. The Pearson correlation between the clusters has been largely reduced compared to the raw data. Dummy Variables Besides, the longitude and latitude of the counties are used to measure the location of each county, and they are also grouped into 2 clusters. A dummy variable NORTH is created to represent the 2 location clusters. Notice that since 2014, the Chinese government has started to eliminate backward and overcapacity industrial sectors, which must have a large impact on industrial outcomes. Hence, a dummy variable POLICY is created to represent the policy effect since 2014.
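The variable-clustering step described above (hierarchical clustering of the electricity sub-industry variables with a Pearson-correlation-based distance) can be sketched as follows, assuming scipy is available. The function and data are illustrative, not the authors' code; the example uses two latent factors rather than the paper's 24 sub-industry series.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_variables(X, n_clusters):
    """Average-linkage hierarchical clustering of the columns of X,
    using 1 - Pearson correlation as the dissimilarity between variables."""
    corr = np.corrcoef(X, rowvar=False)
    dist = 1.0 - corr                      # highly correlated variables -> small distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform((dist + dist.T) / 2.0, checks=False)
    Z = linkage(condensed, method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```

Cutting the resulting dendrogram with `criterion="maxclust"` yields the requested number of groups (five in the paper), which can then be aggregated and labeled by their dominant components.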
OLS Linear Regression This paper first applied OLS linear regression for the variables G_GDP and G_GDP_2ND. The electricity variables are not lagged by one period because the lagged terms are not significant. The results are reported in Table 2 and show that the two models are significant, with R-squared values of 0.69 and 0.35 (test statistics 48.67*** and 11.92***, respectively). Note: variable Xt-1 is represented as X(-1); a variable name beginning with G indicates its growth rate; a variable name containing CON indicates its electricity consumption. * p < 0.1. ** p < 0.05. *** p < 0.01, same below. Comparing the results of the two models, it can be found that the per capita net income of rural residents has a significant negative effect on the added value of the 2nd industry, while the effect on GDP is not significant, and nor is the interaction of poverty and income. This may reflect the fact that the progress of county-level industrialization and urbanization in the non-poor counties occupied part of the agricultural resources, such as the rural labor force and land, which makes the development of the 2nd industry negatively correlated with the income of rural residents. Besides, it is notable that the growth of total electricity consumption has a significant positive effect on the growth of GDP in poor counties but a significant negative effect in non-poor counties, which indicates that electricity resources still have relatively high marginal utility for economic growth in poor counties. It can also be seen that NORTH and POLICY both have significant effects in these two models, and in model 2, both POLICY and NORTH have relatively smaller β coefficients, which indicates that the growth of 2nd industry added value suffers a more negative impact from location and the overcapacity policy. It can be inferred that the growth rate of GDP is largely influenced by the county's location and the time period, and the simple linear regression model cannot well measure such impact.
Hence, county and period fixed-effects regression was conducted to better measure the marginal impact of each variable. Two-way Fixed Effects Model This paper used a two-way fixed-effects model to fit the data. 203 observations of poor counties and 259 observations of non-poverty counties were regressed respectively. Table 3 shows that it is suitable to establish cross-section and period fixed-effects models for G_GDP and G_GDP_2ND based on the F test result. Note that the effects of the variables NORTH and POLICY are absorbed by the county and period fixed effects, so they are excluded in this part. The results of Models 3 and 4 are reported in Table 4, and the period fixed effect of Model 3 is shown in Figure 3. From Table 4, it can be seen that the two models are generally significant and reach better R-squared values of 0.77 and 0.58, which greatly improves the explanatory ability. In 2015, the central government of China paid much more attention to the elimination of overcapacity and outdated industry. Also, the period fixed effect of Model 3 shows that the growth rate of GDP did suffer a common reduction starting in 2012, reaching its lowest point in 2015 (see Figure 3), which accords with common sense. According to the regression results, for both poor and non-poor counties, the growth rate of GDP is largely influenced by the growth rate of 1st and 3rd industry added value of the last period, but it has nothing to do with that of the 2nd industry. However, the growth of the 3rd industry has a significant negative impact in non-poor counties but a generally significant positive impact in poor counties, which means the development of the 3rd industry may be a more effective path for poor counties' economies compared with the 2nd industry. Moreover, in Model 4, the increase in the number of industrial enterprises above the state-designated scale has a significant coefficient of 0.08 for non-poor counties but a summed coefficient of -0.02 for poor counties.
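The F test used above to justify adding cross-section and period fixed effects compares the residual sum of squares of pooled OLS against the dummy-variable form of the two-way fixed-effects model. A minimal numpy sketch, with hypothetical names; the paper itself reports only the test outcome in Table 3.

```python
import numpy as np

def fe_redundancy_ftest(y, X, entity, time):
    """F test of pooled OLS against the two-way fixed-effects model
    written in dummy-variable form (one category dropped per factor)."""
    n = len(y)
    def rss(design):
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        r = y - design @ beta
        return float(r @ r)
    ones = np.ones((n, 1))
    rss_r = rss(np.hstack([ones, X]))                  # restricted: pooled OLS
    ent_d = (entity[:, None] == np.unique(entity)[1:]).astype(float)
    tim_d = (time[:, None] == np.unique(time)[1:]).astype(float)
    design_u = np.hstack([ones, X, ent_d, tim_d])      # unrestricted: two-way FE
    rss_u = rss(design_u)
    q = ent_d.shape[1] + tim_d.shape[1]                # number of restrictions tested
    df2 = n - design_u.shape[1]
    F = ((rss_r - rss_u) / q) / (rss_u / df2)
    return F, q, df2
```

A large F statistic relative to the F(q, df2) critical value indicates that the county and period effects are jointly significant, i.e. the fixed-effects specification is preferred to pooled OLS.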
That is to say, the increase in large industrial enterprises cannot effectively bring the expected industrial growth to the local economy. From the perspective of the electricity variables, the growth of total electricity consumption has a significant negative impact on the growth of GDP in non-poor counties, but not in poor counties. Besides, the growth of total electricity consumption and of industrial consumption are not significant in Model 4. This abnormal phenomenon indicates that county-level electricity consumption may fluctuate considerably across years and that the efficiency of electricity in producing economic outcomes is relatively low. Moreover, the growth of electricity consumption of the service and transportation industries is significant in both non-poor and poor counties in Model 3. In non-poor counties, electricity consumption in the service industry pulls up the growth of local GDP, but in poor counties its impact is statistically reduced to zero, while that of transportation goes in the opposite direction. Therefore, for poor counties, it is important to pay more attention to their transportation. Machine Learning Model To better predict the variable POVERTY, this paper used 30 variables including location, electricity consumption and macroeconomic statistics (each sub-sector variable has its log-transformation and growth-rate data). In this case, a Z-score transformation was applied to all variables. 75% of the data set is randomly selected as the training set, and five common machine learning methods, including the XGB Classifier, LGBM Classifier, Random Forest Classifier, Support Vector Classifier and Decision Tree Classifier, are used to train models. The results of the well-tuned models are shown in Table 5. The results show that the XGB Classifier has the best performance and robustness among the 5 methods, so its normalized feature importances are obtained, as shown in Figure 4.
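The preprocessing for the classification task (Z-score transformation of all variables followed by a random 75/25 train/test split) can be sketched as follows. This is a hypothetical numpy sketch, not the authors' code; in practice one might prefer fitting the scaling statistics on the training rows only to avoid leakage, but the sketch follows the order described in the text.

```python
import numpy as np

def zscale_split(X, y, train_frac=0.75, seed=0):
    """Z-score every column over the full sample (as described in the text),
    then randomly split rows into training and test sets."""
    X = np.asarray(X, dtype=float)
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # mean 0, std 1 per column
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Xz))              # random row order
    cut = int(round(train_frac * len(Xz)))
    tr, te = idx[:cut], idx[cut:]
    y = np.asarray(y)
    return Xz[tr], Xz[te], y[tr], y[te]
```

The scaled training split is then what each of the five classifiers (XGBoost, LightGBM, random forest, SVC, decision tree) would be fitted on, with the held-out 25% used to compare their performance.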
According to the feature importances, Spatial_North has the largest feature importance of more than 0.13, and log_gdp_1st, log_gdp_3rd and log_Per_capita_net_income_of_rural_residents also have relatively large feature importances of more than 0.08. Conclusions In this paper, to find the key factors for under-developed counties' economies, a two-way fixed-effects model and machine learning models were established to better fit the data. Compared to the linear models, the results are improved. The map of key factors is drawn in Figure 5. For poor counties, increases in the electricity consumption of the service and transportation industries can drive the growth of local GDP, but they should also try to control total power consumption, that is, to improve electricity efficiency. The growth of the 3rd industry has a similarly large positive impact on GDP, which is in line with economic laws. Besides, the 1st industry is still the fundamental industry for county-level economies. It is important to note that the increase in the number of big industrial enterprises cannot effectively bring the expected industrial growth to the local economy, and may even do the opposite. Another interesting finding is that electricity resources, known as the blood of industry, are not necessarily correlated with county-level industrial outcomes due to the low development level of these counties and the policy of overcapacity elimination. Figure 5. Map of key factors for under-developed counties' economy. Note: red lines represent factors with significant effects on poor counties. Besides, the machine learning results confirm the conclusions of the regression, and suggest that the spatial factors are significant for identifying a poor county, from which it can be inferred that there is an obvious cluster of poor counties.
The feature importance of the XGBoost Classifier shows that the location, the development of the 1st and 3rd industries and the net income of rural residents can explain most of the difference between the poor and non-poverty counties. These conclusions can help the government lead the under-developed counties to better develop their economies and find their own economic drivers. After all, the development strategies of large economies may not be suitable for county-level economies.
Tough Love Lessons: Lateral Violence among Hospital Nurses (1) Background: Workplace violence is a growing social problem among many professions, but it particularly affects the health sector. Studies have mainly focused on evaluating user violence toward health professionals, with less attention being paid to other sources of conflict, such as co-workers themselves. There are different manifestations of this violence in what has been called a context of tolerated or normalized violence among co-workers. However, its effects are far from tolerable, as they have an impact on general health and job satisfaction and contribute to burnout among professionals. Based on this idea, and following the line of the previous literature, nursing staff are a population at high risk of exposure to workplace violence. For this reason, the present study aims to evaluate exposure to lateral violence, or violence among co-workers, in nursing staff in public health services and the relationship of this exposure with some of the most studied consequences. (2) Methods: A cross-sectional associative study was carried out in which scales of workplace violence (HABS-CS), burnout (MBI-GS), job satisfaction (OJS), and general health (GHQ-28) were applied to a sample of 950 nursing staff from 13 public hospitals located in the southeast of Spain. (3) Results: The results show that nursing staff have a high exposure to violence from their co-workers, which is more common in male nurses. Greater exposure is observed in professionals with between 6 and 10 years of experience in the profession; in our sample, less experienced or younger nurses did not receive greater violence. A positive correlation is observed with high levels of burnout and a negative correlation with general health and job satisfaction.
(4) Conclusions: The results of this work contribute to increasing the scientific evidence on the consequences of a type of workplace violence frequent among nursing staff, and to which less attention has been paid relative to other prevalent types of violence. Organizations should be aware of the importance of this type of workplace violence, its frequency and impact, and implement appropriate prevention policies that include the promotion of a culture that does not reward violence or minimize reporting. A change of mentality in the academic environment is also recommended in order to promote more adequate training of nursing staff in this field. Introduction Violence and harassment in the world of work are considered a violation of human rights and a threat to equal opportunities. They are defined as incidents in which professionals are abused, threatened, or assaulted/intimidated in a context related to their work [1]. This violence can be triggered by a set of individual, psychosocial, and work-related factors. Likewise, a workplace culture where abuse is usual and accepted carries the danger of normalizing violence [2,3]. According to the WHO, at least 25% of workplace violence occurs in healthcare settings [4], but, although much has been published on user violence, there is not yet enough evidence on harassment between co-workers in this setting. This violence follows different vectors. Vertical violence occurs between a superior and an employee and can be top-down or bottom-up [5], the former being the most prevalent [6,7]. Lateral violence consists of harassment behaviors between co-workers with equivalent status and may involve physical or verbal violence [5,8]. Lateral violence can also occur in different ways, from person-directed attacks (personal lateral violence) to social isolation (social lateral violence) or work-related behaviors (workplace-related lateral violence), the latter not always being seen by the recipient as a form of violence [9].
The few studies on this matter have demonstrated its high prevalence in healthcare and the nursing profession [4,6] and its deleterious effects [10,11], yet several gaps remain to be filled. Background Emergency departments generally show higher levels of violence [12,13], but, although the mental health area poses a high risk [14], all health professionals face a high rate of violence and harassment. The difference seems to be that those with greater proximity and/or prolonged exposure to the public are at higher risk, namely physicians [15], but particularly nurses [12,16]. Differences have been found according to individual factors, for instance personality and gender, but also regarding work-related factors such as job tenure, shifts, type of work, and department (emergency department, primary care) [17][18][19]. Sex remains a topic of discussion, since some results point to greater exposure to violence among male professionals [7,16], while others find this variable to have little or no influence [13,20,21], including a meta-analysis of 65 articles [22]. Nonetheless, studies have stated that female professionals are more often a target of psychological harassment behaviors [6] and males suffer more physical violence [15]. Age is also a factor of vulnerability, since younger professionals are more often harassed [5], while personality (e.g., negative affectivity) may also contribute to incivility and worker-to-worker violence [23,24]. Psychological violence among co-workers seems to be more frequent among shift workers than among fixed-schedule workers [25], with differences between shifts [26]. A 10-European-country study in 2008 with 39,898 nursing professionals found that harassment by superiors is very frequent (21%) and that male nurses, younger nurses, and nursing aides are at higher risk of violence compared to female nurses, older nurses, and registered nurses [11].
However, violence between co-workers has been scarcely studied compared to other types of workplace violence. Currently, the problem is far from solved, since the 2015 Eurofound European Working Conditions Survey found that 17% of all male and female participants were victims of workplace harassment and 7% of all participants suffered some sort of exclusion or discrimination, with an increasing tendency considering previous rates (2005-5%; 2010-6%) [4]. In 2020, due to the pandemic, an overall change of working habits led to more people working from home, but physical exhaustion at the end of the day was more often reported by young and female respondents (35% of women), according to the 2020 Eurofound report. Additional challenges appeared during these difficult times, such as feeling more at risk of contracting COVID-19 due to their jobs (nearly 80% of employees), being highly exposed to emotionally demanding situations (reported by more than 30% in France, Lithuania, Portugal, and Spain), as well as facing a lack of personal protective equipment (PPE), as reported by 3 out of 10 employees [27]. This is common ground for health professionals, who are identified in this report as the most likely to face high levels of emotional demands. However, the report did not evaluate workplace violence, so it is unknown how the recent changes in the workplace experienced by health professionals have affected this phenomenon. It is important to note that these figures are just the tip of the iceberg, meaning that not all incidents are reported, whether due to lack of knowledge, fear, or other reasons [28]. There is evidence that nearly 60% of new nurses leave their job due to some form of verbal abuse from a co-worker in their first 6 months of work [29] and that violence is generally strongly related to the intention to leave the nursing profession [30], change institution, and burnout [11]. 
The reasons found in the literature for the greater vulnerability of nurses to lateral violence are their proximity to and constant interaction with patients and their work in coordinated teams [31]. In addition, nurses can be at higher risk of violence from co-workers for being new to the job, having been promoted (when others find it unfair), having relational difficulties, receiving special attention from physicians, or working in understaffed conditions [32]. Occupational hazards that arise from this close interaction vary according to services, individual characteristics, and work-related issues. A European study found that fixed night-shift workers are at a higher risk of violence, which is associated with a higher incidence of burnout, while working part-time seems to be associated with fewer violent events [11]. A latent class cluster analysis by Einarsen et al. [33] points to the importance of identifying target groups more prone to suffering behaviors that range from incivility to aggression. The former, defined as "rude and discourteous actions of gossiping and spreading rumors, and refusing to assist a co-worker" [34], is more frequent, but the latter, seen as "repeated, unwanted harmful actions intended to humiliate, offend, and cause distress in the recipient", is characterized by its severity and reiteration [34]. In a study of type-III (worker-to-worker) violence involving 185 perpetrators, all of them health professionals, most were female (74.4%), 60% worked full-time, and they had a mean age of 45.2 years and a mean job tenure of 11.7 years in the hospital. Nurses are common victims of violence but also common perpetrators [35]. The origins of this violence have been related to the patriarchal historical background of nursing, with its internalized sexism, due to the oppression that it generates, both individually and collectively [1]. 
This is supported by a more recent content analysis of nurses in California, which states that lateral violence may be used as a form of informal power as a result of organization-related feelings of oppression, meaning that those who feel oppressed (or undervalued) may try to regain power by hurting their colleagues [2]. Given the high prevalence of this type of violence, it is important to note its short- and long-term repercussions, the latter being the most frequently observed. Victims often feel humiliated and undervalued, which affects their relationship with the work environment. It also affects their personal relationships, causing physical and emotional fatigue and depressive, anxiety, or stress symptoms [36][37][38]. Specifically, lateral violence between nurses further results in low self-esteem, feelings of powerlessness, and negative patient outcomes [39], besides contributing to burnout [40] and job dissatisfaction [13,41]. A wide range of mental health consequences is expected from this phenomenon, such as anxiety, vulnerability, guilt, anger, sadness, and peer blaming following violence exposures [42], as well as fear, shame, helplessness, chronic fatigue, depression, sleep disorders [43], and, in severe cases, post-traumatic stress disorder (PTSD) and increased suicide risk [10]. Physical problems such as musculoskeletal disorders and a heightened risk of cardiovascular disease have also been reported [44,45]. Harassment in the workplace has been proven to cause negative repercussions on job satisfaction, to the extent that the more such situations workers faced, the less satisfied they were [10,46], affecting the quality of care [30,47]. Additionally, reduced or lacking teamwork quality seems to fuel the occurrence of violence, leading to burnout symptoms [11,48]. 
Lateral violence may be triggered by the enmity or animosity between co-workers that gradually turns into persistent and long-lasting harassment and is often perceived as a reversal of a previous relationship of friendship and trust. It is also a group phenomenon directed toward one individual and causes severe anxiety to the victim [49]. The main hypothesis of the present study is that the violence hospital nursing staff perceive from their co-workers (lateral violence) is related to the same sociodemographic and work variables found in the literature on user violence. Differences are expected concerning sex, age, area of work, time in the profession, and time in the job position. It is further anticipated that the consequences most commonly associated with workplace violence in the literature (burnout, job satisfaction, and health problems) vary according to a higher or lower perception of lateral violence. Aims Since nursing professionals are a population at a high risk of violence, this study addresses the specific individual factors related to the work context. The specific goals of this study focus on the lateral violence perceived by nursing professionals in public hospitals in the Region of Murcia, Spain, as follows: (1) identifying differences associated with higher exposure to personal-, social-, and work-related violence according to socio-demographic and socio-occupational variables; (2) empirically obtaining subgroups depending on their exposure to violence, and analyzing them according to their reported levels of burnout, satisfaction, and general health. Materials and Methods This is a cross-sectional associative study using self-report questionnaires. The target population consisted of nursing professionals from public hospitals in the southeast of Spain. 
Sample A random block sampling was performed, yielding a total sample of 950 nursing professionals from 13 public hospitals located in the southeast of Spain, 6 of which were considered large (more than 200 beds) and 7 medium or small (200 beds or less). Considering the characteristics of the sample (Table 1), the age range of participants was from 30 to 50 years, with a mean age of 39.43 years (SD = 9.65). Most were women (77.8%) and were married or cohabiting (63.2%). Concerning work characteristics, 54.3% had been in the nursing profession for 0 to 5 years (with a mean of 14.02 years) and at least 54% had been in the same job position for the last 5 years (mean 7.31 years, SD = 8.35). From the studied sample, 20.3% worked in surgery, 17% in internal medicine, 14.3% in the emergency department, 6.9% in day care, 5.5% in mental health, and 14.8% in other facilities. Instrument A 76-item protocol including the above-mentioned sociodemographic variables (such as age, sex, and marital status) and work-related variables (length of time in the profession, hospital, length of time in the current job position, and type of unit) was used. Additional scales were used to measure lateral workplace violence, burnout, job satisfaction, and general health. Health Workers' Aggressive Behavior Scale-Co-Workers and Superiors (HABS-CS) The HABS-CS was created by Waschgler et al. [50] and assesses hostility behaviors perceived by health professionals from co-workers. It encompasses 10 items with 6 response options (1 = never to 6 = daily) grouped into three factors: personal factors, social factors, and work-related factors. The original study presents an internal consistency of 0.82 for the personal scale, 0.79 for the social scale, and 0.72 for the work-related scale, with a total Cronbach's alpha of 0.864. 
In the present study, a total reliability of 0.87 was obtained for the scale, with 0.82, 0.82, and 0.71 for the personal, social, and work factors, respectively. Maslach Burnout Inventory-General Survey (MBI-GS) Created by Schaufeli et al. (1996), this study used the Spanish version, translated and validated by Gil-Monte [51]. It includes 16 items grouped into three dimensions: emotional exhaustion, professional efficacy, and cynicism, with 5, 5, and 6 items, respectively. Responses range from 0 (never) to 6 (always). Gil-Monte's study presents an internal consistency of 0.83 for emotional exhaustion, 0.72 for professional efficacy, and 0.73 for cynicism [51]. In the present study, internal consistencies of 0.85 for emotional exhaustion, 0.70 for cynicism, and 0.85 for professional efficacy were found. Overall Job Satisfaction (OJS) This scale, originally developed by Warr, Cook, and Wall [52], was adapted to Spanish by Pérez and Fidalgo [53], the version used in this study. It encompasses 15 items organized into two subscales: intrinsic satisfaction (factors related to responsibility and work recognition) and extrinsic satisfaction (organizational work-related factors such as work schedule). Responses range from 1 (very dissatisfied) to 7 (very satisfied). The original study presented an internal consistency of 0.85 to 0.88 for extrinsic factors and 0.74 to 0.78 for intrinsic factors. The present study yielded 0.84 for the former and 0.70 for the latter. General Health Questionnaire (GHQ-28) Proposed by Goldberg and Hillier (1979), this scale measures general health. Its Spanish version, used in the present study, was adapted by Lobo, Pérez-Echevarría, and Artal [54], including 28 items grouped into 4 subscales: somatic symptoms (somatic GHQ), anxiety and insomnia (anxiety GHQ), social dysfunction (dysfunction GHQ), and depressive symptoms (depression GHQ). 
Responses are provided in four options from zero to three (0-3), going from lower to higher intensity. The original study's internal consistency was 0.78 for somatic GHQ, 0.85 for anxiety GHQ, 0.75 for dysfunction GHQ, and 0.78 for depression GHQ. The present study yielded 0.79 for both somatic GHQ and anxiety GHQ, 0.71 for dysfunction GHQ, and 0.78 for depression GHQ. Procedure For sampling purposes, the authors contacted the directors and supervisors of the participating hospitals to provide them with detailed information on the present study and its goals. Upon acceptance, a meeting was arranged with the supervisors of the different units (as fellow researchers), during which the study protocols were delivered. These included an informative note, the above-mentioned scales, and instructions regarding their completion, informed consent, and delivery to the research team in a sealed envelope. A code was ascribed to each worker and protocols were randomly assigned to 50% of the sample. The protocol was delivered by fellow researchers who later managed its reception, in a sealed envelope without identification, within a maximum of two weeks. Protocols not returned within this period were considered lost. A response rate of 70.48% was obtained. The present study, designed under STROBE guidelines, was approved by the Ethics Committee of the University of Murcia and the board of directors of each hospital. The authors declare no conflict of interest. Data Analysis Data analysis for the present study was performed using SPSS version 25. The sample distribution (mean and standard deviation) and response percentages according to the study variables of the ad-hoc questionnaire were analyzed. A Student's t-test was used to compare means across dichotomous variables and ANOVA for the analysis of variance of factors with more than two levels. Tukey's test was used for post hoc analysis to delve into such differences. 
For this purpose, effect sizes were also estimated: Cohen's d for mean differences, partial eta squared (η²) for variance analyses, and r for post hoc comparisons. The Pearson correlation test was used to complete the analysis of the relationships between the scales used. Results At least 59.2% of the sample had been exposed to violence from a co-worker at least once in the last year. Specifically, 51% of the sample was exposed to lateral violence of a personal nature (e.g., "Some co-workers spread false rumors about me"), 37.3% of a social nature (e.g., "Some co-workers have stopped talking to me"), and 21.3% work-related (e.g., "Some co-workers deliberately accuse me of other people's mistakes"). Personal Lateral Violence Co-worker violence of a personal nature displayed significant differences according to the sex of the respondent, being higher in male participants (t = 2.16, p = 0.03, d = 0.17). Further, workers within the age range of 30-50 years seemed to be at higher risk, followed by those younger than 30 and those 50 or older (F = 3.23, p = 0.02, d = 0.01). Concerning time in the profession, those with 6 to 11 years in the profession perceived more violence from co-workers, followed by those with 12 to 20 years, less than 5, and more than 20 (F = 2.61, p = 0.03, d = 0.01). Time in the job position also yielded significant differences, its influence being higher in the 11-to-15-year interval, followed by 6 to 10, 1 to 5, more than 15, and less than a year (F = 2.90, p = 0.01, d = 0.02). All differences found revealed a low effect size (Tables 2 and 3). Social Lateral Violence No significant differences were found concerning sex and type of hospital (Table 4). The figures obtained for this factor, though, yielded significant differences related to the type of unit, although again with a low effect size, showing a higher prevalence in external consultations and outpatient units (F = 2.19, p = 0.04, d = 0.02) (Table 5). 
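The statistical procedure used throughout these analyses (Student's t-test with Cohen's d for dichotomous variables, one-way ANOVA with eta squared for multi-level factors) can be sketched as follows in Python with SciPy; the group means, standard deviations, and sample sizes below are purely hypothetical illustrations, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical violence-perception scores for a dichotomous variable (sex)
male = rng.normal(2.1, 0.8, 200)
female = rng.normal(1.9, 0.8, 700)

# Student's t-test plus Cohen's d (mean difference over pooled SD)
t, p = stats.ttest_ind(male, female)
pooled_sd = np.sqrt(((len(male) - 1) * male.var(ddof=1)
                     + (len(female) - 1) * female.var(ddof=1))
                    / (len(male) + len(female) - 2))
cohens_d = (male.mean() - female.mean()) / pooled_sd

# One-way ANOVA across three age groups plus eta squared
groups = [rng.normal(m, 0.8, 300) for m in (1.8, 2.2, 1.9)]  # <30, 30-50, >50
f, p_anova = stats.f_oneway(*groups)
pooled = np.concatenate(groups)
grand_mean = pooled.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((pooled - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total  # equals partial eta squared in a one-way design
```

With only one factor in the model, eta squared and partial eta squared coincide; in factorial designs they differ because partial η² excludes variance explained by the other factors.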
No significant differences were found concerning the other variables studied (Tables 4 and 5). Work-Related Lateral Violence Concerning work-related violence between co-workers, the sex of the respondent and type of hospital did not present significant differences (Table 6). On the other hand, significant differences were found according to the type of unit (Table 7). The units with the highest exposure to this violence were outpatient and external consultation, followed by surgery, other, mental health, maternal and child care, emergency department, and internal medicine (F = 2.8, p = 0.01, d = 0.02) (Table 7). Discussion The goals of the present study were to measure the lateral violence perceived by nursing professionals and identify differences associated with higher exposure to personal, social, and work-related violence according to sociodemographic and socio-occupational variables. Additionally, an analysis according to the reported levels of burnout, job satisfaction, and general health was also envisioned. The results of the present study confirm that nursing professionals are highly exposed to co-worker violence, in line with other studies [6,11,33,35,55]. This violence is perceived by a high percentage of nurses in their workplace as disruptive and inappropriate behavior by one employee toward another, whether in an equal or inferior position. This is partly due to their demanding and highly supervised environment [56,57]. This violent experience causes stress and negatively impacts psychological health [5,58], leads nurses to consider leaving their jobs [58][59][60][61], or adversely impacts patient care and attention if they stay in the job [62]. Sex The observed differences in personal lateral violence concerning sex are in line with the literature, which identifies male nurses as more at risk of workplace violence [63,64] and, more specifically, verbal violence [65]. 
On the other hand, some authors place female nurses at higher risk depending on the hospital setting [66] and find cultural differences in exposure to different types of violence (e.g., more physical violence against men in cultures where women are usually seen as frail and in need of protection) [55]. Others found no significant sex differences in lateral/horizontal violence [67]. Age, Time in the Profession and Time in the Job In the present sample, with a mean of 14.02 years in the nursing profession, those most affected by personal lateral violence were those with 6 to 10 years in the profession, which is below the sample average. A qualitative approach sustained that perpetrators target those newer to the job and younger because they are easy targets and, being less experienced, they are less likely to be able to defend themselves [60], consistent with the tendency found by other research [58]. In addition, acts of a personal nature such as gossiping, boycotting opportunities, tough love, or sink-or-swim are common incivilities in young nurses' education, in the context of a permissive culture of vertical or lateral violence [60], but can also be found in nurses who are not necessarily young, but younger than their leaders [68]. In the present study, it was not the youngest nurses who perceived more workplace violence, which, in general, challenges studies that indicate a higher risk among younger people of experiencing violence from co-workers [69]. The differences found between the present study and those in the literature require further research to determine whether they are a typical characteristic of this population or due to another feature. A mixed-method approach suggests that, when younger nurses work together with older ones, the former are more proactive and straightforward while the latter tend to be more conservative, which is a conflict generator [70]. 
As we see it, lateral violence in healthcare settings is strongly a nurse-to-nurse phenomenon that is frequently based on institutionalized tolerance to it and on the idea that hierarchy needs this type of behavior to maintain civility in the workplace [59,70]. It also lies in a culture of legitimized worker-to-worker violence in healthcare [32] and is frequently found in Latin European countries, where it is related to low power distance [71]. Norton et al. [6] found that nurses suffer more vertical than horizontal violence (74.2% vs. 25.8%, p < 0.029), which confirms the normalization of the use of violence in healthcare by people in a higher position to control those in a lower one. Despite the current results, the term formulated by Meissner that "nurses eat their young" does not seem out of place when we face this top-down violence tendency used as a means of authority [72]. The results for time in the profession and in the job position also differ from data found in the literature on exposure to workplace violence, since those with a medium length of time were the most affected by peer violence (neither the newest in the profession or job position nor those with the most time in both). Although these variables have a low effect size, it is possible to contextualize that, given the age range of the participants in the present study, the age group most prone to victimization during training [68] was not represented here. A study using the NAQ-R states that co-worker violence such as withholding information that affects performance, having opinions ignored, or being forced to work below the individual's competence is often identified by nurses with an average time in the profession of 20 years [73]. Dellasega [74] points out that, besides nurses under training, newly hired nurses with experience, independently of age, are also often targeted by co-workers. 
On the other hand, the present data indicate that older nurses (aged over 50) are less likely to be harassed, which can be related to perpetration being more prevalent among nurse leaders and staff toward younger nurses [68]. Additionally, the risk of co-worker violence was found to decline as nurses' length of service and age increased in a cluster analysis by Karatuna et al. [71]. Shift or Fixed Schedule and Setting Concerning higher perceived co-worker violence in the case of shift work, the present results are corroborated by previous studies. Shift-working nurses have been proven to be at higher risk of vertical and lateral violence than fixed-schedule workers [6,75]. The results for the social lateral violence factor point to a higher prevalence in external consultations and outpatient units, although without significant differences; the literature, in contrast, refers to a greater impact of lateral violence in the Emergency Room (ER) [55,76]. Burnout, Job Satisfaction, and General Health As expected, personal-, social-, and work-related lateral violence are significantly negatively correlated with both extrinsic and intrinsic satisfaction and positively correlated with dimensions of burnout and poorer health quality, as happens with other types of workplace violence, such as user violence [12,19,77]. The present results confirm other data found for burnout, namely positioning co-worker violence as a predictive factor for burnout (β = 0.37, p < 0.001) and holding a negative correlation with job efficiency (r = −0.322, p < 0.01) [78]. This may be related to the interference of violence in workers' wellbeing, representing an additional source of stress, especially in the long run, when it becomes toxic (known to be health-disruptive) and negatively impacts self-regulating body functions and psychological health [3]. 
In an ER-based study, 91.7% of respondents stated that lateral violence decreases their job satisfaction, with 53.3% considering a transfer to another unit or hospital, or leaving their job [79]. Absenteeism has also been shown to be 1.5 times higher in comparison to non-victimized peers (95% CI: 1.3-1.7), and the intention to leave rises to 78.5% among bullied nurses with a length of service lower than 5 years [78]. This study is not without limitations, so the data reported in it should be interpreted with caution. The cross-sectional design does not allow us to establish causal relationships, limiting us to the description and comparison of the data. The differences found between our study and previous studies may be due to specific circumstances that were not taken into account, so it would be interesting to propose designs that allow us to study them in depth. Furthermore, although measures were taken to encourage response and anonymity, self-reported questionnaires can be another source of bias, especially when they collect sensitive information that may affect the worker's work environment. Finally, although the sample size is large, the sample belongs to a specific region, which may lead to differences with other populations both nationally and internationally. Conclusions Experience of personal lateral violence in nurses is strongly positively correlated with higher levels of burnout and poorer psychological health indicators. This type of workplace violence also negatively impacts both extrinsic and intrinsic job satisfaction. A culture fostering peer violence in nursing exists when organizations allow, ignore, or reward such behaviors and leaders minimize complaints, which suggests that combating this phenomenon requires organizational support and bystander empowerment policies [59]. This supports the common idea that nurses "eat their young", a mindset that is also enabled by nursing academic training [72]. 
We highlight that victimized nurses may not report lateral violence out of fear of retaliation or because they see it as necessary for nursing education, encoded in a "sink or swim" mindset. Similarly, other nurses and health professionals may, as bystanders, witness this phenomenon but lack the skills to respond or fear being the next in line to be harassed [80]. In general terms, intervention, prevention, training, or support programs for professionals are focused on user violence. Although that phenomenon is of special interest, in our opinion, both the design of these proposals and their implementation must differ qualitatively and quantitatively when the objective is to reduce violence among co-workers. For this reason, our study provides evidence of this reality, which, as has been observed, may be affecting both the health of professionals and their work performance, and facilitates the planning of specific programs aimed at this objective. For subsequent studies, it would be interesting to know the biopsychosocial profile of these professionals, allowing the design of even more specific programs. It is also advisable to explore which of the variables studied for the reduction of violence by users against healthcare personnel are effective for the reduction of violence among co-workers. Finally, longitudinal studies grouping these two sides of the same reality could serve as a basis for improving the work environment of healthcare professionals. Institutional Review Board Statement: The study was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of Universidad de Murcia. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to the confidentiality agreement with the participating organizations.
Case Report: Invasive Fungal Infection and Daratumumab: A Case Series and Review of Literature Life expectancy of multiple myeloma (MM) patients has improved in recent years due to the advent of anti-CD38 monoclonal antibodies in combination with immunomodulators and proteasome inhibitors. However, infection-related morbidity and mortality remain high and represent a major concern. This paper describes the "real life" risk of invasive fungal infections (IFI) in patients treated with daratumumab-based therapy and reviews the relevant literature. In a series of 75 patients we observed only three cases of fungal pneumonia. Unfortunately, the early signs and symptoms were not specific for fungal infection. Diagnostic imaging, microbiology, and patient history, especially previous therapies, are critical in the decision to start antifungal treatment. Recognising the subgroup of MM patients at high risk of IFI can increase the rates of diagnosis, adequate treatment, and resumption of MM treatment. INTRODUCTION Infection is one of the major complications and causes of death in patients with multiple myeloma (MM) (1). This is due to immunosuppression and hypogammaglobulinemia caused both by the disease itself and by treatment regimens (2). Historically, invasive fungal infections (IFIs) (3,4) were uncommon in the course of MM treatment; however, recent literature has highlighted a specific risk for this infection in the era of new therapies. From 2017 to 2021, over 75 MM patients were treated with daratumumab-based regimens in the Hematology and Bone Marrow Transplantation Unit at San Raffaele Institute, Milan, according to approved clinical indications. 
We report 3 cases of IFI; 1 probable and 2 possible infections according to the definitions from the European Organization for Research and Treatment of Cancer and the Mycoses Study Group Education and Research Consortium (EORTC/MSGERC) (3,4): CASE REPORT 1#: PROBABLE PULMONARY ASPERGILLOSIS A 57-year-old man was admitted for progressive respiratory failure and fever. He had been diagnosed with a relapsed/refractory IgGλ MM and was undergoing the first cycle of carfilzomib-lenalidomide-dexamethasone therapy as the 7th line of anti-MM treatment. Five years earlier he had undergone an allogeneic stem cell transplant complicated during the engraftment phase by fungal infection (probable pulmonary aspergillosis). Previous lines of therapy also included bortezomib, lenalidomide, pomalidomide, and 9 months of single-agent daratumumab as the 6th line of therapy. He was treated per cycle schedule with dexamethasone for more than 10 months (equivalent dose: 0.5 mg/kg/day of prednisolone). The last daratumumab infusion had occurred 40 days before admission. He was no longer receiving secondary mould-active prophylaxis and did not have graft-versus-host disease (GVHD). Upon admission, piperacillin-tazobactam was started and, at the same time, respiratory support was provided with non-invasive ventilation. Empiric oseltamivir was added in consideration of the seasonal flu epidemic. The patient had progressive disease, with severe lymphocytopenia (0.2 x10 9 /L) but a normal neutrophil count. Immunoglobulin levels were low (IgG 3.45 g/l, IgA and IgM 0.3 g/l), with normal renal function and no anemia. C-reactive protein (CRP) at admission was 2857 nmol/L (normal value < 47 nmol/L), while at the moment of IFI diagnosis CRP was 660 nmol/L. 
Lung computerized tomography (CT) scan showed numerous bilateral peribronchial areas with increased parenchymal density and ground glass, particularly in the left lower lobe, with areas of parenchymal consolidation at the right base, not excavated, suggesting inflammation (Figure 1). We microbiologically documented Influenza A (H1N1) infection associated with pulmonary aspergillosis. Serum aspergillus antigen (AGASP) was high (1.07, normal values <0.5) and sputum culture was positive for Aspergillus flavus. Therapy with intravenous (iv) voriconazole was started, with a progressive improvement in dyspnoea, pulmonary imaging, and inflammatory markers and a reduction of respiratory support requirement. AGASP levels rapidly decreased until disappearance. The patient continued oral voriconazole for another 60 days. At discharge, lymphocytopenia was resolved, with a lymphocyte count of 2.3 x10 9 /L and persistent immunoparesis (IgG 3 g/l). He received high-dose iv immunoglobulins (IVIg) as substitutive therapy during hospitalization, for the following 4 months, and during the winter period in subsequent years. There was no recurrence of IFI during subsequent lines of therapy. CASE REPORT 2#: POSSIBLE PULMONARY ASPERGILLOSIS We report the case of a 59-year-old man with IgGk MM undergoing treatment with daratumumab-lenalidomide and dexamethasone as the 6th line of therapy (the dexamethasone dose was reduced by 50% after 4 months of therapy for better patient compliance). He had relapsed 7 years after an allogeneic stem cell transplant not complicated by GVHD and had been previously treated with bortezomib, thalidomide, and lenalidomide. After nine months of daratumumab-based treatment, he was admitted to the Haematology Department with gastroenteritis, fever, and dyspnoea. Immunoglobulin levels were low (IgG 4.04 g/l, IgA and IgM 0.1 g/l), with a normal full blood count (Hb 111 g/L, neutrophils 2.7 x10 9 /L, lymphocytes 1.1 x10 9 /L). 
CRP at admission was 900 nmol/L (normal value < 47 nmol/L). More than 6 months earlier he had achieved a very good partial response (VGPR), with negative imaging for bone lesions. He had received a 0.35 mg/kg/day prednisolone-equivalent dose for the 150 days before IFI diagnosis. He was started empirically on antibiotic therapy consisting of iv metronidazole, azithromycin and linezolid but experienced progressive respiratory failure and persistent fever. Lung CT showed small bilateral pleural effusions, areas of pulmonary thickening with air bronchogram suggestive of inflammation, small pulmonary thickenings with peribronchial distribution, areas of increased density with a ground-glass appearance, and shaded micronodules (Figure 2). Based on these findings, iv voriconazole was started 72 hours after antibiotics. Bronchoalveolar lavage (BAL) was not performed because of worsening clinical conditions. There was no microbiological evidence of IFI or viral reactivation, and blood cultures were negative. With antifungal therapy there was progressive improvement in respiratory failure, cough and pyrexia; a follow-up CT scan showed complete resolution of the nodules. Empiric oral voriconazole was continued for 2 months, with no recurrence of signs of IFI even during further lines of MM therapy. CASE REPORT #3: POSSIBLE PULMONARY ASPERGILLOSIS A 75-year-old man receiving daratumumab-lenalidomide-dexamethasone as first-line therapy for IgAκ MM was admitted for fever, hypoxia and decreased consciousness. Blood tests showed a normal neutrophil count (3.4 × 10^9/L), lymphocytopenia (0.5 × 10^9/L) and severe hypogammaglobulinemia (1.3 g/l) that had not been present at MM diagnosis. CRP at admission was 647 nmol/L (normal value < 47 nmol/L). The patient had no pneumological comorbidities, respiratory function was unremarkable before starting MM therapy, and he was on antibiotic prophylaxis with levofloxacin during the first 3 cycles of therapy.
He was treated with 3 cycles of MM therapy, with the dexamethasone dosage reduced by 50% according to age (0.27 mg/kg/day of prednisolone for more than 90 days), achieving a biochemical VGPR. Meropenem and linezolid were started but, 4 days later, respiratory support was escalated to non-invasive ventilation. CT scan showed bilateral pleural effusion, ground-glass areas and areas of parenchymal consolidation. Oral posaconazole was added to broad-spectrum iv antibiotics (Figure 3) a few days before BAL, achieving therapeutic blood levels. Serum AGASP was persistently negative. BAL was performed during antimicrobial therapy, with no microbiologically documented infection. Very low copies of cytomegalovirus (CMV) were detectable in the BAL sample, but these were judged insufficient for a diagnosis of pulmonary CMV infection. CMV DNA on whole blood was negative and the patient was not treated with antiviral therapy. The patient underwent orotracheal intubation due to worsening gas exchange. Broad-spectrum antibiotics were continued for 10 days, while empiric antifungal therapy was continued for almost 2 months. There was progressive improvement, with oxygen weaning and suspension of all antimicrobial drugs. A follow-up CT scan showed regression of all ground-glass and consolidation areas, the lymphocyte count increased to > 1.5 × 10^9/L, and the patient recovered and was able to resume everyday activities. After this favourable evolution, the patient was able to safely restart daratumumab-lenalidomide-dexamethasone 2 months later. He was not on secondary antifungal prophylaxis, and the surveillance lung CT scan at 4 months from the IFI was negative. To date, after 5 further cycles of MM therapy, there has been no IFI recurrence; the patient has continued substitutive IVIg according to blood levels (median IgG 3.2 g/l). DISCUSSION Infections are among the major causes of morbidity and mortality in MM patients (13).
Susceptibility to infections in MM is multifactorial, including hypogammaglobulinemia and aberrations of dendritic cell function and B- and T-cell immunity (14,15). The importance and interconnected responses of the innate and adaptive immune systems in protection against IFI are being widely investigated (15). Patient characteristics such as old age, multiple comorbidities, disease state and immunosuppressive treatments confer major risk and increase the severity of infections (14). In past years, the incidence of fungal infections (especially aspergillosis) was reported as significant in MM treated with conventional chemotherapy, with a mortality of 50% (16). Another study described 98 cases of neutropenia-related invasive aspergillosis (IA) in MM patients after chemotherapy or autotransplant, with a 63.4% response to antifungal therapy (17). Since the advent of biological therapies, IFI epidemiology has changed. A retrospective study on lymphoproliferative diseases showed a 5.6% rate of IFI in 248 MM patients; all IFI were IA (18): first-line therapy with PIs and further lines of therapy with IMiDs were equally distributed (8 cases vs 6 cases). The SEIFEM-2004 study evaluated the incidence and outcome of IFI in haematological malignancies in Italy: 1616 myeloma patients were included, with 7 patients diagnosed with IFI (0.5%, of whom 4 were mould infections) (19). Notably, the incidence of IFI after allogeneic stem cell transplant (SCT) can be as high as 20%, with a mortality rate of 50-80%, compared to 2-6% in the autologous SCT setting (20). The French SAIF network identified 5 MM patients who received allo-SCT with IA in a total population of 424 allotransplants (21). Indeed, IA often occurs as a result of cumulative immunosuppression, neutropenia and prolonged use of steroids, which are part of all therapeutic regimens for MM (often 40 mg or 20 mg of dexamethasone per week).
Clinical manifestations are reported to be atypical, with micronodules, ground-glass opacities and tree-in-bud infiltrates (22). A single-centre study published in 2015 investigated IFI in MM patients treated with novel agents (IMiDs and PIs): the rates of invasive mould infection and IA were 0.8% and 0.3%, respectively. IFI rates were reported to be 2.2-2.5% in relation to the use of autotransplant as consolidation, with a mortality of 44%. Multivariate analysis showed that the only risk factor for IFI was having received more than 3 lines of therapy, with an IFI rate of 15% in this setting. The authors noted that, despite the lack of mould-active prophylaxis, the rates of IA and mould infections were low and IFI occurred mostly during disease progression and in patients with a median of 5 lines of therapy (23). Another recent single-centre study reported a 3.5% incidence of proven or probable IFI in MM patients, with high early mortality. Patients were treated with PIs, IMiDs, conventional chemotherapy and auto- or allo-SCT. Of the 22 IFI reported, 31.8% were mould infections, and among these 71% were pulmonary IA. Multivariate analysis showed that light-chain disease, low haemoglobin level, low serum albumin and previous allogeneic stem cell transplant were associated with IFI (24). Randomized clinical trials (RCTs) on daratumumab in relapsed/refractory MM documented an incidence of grade 3 and 4 infections of 21.4% and 28.3%, respectively, with a 9% rate of grade 3-4 pneumonia (6,7). An analysis of RCTs with daratumumab in first line demonstrated that the risk of infection is increased especially in patients with age ≥ 75 years, elevated baseline alanine aminotransferase, high LDH and low albumin levels (25).
Moreover, in addition to previously reported risk factors, treatment with daratumumab reduces Natural Killer (NK) cells, other CD38-expressing immune cells and cytotoxic T lymphocytes (26-28), providing a biological explanation for a possibly increased risk of infection in this population, especially viral infection. Some evidence also indicates that NK cells play an important role in the antifungal host response, with direct fungal damage and the release of multiple cytokines that activate the immune system (29). Although the role of daratumumab-impaired NK and T cell responses in fungal infections is not fully demonstrated, it may contribute to an increased risk of IFI in these patients, although this risk cannot be separated from the concomitant use of steroids and other biological therapies. A retrospective study evaluated the incidence of infections in patients treated with daratumumab-containing regimens: the rate increased from 26% with daratumumab as a single agent to 56% when it was combined with other agents (30); no data on IFIs were available. Another "real world" study on rates of infections and severe lymphopenia in 100 patients treated with daratumumab showed only 1 patient with IFI (fungal meningitis) (31). The authors showed a higher rate of severe lymphopenia in daratumumab-based regimens combined with IMiDs, with higher rates of serious infection in this patient population. A recent study also evaluated the role of hypogammaglobulinemia in daratumumab-based therapies, both in relapsed/refractory MM patients and during first-line treatment (88% and 12%, respectively). Daratumumab causes a rapid decrease in uninvolved free light chains and immunoglobulin levels, with a nadir within 2-4 months. The authors demonstrated in multivariate analysis that decreased poly-IgG levels after treatment and high-risk cytogenetics were associated with a higher risk of infection. Infections were mainly respiratory and often self-resolving.
There was a 1.2% incidence of invasive fungal infections (1 patient treated with daratumumab monotherapy and 1 with daratumumab plus a PI) (32). Some evidence shows that administration of intravenous immunoglobulins at substitutive doses can reduce the infection rate (33), but the impact of this practice is not clearly established, especially for patients in the first line of treatment. Due to the low rate of IFI in patients with MM, there is currently no consensus on the role of antifungal prophylaxis, especially mould-active prophylaxis (1,34,35). The study by Teh et al. (23), showing a 15% risk of developing an IFI after 3 or more lines of treatment, suggests the opportunity to consider surveillance and antifungal prophylaxis in high-risk patients. On the other hand, patients who receive high-dose chemotherapy and develop severe mucositis may require anti-yeast prophylaxis (35). We presented 1 case of probable pulmonary aspergillosis and 2 cases of possible pulmonary fungal infection. According to the EORTC/MSGERC consensus (4), our patients had more than 2 host factors for IFI, which are well recognized not only as risk factors but also as a clear predisposition to IFI. All patients received a significant dose of steroids for more than 60 days. Two patients were heavily pre-treated and had undergone previous allogeneic stem cell transplantation. Two patients were receiving immunoglobulin replacement therapy due to low levels possibly related to daratumumab therapy or MM itself. All patients had moderate to severe lymphocytopenia that resolved during the IFI episode. Over 4 years, we have observed only 3 cases of IFI among 75 patients treated with daratumumab for MM, two of whom had previously received an allo-transplant. The incidence to date is thus 4% (3/75); however, we need to extend the observation period to evaluate the impact of daratumumab in the first line of treatment, in patients treated continuously for more than 3 years, and in the whole MM population.
Based on our observations and on published data, even if there is a biological rationale, it is currently impossible to establish whether patients treated with daratumumab have a higher risk of IFI or whether the risk is the sum of the various host factors that MM patients usually accumulate. The evaluation of IFI risk is also complex because patients receive various treatment classes in combination, even in previous lines of therapy, thus adding up the specific infection risk of each class. The possible roles of IVIg infusions and of the lymphocyte-to-neutrophil ratio in the prevention of infections, especially IFI, in MM patients need to be studied extensively. Published data and our single-centre experience confirm, at the moment, that primary mould-active prophylaxis is not recommended in the whole MM population. The role of antifungal prophylaxis needs to be established case by case, considering all host factors. Nevertheless, the cases presented underline the importance of early recognition of the signs and symptoms of IFI, especially in high-risk MM patients, suggesting active serum AGASP surveillance in relapsed/refractory MM with persistent severe lymphopenia and a role for early lung CT scan in persistent fever without another microbiological explanation. Further studies are necessary to better define the epidemiology of IFI in this setting and to clearly identify a subpopulation at higher risk. CONCLUSIONS IFI in MM patients is a rare complication that has historically occurred in heavily pre-treated patients with important immunosuppression related to therapies and progressive disease. There is no current evidence of a directly increased risk of IFI in daratumumab-treated patients: at the moment, the majority of patients have relapsed/refractory MM, in which we cannot separate the effects of previous and concomitant cytotoxic or biological therapies, concurrent neutropenia and hypogammaglobulinemia, autologous or allogeneic HSCT, and the underlying disease itself.
Currently, there is no strong evidence on which MM population can benefit from antifungal prophylaxis, especially mould-active prophylaxis. A risk-adapted selection of the population, taking into consideration tumour and host risk factors, will help the clinician in the management of suspected IFI. Prophylactic immunoglobulin infusion is currently suggested to reduce the risk of infections in patients with hypogammaglobulinemia. However, the progressive increase in the use of daratumumab-based therapies, adding a novel IFI risk factor, will possibly change the epidemiology of fungal infections and result in a different antimicrobial approach. Further "real life" observations are necessary to understand the impact of anti-CD38 therapy and to better recognize patients at risk. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
Effects of Multicomponent Exercise Training on the Health of Older Women with Osteoporosis: A Systematic Review and Meta-Analysis This study aimed to analyze the effects of multicomponent exercise training in older women with osteoporosis. We conducted a systematic review following the PRISMA guidelines, registered in PROSPERO (number CRD42022331137). We searched the MEDLINE (via PubMed), Web of Science, Scopus, and CINAHL databases for randomized experimental trials that analyzed the effects of physical exercise on health-related variables in older women with osteoporosis. The risk of bias in the studies was verified using the Cochrane Collaboration tool, and the Jadad scale was used to assess the methodological quality of the studies. Fourteen randomized controlled trials were included, with a total of 544 participants in the experimental group and 495 in the control group. The mean age of all participants was 68.4 years. The studies combined two to four different exercise types, including strength, aerobic, balance, flexibility, and/or functional fitness training. The practice of multicomponent training, with an average of 27.2 weeks, 2.6 sessions per week, and 45 min per session, showed improvements in strength, flexibility, quality of life, bone mineral density, balance, and functional fitness and reduced the risk of falls in older women with osteoporosis. Multicomponent training was shown to be effective in improving health-related variables in older women with osteoporosis. Introduction The world population has shown an abrupt increase in older people relative to the total population since the mid-twentieth century. Aging tends to be accompanied by a loss of bone and muscle mass and an increase in the percentage of fat due to the reduction of sex hormones, especially anabolic steroids [1,2]. In this sense, reductions in bone mineral density (BMD) can lead to classifications of osteoporosis as mild, moderate, or severe.
About 200 million people have osteoporosis. In the next three decades, the number of people with this disease is expected to increase by up to three times. Women have a lower BMD and a higher risk of fractures from falls due to the reduction in estrogen and the occurrence of menopause. Other functional losses can occur, such as reduced strength and muscle mass, balance, and visual capacity, which can increase the risk of falls [3]. The risk of fractures increases with osteoporosis, followed by morbidity, due to the reduction of the bone mineral component. Mortality is directly related to increased hip fractures. Research Question We based the research question and strategy of our study on the population, intervention, comparison, and outcome (PICO) model, often used in evidence-based practice and recommended for systematic reviews [16]. Therefore, the population was older women with osteoporosis, the intervention was multicomponent exercise training, the control was the group of participants that did not practice multicomponent exercise training, and the outcome was health-related variables. Therefore, the final PICO question was "What are the effects of multicomponent exercise training on health-related variables in older women with osteoporosis?". Risk of Bias Analysis The risk of bias of each included RCT was assessed with the Cochrane Collaboration tool, available at https://training.cochrane.org/handbook/, accessed on 10 April 2022. This tool consists of 7 domains: (1) generation of the random sequence; (2) allocation concealment; (3) blinding of evaluators and participants; (4) blinding of outcome evaluators; (5) incomplete outcomes; (6) reports of selective outcomes; (7) report on other sources of bias. Each domain has the risk of bias classified as "high", "uncertain", or "low". The final score is assigned with the highest rating among the domains evaluated in each study [17].
Two authors independently performed the risk of bias assessment of each included study, and a third researcher was consulted in case of divergences. Methodological Quality Analysis The Jadad scale was used for the analysis of the methodological quality of the RCTs. This instrument has 3 items with a total of 5 points: (1a) the study was described as randomized; (1b) the randomization was accurately performed; (2a) the study was a double-blind trial; (2b) the blinding was properly performed; (3) the study described the sample loss. The score can vary from 0 to 5. Studies with a score ≤ 3 are considered at high risk of bias. Two researchers conducted the methodological quality analysis. Any divergences in the analysis were sent to a third researcher for consensus [18]. Data Collection Process Data from the included publications were independently extracted by two researchers. Disagreements were resolved in a consensus meeting with a third researcher. The following variables were extracted: authors, year of publication, country, characteristics of the study population (age, sample size, and BMD), and intervention data, including general and specific exercises, intervention duration (weeks), intensity and volume of training (duration of the training session, in minutes, and frequency, in times per week), evaluation, and outcomes for variables related to physical and mental health. Meta-Analysis We used the Review Manager 5.4.1 program, available at http://tech.cochrane.org/revman, accessed on 25 October 2022, to analyze the effects of multicomponent exercise training on the health of older women with osteoporosis. Meta-analyses were performed when two or more studies could be pooled. As the variables were continuous, we used the inverse-variance statistical method with a random-effects analysis model. The effect measure was the difference between the means with a 95% confidence interval from the studies.
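The inverse-variance, random-effects pooling of mean differences described here can be illustrated with a short sketch. The function below is a minimal implementation of the DerSimonian-Laird random-effects model (the method underlying RevMan's default random-effects analysis); the per-study mean differences and standard errors passed to it are invented for illustration and are not data from the included trials.

```python
import math

def random_effects_md(mds, ses):
    """Pool mean differences (mds) with standard errors (ses) using the
    DerSimonian-Laird inverse-variance random-effects model."""
    w = [1.0 / se ** 2 for se in ses]  # fixed-effect (inverse-variance) weights
    sum_w = sum(w)
    md_fixed = sum(wi * d for wi, d in zip(w, mds)) / sum_w
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (d - md_fixed) ** 2 for wi, d in zip(w, mds))
    df = len(mds) - 1
    c = sum_w - sum(wi ** 2 for wi in w) / sum_w
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights incorporate tau^2, then pool again
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]
    md_re = sum(wi * d for wi, d in zip(w_re, mds)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    ci95 = (md_re - 1.96 * se_re, md_re + 1.96 * se_re)
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0  # I^2 inconsistency (%)
    return md_re, ci95, i2

# Illustrative (made-up) per-study mean differences and standard errors
md, ci, i2 = random_effects_md([-1.8, -0.9], [0.30, 0.25])
print(f"pooled MD = {md:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```

Under this sign convention, a negative pooled mean difference favours the experimental group, matching the reading of the QUALEFFO-41 forest plot described in the Results.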
The meta-analysis and distribution of the studies were analyzed by the weight of each variable in the meta-analysis. Evidence Level Assessment Two independent researchers used the grading of recommendations assessment, development and evaluation (GRADE) approach to evaluate the evidence level for each investigated outcome. The quality of evidence can be assessed by four classification levels: high, moderate, low, and very low. RCTs start with high quality of evidence, and observational studies begin with low quality of evidence. Five aspects can decrease the quality of the evidence: methodological limitations, inconsistency, indirect evidence, inaccuracy, and publication bias. Contrariwise, three aspects can increase the quality of the evidence: effect size, dose-response gradient, and confounding factor [19]. Results In total, 919 studies were found following the proposed research methodology (MEDLINE via PubMed = 416; Scopus = 226; Web of Science = 108; CINAHL = 169). After using the selection criteria, 14 articles were included in the qualitative analysis and four studies provided data to be included in the pooled analysis (Figure 1).
Table 1 shows the risk of bias of the included RCTs assessed using the Cochrane Collaboration tool. Of the 14 studies included in the present systematic review, 13 (92.85%) presented a low risk of bias and 1 study (7.15%) presented an uncertain risk of bias because it did not report how the participants were randomized [20]. Table 2 presents the analysis of the methodological quality of the RCTs by the Jadad scale. The studies showed a high risk of bias (score ≤ 3). In the studies, randomization occurred in a simple way, despite a satisfactory score for the description of sample loss and randomization. Double-blinding could improve the methodological quality of the studies. In Table 1 (domains: 1, randomization; 2, allocation concealment; 3, blinding of participants; 4, blinding of the evaluators; 5, incomplete outcomes; 6, selective outcome reporting; 7, other sources of bias; plus an overall judgment), Burke et al. [20] was rated uncertain for randomization and uncertain overall, with all other domains low; Carter et al. [7], Dizdar et al. [21], Evstigneeva et al. [22], Filipović et al. [23], García-Gomáriz et al. [24], Halvarsson et al. [25], Lord et al. [26], Murtezani et al. [32], Olsen and Bergland [27], Paolucci et al. [28], Preisinger et al. [29], Stanghelle et al. [30], and Nawrat-Szołtysik et al. [31] were rated low in all domains and overall. Table 3 presents the years, countries, mean values and standard deviations of age, sample size, and BMD of the participants of the studies included in the present systematic review. Interventions from the included studies comprised a total of 1186 participants, with 691 participants in the experimental group (EG) and 495 in the control group (CG). The mean age of participants in the EG and CG across all studies was 68.4 years. The studies included in this review were developed in different countries on different continents. All participants were over 50 years of age. Publication years ranged from 1996 to 2021. Table 4 shows the intervention and training volume of the studies. Twelve studies had an EG and a CG, while 2 studies used only an EG. The CG participants did not perform physical exercises, except in the studies of Dizdar et al. [21], García-Gomáriz et al. [24], and Paolucci et al. [28]. The EG participants performed strength, aerobic, balance, flexibility, and/or functional fitness exercises.
The duration of the interventions ranged from 4 to 96 weeks, with 20 to 60 min per training session and a frequency of 2 to 5 sessions per week. Table 5 presents the data on the evaluation and results of the included studies. The evaluation was divided into between two and four time points according to each study. Functional fitness, BMD, and balance appeared most frequently in the included studies. Variables such as muscle strength, agility, quality of life, flexibility, pain assessment, and cardiorespiratory fitness were also analyzed and showed significant post-intervention increases (p < 0.05). The effect size (d) in the last column should be interpreted as follows: weak (<0.2), moderate (0.2 to 0.79), or strong (>0.8) [33]. Figure 2 shows the results of the meta-analyses of the studies that used the QUALEFFO-41 to evaluate quality of life. The effect size was calculated by the standardized mean difference (SMD) with a 95% confidence interval (CI). In calculating the effect size, a negative sign means greater effects in the EG compared to the CG. In the forest plot, lines on the left side of the graph denote participants who received the multicomponent training intervention and presented significant positive changes compared to control participants. The average effect size of all RCTs is represented by the diamond and should be interpreted equally. There was no significant difference in QUALEFFO-41 (95% CI: −2.06 to −0.69), with inconsistency I² = 95% and p-value = 0.33. Table 6 shows the level of evidence of the included studies, which was considered high according to the GRADE tool. This means that there is moderate confidence in the estimated effect. Discussion The present study aimed to analyze the effects of multicomponent training on health-related variables of older women with osteoporosis. Increases in muscle strength, balance, cardiorespiratory fitness, and functional fitness were reported in the studies included in the present systematic review.
The included studies (n = 14) combined a minimum of two and a maximum of four different exercise types, involving strength, aerobic, balance, flexibility, and/or functional fitness training. The analysis of the 14 studies showed that older women with osteoporosis who practiced multicomponent training, with an average of 27.2 weeks, 2.6 sessions per week, and 45 min per session, improved strength, flexibility, QoL, BMD, balance, and functional fitness, and reduced the risk of falls. Burke et al. [20], Lord et al. [26], and Murtezani et al. [32] verified increases (p < 0.05) in isometric muscle strength of the lower limbs in knee, hip, and ankle flexion and extension movements with the application of the knee extension, leg press, and back extensor strength tests. Murtezani et al. [32] reported increases (p < 0.05) in handgrip strength and lower limb strength. Filipović et al. [23] found an increase (p < 0.05) in lower limb muscle strength with the sit-to-stand test used to assess this physical quality. Cardoso et al. [12] also reported increased muscle strength in upper and lower limb resistance exercises in a 12-week multicomponent program. However, Carter et al. [9] found no changes in lower limb muscle strength in the knee extension strength test. Balance was the most analyzed variable in the studies included in this systematic review. Carter et al. [7] reported increases (p < 0.05) in balance using the Berg balance scale, while Murtezani et al. [32] found no changes in this variable using the same test. Burke et al. [20] and Halvarsson et al. [25] reported improvements (p < 0.05) in balance through COP velocity, directional control, balance performance, walking speed with a dual task, fast walking speed, advanced lower extremity physical function, timed up and go (TUG), and Bretz stabilometer measurements. Lord et al. [26] and Carter et al. [7] found no differences (p > 0.05) in balance with the sway test and the composite balance score. Similarly, Dizdar et al.
[21] and Filipović et al. [23] used the TUG test to assess balance and found no significant differences. Evstigneeva et al. [22] investigated flexibility and found no significant differences (p > 0.05). Nevertheless, increases in flexibility (p < 0.05) were reported by Olsen and Bergland [27] in the functional reach test. Increases in QoL (p < 0.05) were observed in five studies: one [28] analyzed this variable with the Shortened Osteoporosis Quality of Life Questionnaire, and four [21,22,30,31] used the 41-item Quality of Life Questionnaire of the European Foundation for Osteoporosis (QUALEFFO-41): pain, activities of daily mobility, jobs around the house, mobility, leisure social activities, general health perception, and mental function. On the other hand, Carter et al. [7] found no differences in the assessment of QoL in the EG with the same instrument. For the variable functional fitness, significant increases (p < 0.05) were reported in five studies [22,27,30-32], while the study by Carter et al. [7] did not present significant changes in this variable (p > 0.05). Different tests were used to assess functional fitness (figure-of-eight test, sit-to-stand, weight transfer, six-minute walking test, maximum walking test, and functional reach test). However, the TUG test appeared most frequently in the evaluation of the functional fitness variable. Multicomponent training has been shown to be effective (p < 0.05) in improving the functional autonomy of older women [10], as has resistance training with a frequency of two or three times a week [6]. Few studies have investigated falls. A reduction in the frequency of falls (p < 0.05) was reported by Olsen and Bergland [27], Halvarsson et al. [25], and Filipović et al. [23] using falls efficacy scale tests. Lord et al.
[26] used resistance training for 5 weeks and found no differences in BMD (p > 0.05) compared with the CG, but three studies [24,29,32] reported increases (p < 0.05) in BMD in the lumbar spine, forearm, and total BMD. A possible explanation may be the longer intervention time used in these studies, all with more than 40 weeks. The study of Borba-Pinheiro et al. [6] evaluated BMD, functional autonomy, muscle strength, and QoL in 52 postmenopausal women using different types of resistance training (RT), one performed twice a week (RT2) and the other performed three times a week (RT3). Both training programs (RT2 and RT3) showed positive results after 13 months of intervention when compared to the CG, using the Osteoporosis Assessment Questionnaire (OPAQ). Olsen and Bergland [27], in a study of postmenopausal women using different types of exercise (water aerobics and judo) with 12 months of intervention, demonstrated that RT presented the best results (p < 0.05) for lumbar BMD, balance, and QoL (OPAQ) compared to the other exercises and the CG. Of the 14 studies included in this systematic review, 4 studies were part of the meta-analysis. Evstigneeva et al. [22] and Stanghelle et al. [30] analyzed the quality of life using the QUALEFFO-41. These studies [22,30] showed favorable results (p < 0.05) with the multicomponent training intervention when compared to the CG (Figure 2). Additionally, two studies [22,23] evaluated balance with the TUG test. Both of them showed improvements (p < 0.05) with the multicomponent training intervention when compared to the CG (Figure 3). A limitation of the present systematic review to be highlighted is that some studies did not use a double-blind randomization process. Furthermore, some studies investigated patients with and without fractures, which may interfere with the time course and optimization of results.
Other limitations to be considered are the different intervention protocols presented and the lack of data from some studies [22,25,27] regarding the confirmation of osteoporosis. The lack of standardized outcomes among the selected studies is another limitation. Moreover, only a small number of studies were included in the meta-analysis. Thus, these findings should be interpreted with caution when prescribing physical exercise for women with osteoporosis.

Conclusions

Physical exercise involving multicomponent training in women with osteoporosis can improve BMD, strength, flexibility, balance, functional fitness, and QoL, and reduce the risk of falls. Other types of physical exercise (aerobic, resistance, and flexibility) were also presented in this review for this population. The results showed the importance of applying different forms of physical exercise as a treatment for osteoporosis in older women. Therefore, a physical exercise program that aims to stimulate different physical qualities in training sessions can promote musculoskeletal health and QoL in this population. Future studies are recommended to investigate excess body weight, low mobility, and rheumatic diseases, as they may be related to bone remodeling and to the effects of physical exercise on the health of older women with osteoporosis. Moreover, it is suggested to design and apply a multicomponent exercise training intervention for women with osteoporosis to determine whether there are positive effects on BMD.

CINAHL search string: AB (osteoporosis OR bone density OR bone loss) AND AB (elderly OR aged OR older OR elder OR geriatric) AND AB (treatment OR intervention OR therapy) AND AB (exercise OR physical activity)
Monte Carlo Methods for X-ray Dispersive Spectrometers

We discuss multivariate Monte Carlo methods appropriate for X-ray dispersive spectrometers. Dispersive spectrometers have many advantages for high resolution spectroscopy in the X-ray band. Analysis of data from these instruments is complicated by the fact that the instrument response functions are multi-dimensional and relatively few X-ray photons are detected from astrophysical sources. Monte Carlo methods are the natural solution to these challenges, but techniques for their use are not well developed. We describe a number of methods to produce a highly efficient and flexible multivariate Monte Carlo. These techniques include multi-dimensional response interpolation and multi-dimensional event comparison. We discuss how these methods have been extensively used in the XMM-Newton Reflection Grating Spectrometer in-flight calibration program. We also show several examples of a Monte Carlo applied to observations of clusters of galaxies and elliptical galaxies with the XMM-Newton observatory.

Introduction

The X-ray band (∼0.1 to 10 keV, or ∼1 to 100 Å) contains the K and L shell transitions of abundant metals. This allows the use of a wealth of spectral diagnostics in order to understand the physical conditions of highly ionized plasmas in astrophysical systems. Historically, X-ray instruments have used both dispersive and non-dispersive spectrometers in order to resolve individual spectral features. Non-dispersive spectrometers, such as proportional counters, CCDs, and calorimeters, directly measure the energy of the incident photon by dissipative processes. In contrast, dispersive spectrometers, such as gratings or crystals, measure the wavelength of the incident photons by recording the distance the incident photon was dispersed from a nominal focus. Dispersive spectrometers have higher resolution at lower energies and non-dispersive spectrometers have higher resolution at higher energies.
Development of instruments utilizing either approach has proceeded in parallel and probably will continue for the foreseeable future. Both the XMM-Newton (Jansen et al. [4]) and Chandra (Weisskopf et al. [6]) observatories contain dispersive spectrometers, which now routinely resolve emission or absorption lines from individual ions. Both observatories have collected hundreds of high resolution spectra of many different kinds of X-ray sources and have dramatically changed our physical understanding of many of these sources. The Reflection Grating Spectrometers (RGS; den Herder et al. [3]) on XMM-Newton consist of two arrays of reflection gratings placed behind nested mirror shells. X-rays are dispersed to an array of CCDs. Chandra has two arrays of transmission gratings, the low energy transmission grating (LETG) and the high energy transmission gratings (HETG). The X-rays are also dispersed to either an array of CCDs or a micro-channel plate. A variety of new techniques to analyze data from these dispersive spectrometers are being developed which differ from many approaches previously used in X-ray astronomy. In this presentation, we focus on a number of multivariate Monte Carlo techniques which were developed for use with the Reflection Grating Spectrometers. These methods have proved particularly useful both in the analysis of extended sources, such as galaxy clusters and elliptical galaxies, and as part of the in-flight calibration program. These techniques have allowed the use of modeling approaches previously done with Monte Carlos, but rarely applied directly to the data.

The Nature of Data Sets Obtained with a Slitless Dispersive X-ray Spectrometer

Individual photons are collected with X-ray dispersive spectrometers. Each photon's position on the detector (β, y), detector energy (e), and arrival time (t) are recorded. We restrict attention to non-variable sources here, so we will ignore t.
The β-axis is aligned along the dispersion axis, and its value is then related to the wavelength of the incident photon by the dispersion equation. The quantity y is the off-axis direction perpendicular to the dispersion axis, which we will also refer to as cross-dispersion. Photons dispersed by the grating obey the dispersion equation,

mλ = d (cos β − cos α).

Here d is the grating spacing, β is the dispersion angle, α is the incident angle related to the direction of the source on the sky, and m is an integer representing a spectral order. If the photon energy is crudely measured non-dispersively by the detector, and α is known, the measurement of β gives a unique determination of the photon wavelength at high resolution. A diagram of the angles and a reflection grating is shown in Figure 1. For an extended source the situation gets more complex, as we discuss in detail below. In that case, α is not uniquely known, causing an ambiguity in the (β, λ) relation. The quantity y then also contains useful information about the spatial distribution perpendicular to the dispersion axis. An example of the data of an RGS observation of a galaxy cluster, Sérsic 159-03, is shown in Figure 2. The corresponding image for a non-dispersive spectrometer is seen in Figure 3. It is immediately apparent that what is seen in a dispersive spectrometer is very different from what is seen on the sky. Typically, observations contain between 10^3 and 10^6 photons. All three measurements (β, y, e) contain useful information in the case of an extended source. The three-dimensional data space is also always sparsely filled. This means the number of photons is always less than the number of effective resolution elements (10^7 to 10^9). Clearly, such sparse multi-dimensional data are common in other applications.

Instrument Response Calculation

The standard method of response calculation in X-ray astronomy is to convolve the model for the spectrum, S, with a response kernel, R (see, e.g., Arnaud [2]).
This can be expressed as

P(λ') = ∫ R(λ'|λ) S(λ) dλ.

Here P is the probability of a photon being detected with a given wavelength, R is the response kernel, and S is the input spectral model. It should be noted that R and S are rarely analytic functions. The integral is normally computed numerically by converting the above expression into a sum. P is then compared directly with the detected data histogram.

Figure 2. Raw RGS data for the galaxy cluster Sérsic 159-03. The plot consists of three panels for each of the two-dimensional projections of the three-dimensional data. The dispersion coordinate (β) vs. cross-dispersion coordinate (y) shows the dispersed spectral image. It is blurred in the cross-dispersion direction due to the size of the source. The three curved lines in the dispersion coordinate vs. CCD energy plot show the first, second, and third order dispersed spectra. The four horizontal lines are the Al K and F K calibration sources. Most of the photons in this image are due to Bremsstrahlung. A darker region in the first order curved line is due to Fe L lines.

Figure 3. X-ray image of the galaxy cluster Sérsic 159-03, using the EPIC-MOS detector. This is the same source as in Figure 2.

Figures 4 and 5 show the RGS response functions for an on-axis monochromatic source. When we extend this calculation to the case of an extended source, it becomes a three-dimensional integral. Calculating the three-dimensional integral is impractical, since we must integrate over the sky coordinates (α and φ) in addition to the wavelength. This is especially true if one wants to modify the model function, S, iteratively. Often model fitting involves trying many different models and many different parameters. Some approximations can be made, such as assuming the source's spectrum does not vary as a function of spatial position and assuming the response function does not change too much off-axis. These are useful in some circumstances, but the general case requires a Monte Carlo.
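As an aside, the discrete sum form of the one-dimensional response integral can be sketched in a few lines. The array names and grid here are illustrative assumptions, not the actual RGS implementation:

```python
import numpy as np

def predicted_counts(R, S, dlam):
    """Discrete form of P(lam') = integral of R(lam'|lam) S(lam) dlam.

    R    : (n_chan, n_lam) response matrix, R[i, j] ~ R(lam'_i | lam_j)
    S    : (n_lam,) model spectrum evaluated on the wavelength grid
    dlam : width of the wavelength bins

    Returns the expected counts per detector channel, which can be
    compared directly with the detected data histogram.
    """
    return R @ S * dlam
```

Even this one-dimensional version makes the cost of iterating models visible: each trial spectrum requires a full matrix-vector product, which is what motivates the Monte Carlo approach for the extended-source case.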
Each photon can then be chosen from a distribution of S with a λ, α, and φ. This can then be converted to β, y, and e by using probability response functions. In this way, the probabilities of all possible β, y, and e do not have to be calculated at every λ, α, and φ. Below we focus on two sets of challenges that had to be overcome in order to use Monte Carlos for efficient analysis of the data.

Optimization of the Response Calculation

The calculation of a single element of the RGS response is computationally intensive. This is due to the fact that many components of the response, such as the effects of the mirror, grating, and detector, have to be convolved in the calculation. Each usually has its own energy and spatial dependence. For this reason it is extremely useful to save the response probabilities in pre-calculated tables. The probabilities can be looked up for each photon depending on its sky position and wavelength (α, φ, and λ). This decreases the computation time by factors of 10^4 or more.

Monte Carlo Interpolation: Morphing the Probability Functions

An immediate consequence of this, however, is that the multi-dimensional tables, if saved at high enough resolution, take up too much memory. For two-dimensional tables this would not be the case. For the RGS response, however, the six-dimensional function, R, could only be broken into a series of two- or three-dimensional functions. These functions can only be saved in tables with coarse grids. A solution to using the coarse grids, however, is to approximate the probability density functions between grid points as a linear combination of the response at the grid points. With a Monte Carlo, this is accomplished by the following method. Consider that the probability of a photon having a given β for a given λ, P(β|λ), is calculated at two wavelengths, λ1 and λ2, as shown in the top panel in Figure 6.
If we want to know the response at some λ0 (such that λ1 < λ0 < λ2), the response function at λ1 can be used some fraction of the time and the response function at λ2 can be used the other fraction of the time. The fraction is determined by the distance λ0 is from λ1 and λ2. An additional step is needed if the response functions have sharp peaks in them that shift as a function of β. We then shift the output value, β, by the first order behaviour of the response. Say we use the probability distribution at λ2 and choose a β = β2 based on the distribution P(β|λ2). The final value of β is given by

β = β2 + (dβ/dλ)(λ0 − λ2).

dβ/dλ is calculated from the dispersion equation. This method causes the distribution of β to approach the probability distribution shown in the second panel in Figure 6. The combined implementation of these two techniques results in photons being created at 50,000 events per second per GHz of processor speed, and can be generalized to more dimensions.

General Method

Once this Monte Carlo is built, individual models can be used and then compared to the data. A procedure similar to that described in the flowchart in Figure 7 can be adopted.

• First, models are formulated in terms of a set of parameters. These parameters predict probability distributions for the spectral and spatial models.

• Photons are then drawn from these probability distributions and assigned a wavelength and two sky coordinates (λ, α, φ).

• The dispersion, cross-dispersion, and pulse-height values (β, y, e) are predicted from the instrument Monte Carlo. In addition, a background Monte Carlo predicts background events with β, y, and e values.

• Finally, the simulated events are compared with the measured photon events in a variety of ways. Some of these methods are described in the next section. Parameters are modified to improve the consistency of the data with the model.

Event Comparison

There are a variety of methods to compare one set of detected events to another set of simulated events.
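Before turning to these comparisons, the wavelength-morphing step described earlier can be sketched in code. The two `sample_beta*` callables stand in for draws from the tabulated distributions P(β|λ1) and P(β|λ2); all names are illustrative assumptions rather than the RGS software:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_beta_morphed(lam0, lam1, lam2, sample_beta1, sample_beta2, dbeta_dlam):
    """Draw a dispersion angle at lam0 (lam1 <= lam0 <= lam2) by mixing
    the tabulated samplers at the two grid wavelengths.  The chosen draw
    is shifted by the first-order behaviour of the response, so sharp
    features track the dispersion relation between grid points."""
    frac = (lam0 - lam1) / (lam2 - lam1)   # probability of using the lam2 table
    if rng.random() < frac:
        return sample_beta2() + dbeta_dlam * (lam0 - lam2)
    return sample_beta1() + dbeta_dlam * (lam0 - lam1)
```

At the grid wavelengths themselves the mixture collapses to the stored table, so the scheme is an exact interpolant at the grid points and a linear blend in between.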
Transforming and Cutting the Data

A major advantage of the Monte Carlo approach, which has been recognized for some time, is that the simulated events can be selected and transformed in ways identical to what is done with the real data. There is then no need to worry about biasing the data or misinterpreting the detection, because a Monte Carlo already explicitly takes these data analysis steps into account. In our case, the transformation of the data into a binned spectrum takes several steps. For an extended source, the assignment of wavelengths is not unique. The effect of various data selection cuts is difficult to correct for by any method other than an explicit Monte Carlo. With the Monte Carlo, however, wavelength can be assigned by fixing the incidence angle, α, in the same way for the simulated photons and the detected photons. The normalization is properly calculated by the direct sampling Monte Carlo method. A two-sample χ² statistic can be constructed to compare the model with the data. If V1j is the observed data value in the jth bin, V2j is the model value, and there are n events in the data and m in the model, then the χ² statistic is given by

χ² = Σj [√(m/n) V1j − √(n/m) V2j]² / (V1j + V2j).

This method has been applied in several cases. An important result is shown in Figure 8 in the study of cooling-flows in massive clusters of galaxies (Peterson et al. [5]). Here it has been shown that the simple thermodynamic prediction for the soft X-ray spectrum of cooling-flows fails to reproduce the observed data. The model consists of the following. An isothermal envelope of hot plasma surrounds a three-dimensional cooling region with a different X-ray spectrum. The second X-ray spectrum consists of the unique model for an optically-thin multiphase isobaric cooling plasma. Photons are chosen based on the plasma emissivity and then projected on the sky.
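A minimal sketch of a two-sample χ² comparison of this kind, with the √(m/n) and √(n/m) factors putting the two histograms on a common scale, might look as follows. The function name is illustrative, and bins with no events at all are skipped here to avoid dividing by zero:

```python
import numpy as np

def two_sample_chi2(V1, V2):
    """Two-sample chi-square between a binned event list V1 (n events
    total) and a binned simulation V2 (m events total)."""
    V1 = np.asarray(V1, dtype=float)
    V2 = np.asarray(V2, dtype=float)
    n, m = V1.sum(), V2.sum()
    keep = (V1 + V2) > 0                  # skip bins with no events at all
    num = (np.sqrt(m / n) * V1[keep] - np.sqrt(n / m) * V2[keep]) ** 2
    return float((num / (V1[keep] + V2[keep])).sum())
```

Because the comparison is between two noisy histograms, simulating many more model events than detected events (large m) reduces the statistical noise contributed by the model side.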
An important aspect of the Monte Carlo method in this case was that the straightforward theoretical description of the cluster had different spectra at each spatial position.

Multi-dimensional Cramér-von Mises

The two-sample Cramér-von Mises statistic, which measures the summed squared difference between the empirical cumulative distributions of the detected and simulated events over the combined sample, works for one-dimensional data and is useful for sparse data. The multi-dimensional analog of this statistic is obtained by using a linear combination of the measured values (β, y, and e) in each dimension, and using that linear combination as the one-dimensional variable in the statistic. Various linear combinations can be used to emphasize one dimension over another, and it appears that summing over the various combinations results in a robust multi-dimensional statistic. A series of simulations in Figure 9 demonstrates the use of this statistic.

Emission Line Profiles

A final method of comparison involves explicitly looking at the cross-dispersion profile of an individual emission line. Again, the precise interpretation of such a distribution is difficult without using a Monte Carlo to select the events and simulate the effect of the data selection. This is seen in the analysis of the elliptical galaxy NGC 4636 (Xu et al. [7]). Individual compressed emission line images can be seen in Figure 10. Differences in the spatial distribution of certain emission lines can be used to provide joint constraints on physical properties of the source. Figure 11 shows three ratios of emission lines in the cross-dispersion direction. The first panel is an indication that the X-ray plasma is cooler in the center of the elliptical galaxy. The second panel is an indication that one emission line is optically thick and photons have been redistributed on the sky due to resonant line scattering. The third panel indicates that oxygen and iron have a similar spatial distribution. This yields joint constraints on the density and temperature structure and provides robust determinations of the abundances of various metals.
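The projection trick behind the multi-dimensional Cramér-von Mises comparison can be sketched as follows; the normalization and all names are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def cvm_two_sample(x, y):
    """One-dimensional two-sample Cramér-von Mises statistic: summed
    squared difference of the two empirical CDFs over the pooled sample."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    n, m = len(x), len(y)
    pooled = np.concatenate([x, y])
    Fn = np.searchsorted(x, pooled, side="right") / n   # empirical CDF of x
    Gm = np.searchsorted(y, pooled, side="right") / m   # empirical CDF of y
    return n * m / (n + m) ** 2 * float(np.sum((Fn - Gm) ** 2))

def cvm_multidim(events_a, events_b, directions):
    """Sum the 1-D statistic over projections of (beta, y, e) triples
    onto several direction vectors, emphasizing one axis or another."""
    return sum(cvm_two_sample(events_a @ d, events_b @ d) for d in directions)
```

Choosing the direction vectors is how one dimension is emphasized over another; summing over several directions gives the robust combined statistic described above.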
Future Work

An undesirable aspect of this method is that convergence of the model is complicated by the fact that the model has statistical noise in it as well. This is partly overcome by simulating far more events in the model than are detected in the data. Additionally, careful consideration of which parameters affect which parts of the data allows the vast parameter space to be searched more efficiently. We expect new methods, however, could develop this further.

Figure 9. An example of model iteration using the Cramér-von Mises multivariate statistic. The top five plots show the projections of the data for five separate simulations. Each set of three plots has the same meaning as the plots in Figure 2. The center simulation is a 1 keV isothermal beta model of a cluster of galaxies plus an instrument background model. The top and bottom plots are the same simulation but with a larger and smaller, respectively, background fraction. The left and right plots are 0.5 and 2.0 keV simulations. The bottom contour plot shows the value of the Cramér-von Mises statistic for various combinations of the temperature and the background fraction of the model, like those shown in the simulations above. It is compared to the simulation in the center plot. The contour plot then demonstrates that the statistic is capable of finding the right solution.

Figure 10. Cross-dispersion vs. dispersion angle images of emission lines of the elliptical galaxy NGC 4636. The vertical axis is the spatial distribution in the cross-dispersion axis in each emission line, whereas the horizontal axis is the wavelength. The images are (from left to right) Ne X, Ne IX, Fe XVIII 2p-3d, Fe XVII 2p-3d, Fe XVIII 2p-3s, Fe XVII 2p-3s, and O VIII.

Figure 11. Ratios of the number of photons as a function of cross-dispersion angle for various emission lines. This plot is made by selecting narrow wavelength regions (horizontal axis) in Figure 10 and then plotting the ratio of the number of counts.
The same thing is done for a simulation with a temperature and density distribution and resonant scattering Monte Carlo.